...

SLURM partition: lrd_all_serial (default)
  Job QOS: normal
  # cores / # nodes per job: max = 4 physical cores (8 logical CPUs); max mem = 30800 MB
  Max walltime: 04:00:00
  Max nodes/cores/mem per user: 1 node / 4 cores / 30800 MB
  Priority: 40
  Notes: no GPUs; Hyperthreading x2

SLURM partition: dcgp_usr_prod

  Job QOS: normal
    # cores / # nodes per job: max = 16 nodes
    Max walltime: 24:00:00
    Max nodes per account: 512
    Priority: 40

  Job QOS: dcgp_qos_dbg
    # cores / # nodes per job: max = 2 nodes
    Max walltime: 00:30:00
    Max nodes/cores per user: 2 nodes / 224 cores
    Max nodes per account: 512
    Priority: 80

  Job QOS: dcgp_qos_bprod
    # cores / # nodes per job: min = 17 nodes; max = 128 nodes
    Max walltime: 24:00:00
    Max nodes per user: 128
    Max nodes per account: 512
    Priority: 60
    Notes: runs on 1536 nodes; the minimum request is 17 FULL nodes

  Job QOS: dcgp_qos_lprod
    # cores / # nodes per job: max = 3 nodes
    Max walltime: 4-00:00:00
    Max nodes/cores per user: 3 nodes / 336 cores
    Max nodes per account: 512
    Priority: 40

Note: a maximum of 512 nodes per account is also imposed on the dcgp_usr_prod partition, meaning that, for each account, all the jobs associated with it cannot run on more than 512 nodes at the same time. If you submit a job that would exceed this limit, it will stay pending until enough of the account's running jobs complete to bring the total back within the limit.
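For reference, a job on the dcgp_usr_prod partition could be submitted with a batch script along these lines. This is only a minimal sketch: the job name, node/task counts, walltime, executable and account name are placeholders, and the 112 tasks per node simply mirror the per-user core limits quoted above (224 cores / 2 nodes, 336 cores / 3 nodes).

#!/bin/bash
#SBATCH --job-name=dcgp_job
#SBATCH --partition=dcgp_usr_prod    # CPU-only DCGP partition
#SBATCH --qos=normal                 # default QOS: max 16 nodes, 24:00:00 walltime
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=112        # placeholder: one MPI task per physical core
#SBATCH --time=01:00:00              # must stay within the QOS walltime limit
#SBATCH --account=<ACCOUNT_NAME>     # replace with your project account

module load intel-oneapi-compilers/<VERSION>
module load intel-oneapi-mpi/<VERSION>

srun ./myexec

The script is submitted with "sbatch"; the dcgp_qos_dbg, dcgp_qos_bprod and dcgp_qos_lprod QOSes can be requested with the --qos directive when the corresponding limits apply.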

Programming environment

LEONARDO Data Centric compute nodes are not equipped with GPUs, so GPU applications can run only on the Booster partition. The programming environment includes a set of compilers and of debugging and profiling tools suitable for programming on CPUs.
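For example, the available compiler and MPI modules can be listed with the standard module command (the module names below are the ones used in the compilation example further down; the exact versions shown depend on the current software stack):

$ module avail intel-oneapi-compilers
$ module avail intel-oneapi-mpi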

...

e.g. Compiling Fortran code:

$ module load intel-oneapi-compilers/<VERSION>
$ module load intel-oneapi-mpi/<VERSION>
$ mpiifort -o myexec myprog.f90   # uses the ifort compiler
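The resulting executable can then be launched with srun from inside a job allocation on the DCGP partition, e.g. (a minimal sketch; the task count is arbitrary):

$ srun -n 4 ./myexec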

...