...
| SLURM partition | Job QOS | # cores / # GPUs per job | Max walltime | Max running jobs / max n. of nodes/cores/mem per user or account | Priority | Notes |
|---|---|---|---|---|---|---|
| lrd_all_serial (default) | normal | max = 4 physical cores, max mem = 30800 MB | 04:00:00 | 1 node / 4 cores / 30800 MB per user | 40 | No GPUs; Hyperthreading x2 |
| dcgp_usr_prod | normal | max = 16 nodes | 24:00:00 | 512 nodes per account | 40 | |
| dcgp_usr_prod | dcgp_qos_dbg | max = 2 nodes | 00:30:00 | 2 nodes / 224 cores per user; 512 nodes per account | 80 | |
| dcgp_usr_prod | dcgp_qos_bprod | min = 17 nodes, max = 128 nodes | 24:00:00 | 128 nodes per user; 512 nodes per account | 60 | runs on 1536 nodes; min is 17 FULL nodes |
| dcgp_usr_prod | dcgp_qos_lprod | max = 3 nodes | 4-00:00:00 | 3 nodes / 336 cores per user; 512 nodes per account | 40 | |
Note: a maximum of 512 nodes per account is also imposed on the dcgp_usr_prod partition, meaning that, for each account, all the jobs associated with it cannot run on more than 512 nodes at the same time (if you submit a job that would exceed this limit, it will stay pending until enough nodes used by the other jobs of the same account are freed).
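For reference, a minimal batch job sketch targeting the dcgp_usr_prod partition with the dcgp_qos_dbg QOS (the account name and executable are placeholders to adapt; the node and task counts simply follow the 2 nodes / 224 cores limit in the table, i.e. 112 cores per DCGP node):

#!/bin/bash
#SBATCH --partition=dcgp_usr_prod
#SBATCH --qos=dcgp_qos_dbg
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=112
#SBATCH --time=00:30:00
#SBATCH --account=<ACCOUNT_NAME>

srun ./myexec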
Programming environment
LEONARDO Data Centric compute nodes are not equipped with GPUs, so applications running on GPUs can be used only on the Booster partition. The programming environment includes a set of compilers, debuggers and profiling tools suitable for programming on CPUs.
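As a quick check, the modules available for a given suite can be listed with the standard module command (shown here for the Intel oneAPI compilers; the exact module names and versions depend on the installed software stack):

$ module avail intel-oneapi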
...
E.g., compiling Fortran code:
$ module load intel-oneapi-compilers/<VERSION>
$ module load intel-oneapi-mpi/<VERSION>
$ mpiifort -o myexec myprog.f90 (uses the ifort compiler)
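Similarly, a minimal sketch for compiling C code with the same modules (mpiicc is the Intel MPI wrapper around the classic icc compiler; the source file name is illustrative):

$ module load intel-oneapi-compilers/<VERSION>
$ module load intel-oneapi-mpi/<VERSION>
$ mpiicc -o myexec myprog.c (uses the icc compiler)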
...