...

| SLURM partition | Job QOS | # cores / # GPUs per job | max walltime | max running jobs | max n. of nodes/cores/mem per user / per account | priority | notes |
|---|---|---|---|---|---|---|---|
| lrd_all_serial (default) | normal | max = 4 physical cores (8 logical CPUs); max mem = 30800 MB | 04:00:00 | | 1 node / 4 cores / 30800 MB per user | 40 | No GPUs; Hyperthreading x2 |
| dcgp_usr_prod | normal | max = 16 nodes | 24:00:00 | | 512 nodes per account | 40 | |
| dcgp_usr_prod | dcgp_qos_dbg | max = 2 nodes | 00:30:00 | | 2 nodes / 224 cores per user; 512 nodes per account | 80 | |
| dcgp_usr_prod | dcgp_qos_bprod | min = 17 nodes; max = 128 nodes | 24:00:00 | | 128 nodes per user; 512 nodes per account | 60 | runs on 1536 nodes; min is 17 FULL nodes |
| dcgp_usr_prod | dcgp_qos_lprod | max = 3 nodes | 4-00:00:00 (4 days) | | 3 nodes / 336 cores per user; 512 nodes per account | 40 | |
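For reference, a job is routed to a given QOS with the --qos directive at submission time. Below is a minimal sketch of a batch script targeting the dcgp_usr_prod partition with the dcgp_qos_dbg QOS, staying within the 2-node / 00:30:00 limits in the table above; the job name, the account name, and the executable (my_app) are placeholders to replace with your own.

```bash
#!/bin/bash
#SBATCH --job-name=dbg_test          # placeholder job name
#SBATCH --partition=dcgp_usr_prod    # DCGP production partition
#SBATCH --qos=dcgp_qos_dbg           # debug QOS: max 2 nodes, 30 minutes
#SBATCH --nodes=2                    # within the 2-node QOS limit
#SBATCH --ntasks-per-node=112        # one task per core (224 cores / 2 nodes)
#SBATCH --time=00:30:00              # must not exceed the QOS max walltime
#SBATCH --account=<your_account>     # replace with your project account

srun ./my_app                        # placeholder executable
```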

Note: a maximum of 512 nodes per account is also imposed on the dcgp_usr_prod partition, meaning that, for each account, all the jobs associated with it cannot run on more than 512 nodes at the same time (if you submit a job that would exceed this limit, it will remain pending until enough of the account's running jobs finish to bring its usage back under the cap).
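If a job appears stuck in PENDING because of these caps, the configured limits and the scheduler's stated reason can be inspected with standard SLURM tools. A sketch, assuming a reasonably recent Slurm that supports squeue --me (the QOS names match the table above):

```bash
# Show the limits actually configured for the DCGP QOSes
sacctmgr show qos dcgp_qos_dbg,dcgp_qos_bprod,dcgp_qos_lprod \
    format=Name,Priority,MaxWall,MaxTRESPU%40

# List your pending jobs with the scheduler's reason for each
# (e.g. a QOS or association limit being hit)
squeue --me -t PENDING -o "%.12i %.12P %.14q %.8T %r"
```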

...