...

| SLURM partition | Job QOS | # cores / # GPUs per job | Max walltime | Max running jobs per user / max n. of cores/nodes/GPUs per user | Priority | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| lrd_all_serial (default; not yet available) | normal | max = 1 core, 1 GPU | 04:00:00 | 4 cpus / 1 GPU | 40 |  |
|  | qos_install | max = 16 cores | 04:00:00 | max = 16 cores, 1 job per user | 40 | request to superc@cineca.it |
| boost_usr_prod | normal | max = 32 nodes | 24:00:00 |  | 40 | runs on all nodes |
|  | boost_qos_dbg | max = 2 nodes | 00:30:00 | 2 nodes / 64 cores / 8 GPUs | 80 | runs on 24 nodes |
|  | boost_qos_bprod | min = 33 nodes, max = 256 nodes * | 24:00:00 | 256 nodes * | 60 | runs on 512 nodes; min is 33 FULL nodes |
|  | boost_qos_lprod | max = 3 nodes | 4-00:00:00 | 3 nodes / 12 GPUs | 40 |  |
  • * For the "boost_usr_prod" partition: you can use at most 32 nodes (MaxTime=24:00:00). Please request the boost_qos_bprod QOS to go up to 512 nodes (MaxTime=10:00:00). This limit will be in place until May 25, when it will be reduced to 256 nodes with MaxTime=24:00:00 (production environment). Example job scripts requesting these QOSes are sketched below.
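
As an illustration, a minimal sbatch script for a short debug run on the Booster partition might look like the following. This is a sketch, not an official template: the account name and executable are placeholders, and the per-node request (32 cores, 4 GPUs) is inferred from the 2 nodes / 64 cores / 8 GPUs limit in the table above.

```bash
#!/bin/bash
#SBATCH --job-name=dbg_test
#SBATCH --partition=boost_usr_prod
#SBATCH --qos=boost_qos_dbg        # higher priority (80); limits: 2 nodes, 00:30:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32       # assumed: 32 cores per Booster node
#SBATCH --gres=gpu:4               # assumed: 4 GPUs per Booster node
#SBATCH --time=00:30:00
#SBATCH --account=<your_project>   # placeholder: replace with your project budget

srun ./my_app                      # placeholder executable
```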

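Similarly, a production job above the 32-node limit of the normal QOS would request boost_qos_bprod. Again a hedged sketch with placeholder names, assuming whole-node allocation to satisfy the "min is 33 FULL nodes" rule noted in the table:

```bash
#!/bin/bash
#SBATCH --job-name=big_run
#SBATCH --partition=boost_usr_prod
#SBATCH --qos=boost_qos_bprod      # required above 32 nodes; min = 33 FULL nodes
#SBATCH --nodes=64
#SBATCH --ntasks-per-node=32       # assumed node layout: 32 cores, 4 GPUs
#SBATCH --gres=gpu:4
#SBATCH --exclusive                # request whole nodes, per the FULL-nodes rule
#SBATCH --time=24:00:00
#SBATCH --account=<your_project>   # placeholder

srun ./my_app                      # placeholder executable
```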
...