...
| SLURM partition | Job QOS | # cores / # GPUs per job | max walltime | max running jobs per user / max n. of cores/nodes/GPUs per user | priority | notes |
|---|---|---|---|---|---|---|
| lrd_all_serial (not yet available) | normal | max = 1 core, 1 GPU | 04:00:00 | 4 cpus / 1 GPU | 40 | |
| | qos_install | max = 16 cores | 04:00:00 | max = 16 cores, 1 job per user | 40 | request to superc@cineca.it |
| boost_usr_prod | normal | max = 32 nodes | 24:00:00 | | 40 | runs on all nodes |
| | boost_qos_dbg | max = 2 nodes | 00:30:00 | 2 nodes / 64 cores / 8 GPUs | 80 | runs on 24 nodes |
| | boost_qos_bprod | min = 33 nodes, max = 256 nodes * | 24:00:00 * | 256 nodes * | 60 | runs on 512 nodes; min is 33 FULL nodes |
| | boost_qos_lprod | max = 3 nodes | 4-00:00:00 | 3 nodes / 12 GPUs | 40 | |
- *On the "boost_usr_prod" partition you can use at most 32 nodes (MaxTime=24:00:00). Please request the boost_qos_bprod QOS to go up to 512 nodes (MaxTime=10:00:00). This limit will be in place until May 25, the start of the production environment, when it will be reduced to 256 nodes with MaxTime=24:00:00.
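As a concrete illustration, below is a minimal sketch of a batch script targeting the boost_usr_prod partition with the boost_qos_bprod QOS. The account name, job name, and executable are placeholders, not values from this guide; the per-node core and GPU counts (32 cores, 4 GPUs) are inferred from the boost_qos_dbg row above (2 nodes / 64 cores / 8 GPUs).

```bash
#!/bin/bash
#SBATCH --partition=boost_usr_prod   # partition from the table above
#SBATCH --qos=boost_qos_bprod        # large-production QOS: min 33, max 256 nodes *
#SBATCH --nodes=33                   # this QOS requires at least 33 FULL nodes
#SBATCH --ntasks-per-node=32         # one task per core (32 cores per node)
#SBATCH --gres=gpu:4                 # all 4 GPUs of each node
#SBATCH --time=24:00:00              # must not exceed the QOS MaxTime
#SBATCH --account=<your_account>     # placeholder: your project account
#SBATCH --job-name=bprod_run         # placeholder job name

srun ./my_application                # placeholder executable
```

Jobs that fit within 32 nodes should instead use the default "normal" QOS, which needs no explicit `--qos` directive.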
...