...

| SLURM partition | Job QOS | # cores / # GPUs per job | max walltime | max running jobs per user / max n. of cores/nodes/GPUs per user | priority | notes |
| --- | --- | --- | --- | --- | --- | --- |
| m100_all_serial (default) | normal | max = 1 core, 1 GPU; max mem = 7600 MB | 04:00:00 | 4 cpus / 1 GPU | 40 | |
| m100_all_serial | qos_install | max = 16 cores | 04:00:00 | max = 16 cores, 1 job per user | 40 | request to superc@cineca.it |
| m100_usr_prod | normal | max = 16 nodes | 24:00:00 | | 40 | runs on 880 nodes |
| m100_usr_prod | m100_qos_dbg | max = 2 nodes | 02:00:00 | 2 nodes / 64 cores / 8 GPUs | 80 | runs on 12 nodes |
| m100_usr_prod | m100_qos_bprod | min = 17 nodes, max = 256 nodes | 24:00:00 | 256 nodes | 60 | runs on 512 nodes; the minimum is 17 FULL nodes (544 cores, 2176 cpus) |
| m100_usr_preempt | normal | max = 16 nodes | 24:00:00 | | 1 | runs on 99 nodes |
| m100_fua_prod (EUROFUSION) | normal | max = 16 nodes | 24:00:00 | | 40 | runs on 87 nodes |
| m100_fua_prod | m100_qos_fuadbg | max = 2 nodes | 02:00:00 | | 45 | runs on 12 nodes |
| m100_fua_prod | m100_qos_fuabprod | min = 17 nodes, max = 32 nodes | 24:00:00 | | 40 | runs on 64 nodes at a time |
| all partitions | qos_special | > 32 nodes | > 24:00:00 | | 40 | request to superc@cineca.it |
| all partitions (NO EUROFUSION) | qos_lowprio | max = 16 nodes | 24:00:00 | | 0 | active projects with exhausted budget; request to superc@cineca.it |
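For illustration, a minimal batch script targeting the m100_usr_prod partition with the debug QOS, staying within the limits in the table above. The account name, task layout, and executable are placeholders, not values from this page:

```bash
#!/bin/bash
#SBATCH --partition=m100_usr_prod    # production partition (see table above)
#SBATCH --qos=m100_qos_dbg           # debug QOS: max 2 nodes / 64 cores / 8 GPUs, 02:00:00 walltime
#SBATCH --nodes=2                    # at the 2-node QOS limit
#SBATCH --ntasks-per-node=4          # assumed layout: one task per GPU
#SBATCH --gres=gpu:4                 # 4 GPUs per node, 8 in total (the QOS cap)
#SBATCH --time=01:30:00              # must stay within the 02:00:00 QOS walltime
#SBATCH --account=<project_account>  # placeholder: your CINECA budget account

srun ./my_application                # placeholder executable
```

Depending on how each QOS is configured, jobs exceeding these limits are typically rejected at submission or left pending, so it is worth checking the requested resources against the table before submitting.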

...