...

SLURM partition | Job QOS | # cores / # GPUs per job | max walltime | max running jobs per user / max n. of cpus/nodes/GPUs per user | priority | notes
--- | --- | --- | --- | --- | --- | ---
m100_all_serial (def. partition) | normal | max = 1 core, 1 GPU; max mem = 7600 MB | 04:00:00 | 4 cpus / 1 GPU | 40 | 
m100_usr_prod | m100_qos_dbg | max = 2 nodes | 02:00:00 | 2 nodes / 64 cpus / 8 GPUs | 45 | runs on 12 nodes
m100_usr_prod | normal | max = 16 nodes | 24:00:00 | 10 jobs | 40 | runs on 880 nodes
m100_usr_prod | m100_qos_bprod | min = 17 nodes, max = 256 nodes | 24:00:00 | 256 nodes | 85 | runs on 256 nodes
m100_fua_prod (EUROFUSION) | m100_qos_fuadbg | max = 2 nodes | 02:00:00 |  | 45 | runs on 12 nodes
m100_fua_prod (EUROFUSION) | normal | max = 16 nodes | 24:00:00 |  | 40 | runs on 68 nodes
m100_fua_prod (EUROFUSION) | m100_qos_fuabprod | max = 32 nodes | 24:00:00 |  | 40 | runs on 64 nodes
 | qos_special | > 16 nodes | > 24:00:00 |  | 40 | request to superc@cineca.it
 | qos_lowprio | max = 16 nodes | 24:00:00 |  | 0 | active projects with exhausted budget
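
The partition and QOS are selected in the batch script. As a minimal sketch only (the project account, executable name, and exact task/GPU counts below are placeholders, not values taken from this table), a job targeting the m100_usr_prod partition with the m100_qos_dbg QOS could be submitted like this; the request must stay within the 2-node and 02:00:00 limits of that QOS:

#!/bin/bash
#SBATCH --job-name=dbg_run
#SBATCH --partition=m100_usr_prod     # production partition from the table above
#SBATCH --qos=m100_qos_dbg            # debug QOS: max 2 nodes, max 02:00:00 walltime
#SBATCH --nodes=1                     # within the 2-node QOS limit
#SBATCH --ntasks-per-node=4           # placeholder task layout
#SBATCH --gres=gpu:2                  # GPUs are requested via gres (placeholder count)
#SBATCH --time=01:00:00               # within the 02:00:00 QOS limit
#SBATCH --account=<project_account>   # placeholder: replace with your project budget
#SBATCH --output=job.%j.out

srun ./my_application                 # placeholder executable

Submit with "sbatch job.sh"; choosing a different row of the table (e.g. the normal QOS of m100_usr_prod) means adjusting --qos, --nodes and --time to the corresponding limits.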

...