...

SLURM partition | Job QOS | # cores / # GPUs per job | max walltime | max running jobs per user / max n. of cpus/nodes/GPUs per user | priority | notes
m100_all_serial (default partition) | normal | max = 1 core, 1 GPU; max mem = 7600 MB | 04:00:00 | 4 cpus / 1 GPU | 40 |
m100_fua_prod | m100_qos_fuadbg | max = 2 nodes | 02:00:00 | | 45 | runs on 12 nodes
m100_fua_prod | normal | max = 16 nodes | 24:00:00 | | 40 | runs on 68 nodes
m100_fua_prod | m100_qos_fuabprod | max = 32 nodes | 24:00:00 | | 40 | runs on 64 nodes
m100_fua_prod | qos_special | > 16 nodes | > 24:00:00 | | 40 | request to superc@cineca.it
m100_fua_prod | qos_fualowprio | max = 16 nodes | 08:00:00 | | 0 | active projects with exhausted budget

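As an illustration of how these limits map onto a job submission, below is a minimal sketch of a batch script for the m100_fua_prod partition under the m100_qos_fuadbg QOS, staying within its 2-node / 02:00:00 limits. The account name and the executable are placeholders, and the per-node GPU request is an assumption about the node configuration, not a value taken from the table above.

  #!/bin/bash
  #SBATCH --job-name=fua_debug_test
  #SBATCH --partition=m100_fua_prod     # partition listed in the table above
  #SBATCH --qos=m100_qos_fuadbg         # debug QOS: max 2 nodes, 02:00:00 walltime
  #SBATCH --nodes=2                     # within the 2-node limit of this QOS
  #SBATCH --time=01:30:00               # below the 02:00:00 walltime cap
  #SBATCH --gres=gpu:4                  # assumed GPUs per node; check the node specification
  #SBATCH --account=<your_account>      # placeholder: your project/budget account
  #SBATCH --output=%x_%j.out

  srun ./my_application                 # placeholder executable

The same structure applies to the other QOSes: production runs use --qos=m100_qos_fuabprod within its 32-node / 24-hour limits, projects with an exhausted budget submit with --qos=qos_fualowprio at priority 0, and jobs exceeding 16 nodes or 24 hours require qos_special, to be requested via superc@cineca.it.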
...