...

SLURM

| Partition | Job QOS | # cores / # GPUs per job | Max walltime | Max running jobs per user / max n. of cpus/nodes/GPUs per user | Priority | Notes |
|---|---|---|---|---|---|---|
| m100_all_serial (default partition) | normal | max = 1 core, 1 GPU; max mem = 7600 MB | 04:00:00 | 4 cpus / 1 GPU | 40 | |
| m100_usr_prod | m100_qos_dbg | max = 2 nodes | 02:00:00 | 2 nodes / 64 cpus / 8 GPUs | 45 | runs on 12 nodes |
| m100_usr_prod | normal | max = 16 nodes | 24:00:00 | | 40 | runs on 880 nodes |
| m100_usr_prod | m100_qos_bprod | min = 17 nodes, max = 256 nodes | 24:00:00 | 256 nodes | 85 | runs on 512 nodes |
| m100_usr_preempt | normal | max = 16 nodes | 24:00:00 | | 1 | runs on 99 nodes |
| m100_fua_prod (EUROFUSION) | m100_qos_fuadbg | max = 2 nodes | 02:00:00 | | 45 | runs on 12 nodes |
| m100_fua_prod (EUROFUSION) | normal | max = 16 nodes | 24:00:00 | | 40 | runs on 68 nodes |
| m100_fua_prod (EUROFUSION) | m100_qos_fuabprod | max = 32 nodes | 24:00:00 | | 40 | runs on 64 nodes at the same time |
| all partitions | qos_special | > 32 nodes | > 24:00:00 | | 40 | request to superc@cineca.it |
| all partitions | qos_lowprio | max = 16 nodes | 24:00:00 | | 0 | active projects with exhausted budget |
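As an illustration, a batch script requesting the m100_usr_prod partition with the m100_qos_dbg QOS (staying within the 2-node / 02:00:00 limits in the table above) could look like the following sketch. The account name and executable are placeholders, and the per-node task/GPU split is one reasonable choice, not a prescribed layout:

```shell
#!/bin/bash
#SBATCH --partition=m100_usr_prod   # production partition from the table above
#SBATCH --qos=m100_qos_dbg          # debug QOS: max 2 nodes, max walltime 02:00:00
#SBATCH --nodes=2                   # at the QOS limit of 2 nodes
#SBATCH --ntasks-per-node=4         # example layout: one task per GPU
#SBATCH --gres=gpu:4                # 4 GPUs per node = 8 GPUs total (per-user limit)
#SBATCH --time=02:00:00             # must not exceed the QOS max walltime
#SBATCH --account=<project_name>    # placeholder: replace with your project account

srun ./my_app                       # placeholder executable
```

Jobs exceeding a QOS limit (for example more than 2 nodes under m100_qos_dbg) are rejected at submission, so the directives should be checked against the table before submitting.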

...