...

The following table lists the main features and limits of the M100 partitions.

Note: "core" refers to a physical CPU with its 4 hardware threads (HTs); "cpu" refers to a logical CPU (1 HT). Each node has 32 cores / 128 cpus.
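Because SLURM counts logical cpus (HTs), one physical core corresponds to 4 cpus on M100. As an illustrative sketch (the directive values are examples, not prescribed defaults), a job that wants all 32 cores of one node with one task per physical core could request:

```shell
#!/bin/bash
#SBATCH --nodes=1                # one M100 node: 32 cores / 128 cpus
#SBATCH --ntasks-per-node=32     # one task per physical core
#SBATCH --cpus-per-task=4        # bind the 4 HTs (logical cpus) of each core to the task
```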


SLURM partition | Job QOS | # cores / # GPUs per job | max walltime | max running jobs per user / max n. of cores/nodes/GPUs per user | priority | notes
--------------- | ------- | ------------------------ | ------------ | --------------------------------------------------------------- | -------- | -----
m100_all_serial (def. partition) | normal | max = 1 core, 1 GPU; max mem = 7600 MB | 04:00:00 | 4 cpus / 1 GPU | 40 |
m100_usr_prod | normal | max = 16 nodes | 24:00:00 | | 40 | runs on 880 nodes
m100_usr_prod | m100_qos_dbg | max = 2 nodes | 02:00:00 | 2 nodes / 64 cores / 8 GPUs | 80 | runs on 12 nodes
m100_usr_prod | m100_qos_bprod | min = 17 nodes, max = 256 nodes | 24:00:00 | 256 nodes | 60 | runs on 512 nodes; min is 17 FULL nodes (544 cores, 2176 cpus)
m100_usr_preempt | normal | max = 16 nodes | 24:00:00 | | 1 | runs on 99 nodes
m100_fua_prod (EUROFUSION) | normal | max = 16 nodes | 24:00:00 | | 40 | runs on 87 nodes
m100_fua_prod (EUROFUSION) | m100_qos_fuadbg | max = 2 nodes | 02:00:00 | | 45 | runs on 12 nodes
m100_fua_prod (EUROFUSION) | m100_qos_fuabprod | max = 32 nodes | 24:00:00 | | 40 | runs on 64 nodes at the same time
all partitions | qos_special | > 32 nodes | > 24:00:00 | | 40 | request to superc@cineca.it
all partitions | qos_lowprio | max = 16 nodes | 24:00:00 | | 0 | active projects with exhausted budget
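As a hedged example of how the limits above translate into a job submission, a minimal batch script targeting the debug QOS of the production partition might look like the following (the account name and executable are placeholders, not values from this guide):

```shell
#!/bin/bash
#SBATCH --partition=m100_usr_prod
#SBATCH --qos=m100_qos_dbg            # debug QOS: max 2 nodes, 02:00:00 walltime
#SBATCH --nodes=2
#SBATCH --gres=gpu:4                  # 4 GPUs per node
#SBATCH --time=02:00:00
#SBATCH --account=<project_account>   # placeholder: your project account
srun ./my_application                 # placeholder executable
```

Jobs exceeding the per-QOS limits (for example more than 2 nodes under m100_qos_dbg) are rejected at submission, so the partition/QOS pair should be chosen to match the table above.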

...