
SLURM

| partition | Job QOS | # cores / # GPUs per job | max walltime | max running jobs per user | max n. of nodes / cores / GPUs per Grp | priority | notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| boost_fua_prod | normal | max = 16 nodes | 24:00:00 |  |  | 40 |  |
|  | boost_qos_fuabprod | min = 17 nodes, max = 32 nodes | 24:00:00 |  | 49 nodes / 1568 cores / 196 GPUs | 60 | runs on 49 nodes; min is 17 FULL nodes |
|  | boost_qos_fuadbg | max = 2 nodes | 00:10:00 |  | 2 nodes / 64 cores / 8 GPUs | 40 | runs on 2 nodes |
|  | qos_fualowprio | max = 16 nodes | 08:00:00 |  |  | 0 | automatically added to the active accounts with exhausted budget; to be used with the LUAL7_LOWPRIO account |
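As an illustration of how these limits translate into a job script, the following is a minimal sketch of a production submission under the boost_qos_fuabprod QOS. The account name and executable are placeholders, and the per-node geometry (32 cores and 4 GPUs per node) is an assumption inferred from the per-group limits above, not stated in this section:

```shell
#!/bin/bash
#SBATCH --job-name=fua_prod_run
#SBATCH --partition=boost_fua_prod
#SBATCH --qos=boost_qos_fuabprod    # production QOS: min 17, max 32 FULL nodes
#SBATCH --nodes=17                  # at least 17 full nodes required by this QOS
#SBATCH --ntasks-per-node=32        # assumption: 32 cores per node
#SBATCH --gres=gpu:4                # assumption: 4 GPUs per node
#SBATCH --time=24:00:00             # max walltime for this QOS
#SBATCH --account=<your_account>    # placeholder: replace with your project account

srun ./my_app                       # placeholder executable
```

When the project budget is exhausted, the same script can be resubmitted under the automatically added low-priority QOS by switching to `--qos=qos_fualowprio` and `--account=LUAL7_LOWPRIO` (max 16 nodes, 08:00:00 walltime, priority 0).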

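The per-group limits in the table are mutually consistent under the assumed node geometry (32 cores and 4 GPUs per node, an assumption inferred from the ratios, not stated here); a quick arithmetic check:

```shell
# Assumed node geometry: 32 cores and 4 GPUs per node.
nodes=49
echo "$((nodes * 32)) cores"   # 49 nodes x 32 cores
echo "$((nodes * 4)) GPUs"     # 49 nodes x 4 GPUs
```

The same ratios hold for the debug pool: 2 nodes give 64 cores and 8 GPUs.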