...

MARCONI

| Partition | SLURM partition | QOS | # cores per job | max walltime | max running jobs per user / max n. of cpus/nodes per user | max memory per node (MB) | priority | HBM/clustering mode | notes |
|---|---|---|---|---|---|---|---|---|---|
| front-end | bdw_all_serial (default partition) | no QOS | max = 6 (max mem = 18000 MB) | 04:00:00 | 6 cpus | 18000 | 40 | | |
| front-end | bdw_all_serial | qos_install | max = 16 | 04:00:00 | max = 16 cores / 1 job per user | 100 GB | 40 | | request to superc@cineca.it |
| A1 | | qos_rcm | min = 1, max = 48 | 03:00:00 | 1/48 | 182000 | - | | runs on 24 nodes shared with the debug queue on SKL |
| A2 | knl_usr_dbg | no QOS | min = 1 node, max = 2 nodes | 00:30:00 | 5/5 | 86000 | 40 | cache | runs on 144 dedicated nodes |
| A2 | knl_usr_prod | no QOS | min = 1 node, max = 195 nodes | 24:00:00 | 1000 nodes | 86000 | 40 | cache | |
| A2 | knl_usr_prod | knl_qos_bprod | min = 196 nodes, max = 1024 nodes | 24:00:00 | 1/1000 | 86000 | 85 | cache | #SBATCH -p knl_usr_prod / #SBATCH --qos=knl_qos_bprod |
| A2 | knl_usr_prod | qos_special | > 1024 nodes | > 24:00:00 | max = 195 nodes per user | 86000 | 40 | cache | #SBATCH --qos=qos_special / request to superc@cineca.it |
| A3 | skl_usr_dbg | no QOS | min = 1 node, max = 2 nodes | 00:30:00 | 4/4 | 182000 | 40 | | runs on 8 dedicated nodes; max 1 job per user |
| A3 | skl_usr_prod | no QOS | min = 1 node, max = 32 nodes | 24:00:00 | 32 nodes | 182000 | 40 | | |
| A3 | skl_usr_prod | skl_qos_bprod | min = 33 nodes, max = 64 nodes | 24:00:00 | 1/64 (1 job per account) | 182000 | 85 | | #SBATCH -p skl_usr_prod / #SBATCH --qos=skl_qos_bprod |
| A3 | skl_usr_prod | qos_special | > 64 nodes | > 24:00:00 | max = 64 nodes per user | 182000 | 40 | | #SBATCH --qos=qos_special / request to superc@cineca.it |
| A3 | skl_usr_prod | qos_lowprio | max = 64 nodes | 24:00:00 | 64 nodes | 182000 | 0 | | #SBATCH --qos=qos_lowprio |
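
As a worked example, the sketch below shows how the directives from the table combine in a batch script for an SKL production job larger than 32 nodes, which requires the skl_qos_bprod QOS on top of the skl_usr_prod partition. The job name, account, task layout, and executable are placeholders, not values prescribed by the table:

```bash
#!/bin/bash
#SBATCH --job-name=bprod_job          # placeholder job name
#SBATCH --account=<your_account>      # replace with your CINECA project account
#SBATCH --partition=skl_usr_prod      # SKL production partition (from the table)
#SBATCH --qos=skl_qos_bprod           # required for jobs of 33-64 nodes
#SBATCH --nodes=33                    # minimum size admitted by skl_qos_bprod
#SBATCH --ntasks-per-node=48          # one MPI task per SKL core (assumed layout)
#SBATCH --time=24:00:00               # max walltime for this QOS
#SBATCH --mem=182000                  # max memory per node in MB (from the table)

srun ./my_application                 # placeholder executable
```

Submit the script with sbatch. The same pattern applies to large KNL jobs by swapping in knl_usr_prod and knl_qos_bprod (min = 196 nodes, max memory 86000 MB); jobs exceeding the bprod limits need qos_special, which is granted only on request to superc@cineca.it.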

...