...

MARCONI: SLURM partitions and QOS

MARCONI partition | SLURM partition | QOS | # cores/nodes per job | max walltime | max running jobs per user / max n. of cpus (nodes) per user | max memory per node (MB) | priority | HBM/clustering mode | notes
front-end | bdw_all_serial (default partition) | noQOS | max = 6 (max mem = 18000 MB) | 04:00:00 | 6 cpus | 18000 | | |
A1 | bdw_all_rcm | noQOS | min = 1, max = 144 | 03:00:00 | 1 / 144 | 118000 | | | runs on 24 nodes shared with the debug queue
A1 | bdw_usr_dbg | noQOS | min = 1, max = 144 | 02:00:00 | 4 / 144 | 118000 | | | managed by route; runs on 24 nodes shared with the visualrcm queue
A1 | bdw_usr_prod | noQOS | min = 1, max = 2304 | 24:00:00 | 20 / 2304 | 118000 | | |
A1 | bdw_usr_prod | bdw_qos_bprod | min = 2305, max = 6000 | 24:00:00 | 1 / 6000 | 118000 | | | request with #SBATCH -p bdw_usr_prod and #SBATCH --qos=bdw_qos_bprod
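Jobs larger than 2304 cores on the A1 (Broadwell) partition must request the bdw_qos_bprod QOS explicitly. A minimal batch-script sketch of such a request; the account name, node count, and executable are illustrative placeholders (not taken from this page), and 36 cores per Broadwell node is an assumption:

```shell
#!/bin/bash
#SBATCH --partition=bdw_usr_prod   # A1 Broadwell production partition
#SBATCH --qos=bdw_qos_bprod        # required for jobs of 2305-6000 cores
#SBATCH --nodes=70                 # 70 x 36 = 2520 cores, above the 2304 threshold
#SBATCH --ntasks-per-node=36       # assumes 36-core Broadwell nodes
#SBATCH --time=24:00:00            # QOS maximum walltime
#SBATCH --account=<your_account>   # placeholder: your CINECA budget account
#SBATCH --job-name=bprod_job

srun ./my_app                      # placeholder executable
```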
A1 | bdw_usr_prod | bdw_qos_special | | > 24:00:00 | | 118000 | | | ask superc@cineca.it; request with #SBATCH -p bdw_usr_prod and #SBATCH --qos=bdw_qos_special
A2 | knl_usr_dbg | noQOS | min = 1 node, max = 2 nodes | 00:30:00 | 5 / 5 | 86000 | | cache/flat | runs on 144 dedicated nodes
A2 | knl_usr_prod | noQOS | min = 1 node, max = 195 nodes | 24:00:00 | 1000 nodes | 86000 | | cache/flat |
A2 | knl_usr_prod | knl_qos_bprod | min = 196 nodes, max = 1024 nodes | 24:00:00 | 1 / 1000 | 86000 | | cache/flat | request with #SBATCH -p knl_usr_prod and #SBATCH --qos=knl_qos_bprod
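For the A2 (KNL) rows, the HBM/clustering-mode column ("cache/flat") refers to the MCDRAM and cluster configuration of the nodes. On SLURM systems such modes are typically selected as node features via --constraint; the sketch below is an assumption (the exact feature strings are site-defined, so check the MARCONI documentation):

```shell
#SBATCH --partition=knl_usr_prod
#SBATCH --constraint=flat   # assumed feature name for MCDRAM flat mode; "cache" for cache mode
```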
A3 | skl_usr_dbg | noQOS | min = 1 node, max = 4 nodes | 00:30:00 | 4 / 4 | 182000 | | | runs on 24 dedicated nodes
A3 | skl_usr_prod | noQOS | min = 1 node, max = 64 nodes | 24:00:00 | 64 nodes | 182000 | | |
A3 | skl_usr_prod | skl_qos_bprod | min = 65, max = 128 | 24:00:00 | 1 / 128 (2 jobs per account) | 182000 | | | request with #SBATCH -p skl_usr_prod and #SBATCH --qos=skl_qos_bprod

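The A1 size thresholds above can be encoded in a small helper. This is a hypothetical convenience function (not part of SLURM or of CINECA's tooling) that maps a Broadwell core count to the partition/QOS combination the table prescribes:

```shell
#!/usr/bin/env bash
# Hypothetical helper: pick the MARCONI-A1 (Broadwell) submission target
# for a given total core count, following the limits in the table above.
pick_bdw_queue() {
  local cores=$1
  if [ "$cores" -ge 1 ] && [ "$cores" -le 2304 ]; then
    echo "bdw_usr_prod"                        # plain production partition, noQOS
  elif [ "$cores" -ge 2305 ] && [ "$cores" -le 6000 ]; then
    echo "bdw_usr_prod --qos=bdw_qos_bprod"    # big-production QOS required
  else
    echo "out of range: ask superc@cineca.it" >&2
    return 1
  fi
}

pick_bdw_queue 144    # -> bdw_usr_prod
pick_bdw_queue 4096   # -> bdw_usr_prod --qos=bdw_qos_bprod
```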
...