...

MARCONI

SLURM partitions and QOS limits. Columns of the original table, given per row below:
partition module | SLURM partition | QOS | # cores per job | max walltime |
max running jobs per user / max n. of cpus per user | max memory per job |
priority | HBM/clustering mode | notes

front-end  bdw_all_serial  (default partition)
    QOS:                 (not applicable)
    # cores per job:     1
    max walltime:        04:00:00
    jobs/cpus per user:  max 12 running jobs; max 4 jobs per user
    max memory per job:  1 GB
    priority:            30

A1  bdw_all_rcm
    QOS:                 (not applicable)
    # cores per job:     min = 1, max = 144
    max walltime:        03:00:00
    jobs/cpus per user:  1 / 144
    max memory per job:  123 GB/node (suggested value: 118 GB/node)
    priority:            40
    notes:               runs on 24 nodes shared with the debug queue

A1  bdw_usr_dbg
    QOS:                 (not applicable)
    # cores per job:     min = 1, max = 144
    max walltime:        02:00:00
    jobs/cpus per user:  4 / 144
    max memory per job:  123 GB/node (suggested value: 118 GB/node)
    priority:            40
    notes:               managed by route; runs on 24 nodes shared with
                         the visualrcm queue

A1  bdw_usr_prod
    QOS:                 (not applicable)
    # cores per job:     min = 1, max = 2304
    max walltime:        24:00:00
    jobs/cpus per user:  20 / 2304
    max memory per job:  123 GB/node (suggested value: 118 GB/node)
    priority:            50

A1  bdw_usr_prod  with QOS bdw_qos_bprod
    # cores per job:     min = 2305, max = 6000
    max walltime:        24:00:00
    jobs/cpus per user:  1 / 6000
    max memory per job:  123 GB/node (suggested value: 118 GB/node)
    priority:            60
    notes:               request with
                         #SBATCH -p bdw_usr_prod
                         #SBATCH --qos=bdw_qos_bprod

A1  bdw_usr_prod  with QOS bdw_qos_special
    # cores per job:     min = 1, max = 36
    max walltime:        180:00:00
    max memory per job:  123 GB/node (suggested value: 118 GB/node)
    priority:            100
    notes:               ask superc@cineca.it; request with
                         #SBATCH -p bdw_usr_prod
                         #SBATCH --qos=bdw_qos_special

A2  knl_usr_dbg
    QOS:                 (not applicable)
    # cores per job:     min = 1, max = 136 (2 nodes)
    max walltime:        00:30:00
    jobs/cpus per user:  5 / 340
    max memory per job:  90 GB/node (mcdram=cache; suggested value: 86 GB/node)
    priority:            40
    HBM/clustering mode: mcdram=cache, numa=quadrant
    notes:               runs on 144 dedicated nodes

A2  knl_usr_prod
    QOS:                 (not applicable)
    # cores per job:     min > 136, max = 13260 (195 nodes)
    max walltime:        24:00:00
    jobs/cpus per user:  20 / 68000
    max memory per job:  90 GB/node (mcdram=cache; suggested value: 86 GB/node)
    priority:            50
    HBM/clustering mode: mcdram=cache, numa=quadrant

A2  knl_usr_prod  with QOS knl_qos_bprod
    # cores per job:     min > 13260, max = 68000 (1000 nodes)
    max walltime:        24:00:00
    jobs per user:       max 1 job per user; max 2 jobs per account
    max memory per job:  90 GB/node (mcdram=cache; suggested value: 86 GB/node)
    priority:            30
    HBM/clustering mode: mcdram=cache, numa=quadrant
    notes:               ask superc@cineca.it; request with
                         #SBATCH -p knl_usr_prod
                         #SBATCH --qos=knl_qos_bprod
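Putting the table's directives together, a minimal job-script sketch for an A1 job large enough to need the bdw_qos_bprod QOS (between 2305 and 6000 cores) might look as follows; the node count, account name, and application command are placeholders, not values from the table:

```shell
#!/bin/bash
#SBATCH -p bdw_usr_prod          # A1 Broadwell production partition
#SBATCH --qos=bdw_qos_bprod      # required once the job exceeds 2304 cores
#SBATCH -N 70                    # hypothetical: 70 nodes x 36 cores = 2520 cores
#SBATCH --ntasks-per-node=36
#SBATCH --mem=118GB              # suggested per-node memory request from the table
#SBATCH -t 24:00:00              # partition maximum walltime
#SBATCH -A <your_account>        # placeholder: your CINECA account

echo "would launch: srun ./my_app"   # replace with the real srun/mpirun line
```

The same pattern applies on A2 with `-p knl_usr_prod` and `--qos=knl_qos_bprod` for jobs above 13260 cores.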
...