...

  • If you do not specify the walltime (by means of the #SBATCH --time directive), a default value of 30 minutes will be assumed.
  • If you do not specify the number of cores (by means of the #SBATCH -n directive), a default value of 36 will be assumed.
  • If you do not specify the amount of memory (as the value of the #SBATCH --mem directive), a default value of 3000MB will be assumed.
  • Even though you can ask for up to 123000MB, we strongly suggest limiting the requested memory to 118000MB, to avoid memory swapping to disk with serious performance degradation. A minimal job script illustrating these directives is sketched below.
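
For illustration only, a batch script that sets these three directives explicitly could look as follows (the requested values are just examples within the limits above, and the executable name is a placeholder):

#!/bin/bash
#SBATCH --time=00:30:00      # walltime; 30 minutes is the default if omitted
#SBATCH -n 36                # number of cores; 36 is the default if omitted
#SBATCH --mem=118000         # memory in MB; keep within the suggested 118000MB limit

srun ./my_program            # placeholder executable name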

The special partition is designed for non-ordinary types of jobs, and users need to be enabled in order to use it. Please write to superc@cineca.it if you think you need to use it.

...

The maximum memory which can be requested is 86000MB for cache nodes. We strongly suggest not to exceed this value, to avoid memory swapping to disk with the associated performance degradation.

For example, to request a single KNL node in a production queue the following SLURM job script can be used:
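
A minimal sketch of such a script, assuming one full KNL node (68 cores) in the knl_usr_prod partition; the account name, walltime and executable are placeholders to be adapted:

#!/bin/bash
#SBATCH -N 1                      # one KNL node
#SBATCH --ntasks-per-node=68      # all 68 cores of the node
#SBATCH -p knl_usr_prod           # KNL production partition
#SBATCH --time=01:00:00           # walltime (up to 24:00:00 in this partition)
#SBATCH --mem=86000               # memory in MB, within the limit for cache nodes
#SBATCH -A <account_name>         # placeholder account/project name

srun ./my_knl_program             # placeholder executable name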

...

The maximum memory which can be requested is 182000MB, and this value guarantees that no memory swapping will occur.
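
For instance, such a request can be made explicit with the memory directive:

#SBATCH --mem=182000      # maximum memory per node in MB; no swapping occurs at this value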

...


MARCONI partitions

Summary of the SLURM partitions and QOS, with the limits on cores per job, walltime, running jobs and cpus per user, and memory per node (values in MB):

front-end: bdw_all_serial (default partition)
  • QOS: bdw_all_serial
  • # cores per job: 1
  • max walltime: 04:00:00
  • max running jobs per user: 4
  • max memory per node (MB): 3000

A1: bdw_all_rcm
  • QOS: bdw_all_rcm
  • # cores per job: min = 1, max = 144
  • max walltime: 03:00:00
  • max running jobs / max cpus per user: 1 / 144
  • max memory per node (MB): 123000 (value suggested: 118000)
  • notes: runs on 24 nodes shared with the debug queue

A1: bdw_usr_dbg
  • QOS: bdw_usr_dbg
  • # cores per job: min = 1, max = 144
  • max walltime: 02:00:00
  • max running jobs / max cpus per user: 4 / 144
  • max memory per node (MB): 123000 (value suggested: 118000)
  • notes: managed by route; runs on 24 nodes shared with the visualrcm queue

A1: bdw_usr_prod
  • QOS: bdw_usr_prod
  • # cores per job: min = 1, max = 2304
  • max walltime: 24:00:00
  • max running jobs / max cpus per user: 20 / 2304
  • max memory per node (MB): 123000 (value suggested: 118000)

A1: bdw_usr_prod, QOS bdw_qos_bprod
  • # cores per job: min = 2305, max = 6000
  • max walltime: 24:00:00
  • max running jobs / max cpus per user: 1 / 6000
  • max memory per node (MB): 123000 (value suggested: 118000)
  • notes: request with
      #SBATCH -p bdw_usr_prod
      #SBATCH --qos=bdw_qos_bprod

A1: bdw_usr_prod, QOS bdw_qos_special
  • # cores per job: min = 1, max = 36
  • max walltime: 180:00:00
  • max memory per node (MB): 123000 (value suggested: 118000)
  • notes: ask superc@cineca.it; request with
      #SBATCH -p bdw_usr_prod
      #SBATCH --qos=bdw_qos_special

A2: knl_usr_dbg
  • QOS: knl_usr_dbg
  • # cores per job: min = 1, max = 136 (2 nodes)
  • max walltime: 00:30:00
  • max running jobs / max cpus per user: 5 / 340
  • max memory per node (MB): 86000 (mcdram=cache; value suggested: 86000)
  • HBM/clustering mode: mcdram=cache, numa=quadrant
  • notes: runs on 144 dedicated nodes

A2: knl_usr_prod
  • QOS: knl_usr_prod
  • # cores per job: min = 1, max = 13260 (195 nodes)
  • max walltime: 24:00:00
  • max running jobs / max cpus per user: 20 / 68000
  • max memory per node (MB): 86000 (mcdram=cache; value suggested: 86000)
  • HBM/clustering mode: mcdram=cache, numa=quadrant

A2: knl_usr_prod, QOS knl_qos_bprod
  • # cores per job: min > 13260, max = 68000 (1000 nodes)
  • max walltime: 24:00:00
  • max running jobs: 1 per user, 2 per account
  • max memory per node (MB): 86000 (mcdram=cache; value suggested: 86000)
  • HBM/clustering mode: mcdram=cache, numa=quadrant
  • notes: ask superc@cineca.it; request with
      #SBATCH -p knl_usr_prod
      #SBATCH --qos=knl_qos_bprod
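
For example, a large Broadwell job using the bdw_qos_bprod QOS combines the partition and QOS directives from the table with a core count in the allowed range (2305-6000); the account name, core count and executable below are placeholders to be adapted:

#!/bin/bash
#SBATCH -p bdw_usr_prod           # Broadwell production partition
#SBATCH --qos=bdw_qos_bprod       # QOS for jobs larger than 2304 cores
#SBATCH -n 2340                   # example core count within the 2305-6000 range
#SBATCH --time=24:00:00           # maximum walltime for this QOS
#SBATCH --mem=118000              # suggested memory per node (MB)
#SBATCH -A <account_name>         # placeholder account/project name

srun ./my_parallel_program        # placeholder executable name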

 

...