...

The following table lists the main features and limits imposed on the queues/partitions of M100.

| SLURM partition | QOS | # cores / # nodes per job | max walltime | max running jobs per user / max n. of cpus/nodes per user | max memory per node (MB) | priority | notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| m100_all_serial (default partition) | noQOS | max = 6 cores (max mem = 18000 MB) | 04:00:00 | 6 cpus | 7000 | 40 | |
| | qos_rcm | min = 1, max = 48 | 03:00:00 | 1 job / 48 cpus | 182000 | - | to be defined |
| m100_usr_dbg | no QOS | min = 1 node, max = 4 nodes | 00:30:00 | 4 jobs / 4 nodes | 182000 | 40 | runs on 24 dedicated nodes |
| m100_usr_prod | no QOS | min = 1 node, max = 16 nodes | 1-00:00:00 | 64 nodes | 182000 | 40 | |
| | skl_qos_bprod | min = 65 nodes, max = 256 nodes | 24:00:00 | 1 job / 256 nodes; 1 job per account | 182000 | 85 | #SBATCH -p skl_usr_prod; #SBATCH --qos=skl_qos_bprod |
| | qos_special | > 256 nodes | > 24:00:00 | max = 64 nodes for user | 182000 | 40 | #SBATCH --qos=qos_special; request to superc@cineca.it |
| | qos_lowprio | max = 64 nodes | 24:00:00 | 64 nodes | 182000 | 0 | #SBATCH --qos=qos_lowprio |
| m100_usr_preempt | | max = 16 nodes | 08:00:00 | | | 10 | |

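As an illustrative sketch (not taken from the guide itself) of how the limits in the table translate into a batch job, a minimal submission script for the m100_usr_prod partition might look like the following; the account name and executable are placeholders you must replace with your own:

```shell
#!/bin/bash
#SBATCH --partition=m100_usr_prod   # production partition from the table above
#SBATCH --nodes=2                   # within the 1-16 node per-job range
#SBATCH --time=12:00:00             # below the 1-00:00:00 walltime limit
#SBATCH --mem=182000                # max memory per node, in MB
#SBATCH --account=<your_account>    # placeholder: your project account

srun ./my_app                       # placeholder executable
```

A job exceeding these limits (e.g. more than 16 nodes without the appropriate QOS) would be rejected at submission or held in the queue.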


Graphic session


If a graphic session is desired, we recommend using the tool RCM (Remote Connection Manager). For additional information, visit the Remote Visualization section of our User Guide.

...