...
| SLURM partition | QOS | # cores per job | max walltime | max running jobs per user / max n. of cpus/nodes per user | max memory per node (MB) | priority | notes |
|---|---|---|---|---|---|---|---|
| m100_all_serial (default partition) | noQOS | max = 6 (max mem = 18000 MB) | 04:00:00 | 6 cpus | 7000 | 40 | |
| | qos_rcm | min = 1, max = 48 | 03:00:00 | 1/48 | 182000 | - | to be defined |
| m100_usr_dbg | no QOS | min = 1 node, max = 4 nodes | 02:00:00 | 4/4 nodes | 182000 | 40 | |
| | m100_qos_dbg | | 00:30:00 | 24 nodes | 182000 | 45 | runs on 24 dedicated nodes |
| m100_usr_prod | no QOS | min = 1 node, max = 16 nodes | 1-00:00:00 | 64 nodes | 182000 | 40 | |
| | skl_qos_bprod | min = 65 nodes, max = 256 nodes | 24:00:00 | 1/256, 1 job per account | 182000 | 85 | #SBATCH -p skl_usr_prod #SBATCH --qos=skl_qos_bprod |
| | qos_special | > 256 nodes | > 24:00:00 | max = 64 nodes per user | 182000 | 40 | #SBATCH --qos=qos_special, request to superc@cineca.it |
| | qos_lowprio | max = 64 nodes | 24:00:00 | 64 nodes | 182000 | 0 | #SBATCH --qos=qos_lowprio |
| m100_usr_preempt | | max = 16 nodes | 08:00:00 | | | 10 | |
| m100_fua_prod | | max = 16 nodes | 1-00:00:00 | | | | |
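As an illustration of how the partition, QOS, and limits in the table translate into a job script, a minimal sketch of a batch submission is shown below. The job name, account, and executable are placeholders, and the partition/QOS pair is taken from the table's notes column; adjust all values to your own project before submitting with `sbatch`.

```shell
#!/bin/bash
#SBATCH --job-name=bprod_job          # placeholder job name
#SBATCH -p skl_usr_prod               # partition, as given in the notes column
#SBATCH --qos=skl_qos_bprod           # QOS for jobs between 65 and 256 nodes
#SBATCH --nodes=65                    # minimum node count allowed by this QOS
#SBATCH --time=24:00:00               # maximum walltime for this QOS
#SBATCH --account=<account_name>      # placeholder: your project account

srun ./my_application                 # placeholder executable
```

Note that the `#SBATCH` lines are directives read by SLURM at submission time, so the walltime and node count requested here must stay within the limits listed for the chosen QOS or the job will be rejected.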
Graphic session
If a graphic session is desired, we recommend using the RCM (Remote Connection Manager) tool. For additional information, see the Remote Visualization section of our User Guide.
...