...
Production environment
Since GALILEO100 is a general-purpose system used by many users at the same time, long production jobs must be submitted through a queuing system. This guarantees that access to the resources is as fair as possible.
Roughly speaking, there are two different modes to use an HPC system: Interactive and Batch. For a general discussion see the section "Production Environment".
Interactive
A serial program can be executed in the standard UNIX way:
> ./program
This is allowed only for very short runs, since the interactive environment set on the login nodes has a 10-minute time limit: for longer runs please use the "batch" mode.
A parallel program can be executed interactively only within an "Interactive" SLURM job, requested with the "salloc" command: the job is queued and scheduled as any other job, but, once it starts, the standard input, output, and error streams are connected to the terminal session from which salloc was launched.
For example, to start an interactive session with the MPI program "myprogram", using one node and two processors, you can launch the command:
> salloc -N 1 --ntasks-per-node=2 -A <account_name>
SLURM will then schedule your job to start, and your shell will be unresponsive until the requested resources are allocated to you. If not specified otherwise, the default time limit for this kind of job is one hour.
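If you need a longer allocation, you can request it explicitly (within the partition limits) with the --time option, for example:
> salloc -N 1 --ntasks-per-node=2 --time=02:00:00 -A <account_name>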
When the shell returns a prompt inside the compute node, you can execute your program by typing:
> srun ./myprogram
(srun is recommended over mpirun in this environment)
SLURM automatically exports the environment variables defined in the source shell, so if you need to run your program "myprogram" in a controlled environment (e.g. with specific library paths or options), you can prepare the environment in the login shell and be sure to find it again in the interactive shell on the compute node.
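For example, assuming your program needs a custom library path (the path below is purely illustrative), you can export it before requesting the allocation and it will still be set when you run on the compute node:
> export LD_LIBRARY_PATH=/path/to/my/libs:$LD_LIBRARY_PATH
> salloc -N 1 --ntasks-per-node=2 -A <account_name>
> srun ./myprogram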
On systems using SLURM, you can submit a job script, e.g. script.x, using the command:
> sbatch script.x
You can get a list of defined partitions with the command:
> sinfo
For more information and examples of job scripts, see section Batch Scheduler SLURM.
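As a reference, a minimal sketch of such a script could look as follows (job name, resources and account are placeholders to adapt to your case; without a partition directive the job goes to the default partition):
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --time=01:00:00
#SBATCH --account=<account_name>

srun ./myprogram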
Submitting serial Batch Jobs
The partition will be available when the system enters full production.
Graphic session
The configuration of the RCM environment is in progress. This guide will be completed as soon as the final configuration is implemented.
Submitting parallel Batch Jobs
To run parallel batch jobs on GALILEO100 you need to specify the partition and the qos that are described in this user guide.
If you do not specify the partition, your jobs will try to run on the default partition g100_usr_prod.
The minimum number of cores you can request for a batch job is 1. The maximum number of cores you can request corresponds to 16 nodes. The maximum walltime you can request is 24 hours. Defaults are as follows:
If you do not specify the walltime (by means of the #SBATCH --time directive), a default value of 30 minutes will be assumed.
If you do not specify the number of cores (by means of the #SBATCH -n directive), a default value of 1 core will be assumed.
If you do not specify the amount of memory (by means of the #SBATCH --mem directive), a default value of 7800 MB per core will be assumed.
The maximum memory per node is 375300 MB (about 366.5 GB) for thin and viz nodes, and about 3 TB for fat nodes.
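For example, to set all three values explicitly in a job script (the values below are only illustrative) you could add:
#SBATCH --time=04:00:00
#SBATCH -n 4
#SBATCH --mem=31200
where --time replaces the 30-minute default walltime, -n the single default core, and --mem (in MB, per node) the default of 7800 MB per core.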
Processor affinity:
Processor affinity, or CPU pinning, enables the binding of processes and threads to a CPU (or group of CPUs). It is crucial to ensure the correct affinity in order to avoid over-allocating the CPUs, which significantly degrades performance. This becomes a critical matter when you request a full node but, for your own reasons (memory needs, etc.), do not use all of its cores.
The following indications apply when running your executables with srun, which is the recommended option over mpirun. We refer to a hybrid MPI/OpenMP case.
Given your optimal value of OMP_NUM_THREADS and the number of MPI processes, to obtain a full node request a number of tasks such that (--ntasks-per-node * --cpus-per-task) = 48, as in the example script below.
- To avoid over-allocation of cores by the processes, rely on the --cpu-bind=cores option of srun (you can skip it if you use all the requested cores)
- To enforce the thread affinity, use the Intel environment variable KMP_AFFINITY or the OpenMP variable OMP_PLACES
- To distribute the MPI tasks consecutively with respect to the sockets, use the -m block:block option of srun (or the equivalent sbatch directive #SBATCH -m block:block)
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12
#SBATCH --ntasks-per-socket=6
#SBATCH --cpus-per-task=4
#SBATCH --account=<your_account>
module load autoload intelmpi/oneapi-2021--binary
export OMP_NUM_THREADS=4
export KMP_AFFINITY=compact # or OMP_PLACES=cores
srun --cpu-bind=cores -m block:block <your_exe>
Use of GPUs on Galileo100
To be defined soon.
Users with reserved resources
Users of projects that require reserved resources (such as industrial users or users associated with an agreement involving dedicated resources) will be assigned the QOS qos_ind.
By specifying both this QOS and the partition g100_spc_prod in the submission script, users belonging to the allowed projects will run their jobs on the reserved nodes of the g100_spc_prod partition, with the features and limits imposed for their particular account.
#SBATCH --partition=g100_spc_prod
#SBATCH --qos=qos_ind
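For example, a complete job-script header for such a project could look like the following sketch (node and task counts are placeholders, to be adapted to the resources reserved for your account):
#!/bin/bash
#SBATCH --partition=g100_spc_prod
#SBATCH --qos=qos_ind
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48
#SBATCH --time=24:00:00
#SBATCH --account=<your_account>

srun ./myprogram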
Summary
In the following table, you can find all the main features and limits imposed on the SLURM partitions and QOS.
| SLURM partition | QOS | # cores per job | max walltime | max running jobs per user / max n. of cpus/nodes per user | max memory per node (MB) | priority | notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| g100_usr_interactive | noQOS | 2 nodes | 08:00:00 | / | 7800 | | on nodes with GPUs |
| g100_usr_prod | noQOS | min = 1, max = 32 nodes | 24:00:00 | | 375300 | | runs on thin and fat nodes |
| g100_usr_prod | g100_qos_dbg | min = 1, max = 96 (2 nodes) | 02:00:00 | | 375300 | 95 | runs on thin and fat nodes |
| g100_usr_prod | g100_qos_bprod | min = 1537 (33 nodes), max = 3072 (64 nodes) | 24:00:00 | | 375300 | 85 | runs on thin and fat nodes |
| g100_spc_prod | qos_ind (every account has a valid QOS qos_ind to access this partition) | depending on the QOS used by the particular account | 24:00:00 | / | 375300 | | partition dedicated to specific kinds of users; runs on thin nodes |
| g100_meteo_prod | qos_meteo | | 24:00:00 | | 375300 | | partition reserved to meteo services, NOT open to production; runs on thin nodes |
...