...
Production environment
Since GALILEO100 is a general-purpose system used by several users at the same time, long production jobs must be submitted through a queuing system. This guarantees that access to the resources is as fair as possible.
Roughly speaking, there are two different modes to use an HPC system: Interactive and Batch. For a general discussion see the section "Production Environment".
Interactive
A serial program can be executed in the standard UNIX way:
> ./program
This is allowed only for very short runs: the interactive environment on the login nodes has a 10-minute time limit. For longer runs please use the "batch" mode.
A parallel program can be executed interactively only by submitting an "Interactive" SLURM batch job, using the "srun" command: the job is queued and scheduled as any other job, but when executed, the standard input, output, and error streams are connected to the terminal session from which srun was launched.
For example, to start an interactive session with the MPI program "myprogram", using one node and two processors, you can launch the command:
> salloc -N 1 --ntasks-per-node=2 -A <account_name>
SLURM will then schedule your job to start, and your shell will be unresponsive until free resources are allocated for you. If not specified, the default time limit for this kind of jobs is one hour.
When the shell returns a prompt inside the compute node, you can execute your program by typing:
> srun ./myprogram
The default SLURM MPI type has been set to PMI2.
(srun is recommended over mpirun in this environment.)
SLURM automatically exports the environment variables you defined in the source shell, so if you need to run your program "myprogram" in a controlled environment (i.e. with specific library paths or options), you can prepare the environment in the login shell and be sure to find it again in the interactive shell on the compute node.
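The interactive workflow described above can be sketched as follows; "my_account" and "myprogram" are placeholders for your own project account and executable.

```shell
# Request one node with two tasks; the shell blocks until the
# allocation is granted (default time limit: one hour).
salloc -N 1 --ntasks-per-node=2 -A my_account

# Once the prompt returns on the compute node, launch the MPI program:
srun ./myprogram

# Release the allocation when you are done:
exit
```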
On systems using SLURM, you can submit a script script.x using the command:
> sbatch script.x
You can get a list of defined partitions with the command:
> sinfo
For more information and examples of job scripts, see section Batch Scheduler SLURM.
Submitting serial Batch Jobs
The partition will be available in the full production.
Graphic session
The configuration of the RCM environment is in progress. This guide will be completed as soon as the final configuration is implemented.
Submitting parallel Batch Jobs
To run parallel batch jobs on GALILEO100 you need to specify the partition and the qos that are described in this user guide.
If you do not specify the partition, your jobs will try to run on the default partition g100_usr_prod.
The minimum number of cores you can request for a batch job is 1; the maximum is 16 nodes. The maximum walltime you can request is 24 hours. Defaults are as follows:
If you do not specify the walltime (by means of the #SBATCH --time directive), a default value of 30 minutes will be assumed.
If you do not specify the number of cores (by means of the "SBATCH -n" directive), a default value of 1 core will be assumed.
If you do not specify the amount of memory (as the value of the "SBATCH --mem" directive), a default value of 7800 MB per core will be assumed.
The maximum memory per node is 375300 MB (366.5 GB) for thin and viz nodes, and about 3 TB for fat nodes.
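Putting the directives above together, a minimal batch script for the default partition might look like the following sketch; "my_account" and "myprogram" are placeholders for your own project account and executable.

```shell
#!/bin/bash
#SBATCH --partition=g100_usr_prod   # default partition, stated explicitly
#SBATCH --time=01:00:00             # overrides the 30-minute default walltime
#SBATCH -n 4                        # overrides the default of 1 core
#SBATCH --mem=31200                 # overrides the default of 7800 MB per core
#SBATCH --account=my_account        # replace with your project account

srun ./myprogram
```

Submit it with "sbatch script.x" as described above.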
Use of GPUs on Galileo100
To be defined soon.
Users with reserved resources
Users of projects that require reserved resources (such as industrial users or users associated with an agreement that involves dedicated resources) will be assigned the QOS qos_ind.
By using the qos_ind QOS (i.e. specifying it in the submission script) and specifying the partition g100_spc_prod, users associated with the allowed project will run their jobs on reserved nodes in the g100_spc_prod partition, with the features and limits imposed for their particular account.
>#SBATCH --partition=g100_spc_prod
>#SBATCH --qos=qos_ind
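A complete submission script for the reserved partition could look like the following sketch; "my_account" and "myprogram" are placeholders, and the actual limits depend on your account's QOS.

```shell
#!/bin/bash
#SBATCH --partition=g100_spc_prod   # reserved partition
#SBATCH --qos=qos_ind               # QOS assigned to projects with dedicated resources
#SBATCH --account=my_account        # replace with your project account
#SBATCH --time=24:00:00             # maximum walltime for this partition

srun ./myprogram
```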
Summary
In the following table, you can find all the main features and limits imposed on the SLURM partitions and QOS.
SLURM partition | QOS | # cores per job | max walltime | max running jobs per user / max n. of cpus/nodes per user | max memory per node (MB) | priority | notes
g100_usr_interactive | noQOS | max = 2 nodes | 8:00:00 | / | 7800 | | on nodes with GPUs
g100_usr_prod | noQOS | min = 1, max = 32 nodes | 24:00:00 | | 375300 | | runs on thin and fat nodes
g100_usr_prod | g100_qos_dbg | min = 1, max = 96 (2 nodes) | 02:00:00 | | 375300 | 95 | runs on thin and fat nodes
g100_usr_prod | g100_qos_bprod | min = 1537 (33 nodes), max = 3072 (64 nodes) | 24:00:00 | | 375300 | 85 | runs on thin and fat nodes
g100_spc_prod | Every account has a valid QOS qos_ind to access this partition | depending on the QOS used by the particular account | 24:00:00 | / | 375300 | | Partition dedicated to specific kinds of users. Runs on thin nodes
g100_meteo_prod | qos_meteo | | 24:00:00 | | 375300 | | Partition reserved to meteo services, NOT open to production. Runs on thin nodes
Programming environment
The programming environment of GALILEO consists of a choice of compilers for the main scientific languages (Fortran, C and C++), debuggers to help users find bugs and errors in their codes, and profilers to help with code optimization. In general, you must also "load" the proper environment for programming tools such as compilers, since "native" compilers are not available.
...
Compilers
The native compiler suite is the Intel one; the new Intel oneAPI suite is installed on the cluster.
> module load intel
> module list
Currently Loaded Modulefiles:
intel/oneapi-2021--binary
The suite contains the new Intel oneAPI compilers (icx, icpx, ifx), and also the classic compilers (icc, icpc, ifort, ...).
The use of the classic compilers is suggested in this first phase for a smoother migration from other clusters.
In principle, binaries generated on Galileo should work, but we strongly recommend that you reinstall all your software applications, since Galileo100 runs a different operating system (CentOS 8.3).
You can check the complete list of available compilers on GALILEO with the command:
> module avail
checking the "compilers" section.
In general, the available compilers are:
- INTEL (ifort, icc, icpc) : ► module load intel
- PGI - Portland Group (pgf77,pgf90,pgf95,pghpf, pgcc, pgCC): ► module load pgi (profile/advanced)
- GNU (gcc, g77, g95): ► module load gnu
After loading the appropriate module, use the "man" command to get the complete list of the flags supported by the compiler, for example:
> module load intel
> man ifort
There are some flags that are common for all these compilers. Others are more specific. The most common are reported later for each compiler.
- If you want to use a specific library or a particular include file, you have to give their paths, using the following options
-I/path_include_files specify the path of the include files
-L/path_lib_files -l<xxx> specify a library lib<xxx>.a in /path_lib_files
- If you want to debug your code you have to turn off optimization and turn on run-time checks: these flags are described in the following sections.
- If you want to compile your code for normal production, you have to turn on optimization by choosing a higher optimization level:
-O2 or -O3 Higher optimisation levels
Other flags are available for specific compilers and are reported later.
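The -I/-L/-l options above can be combined in a single compile line; in this sketch the paths and the library name "mylib" are hypothetical placeholders for your own installation.

```shell
# Load the compiler environment first:
module load intel

# Compile myprog.c against a library installed under $HOME/mylib:
# headers in $HOME/mylib/include, libmylib.a in $HOME/mylib/lib.
icc -O2 -I$HOME/mylib/include -L$HOME/mylib/lib myprog.c -lmylib -o myprog
```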
INTEL Compiler
Intel family compiler suite is recommended on GALILEO, since the architecture is based on Intel processors and therefore using the Intel compilers may result in a significant improvement in performance and stability of your code. Initialize the environment with the module command:
> module load intel
The names of the Intel compilers are:
- ifort: Fortran77 and Fortran90 compiler
- icc: C compiler
- icpc: C++ compiler
The documentation can be obtained with the man command after loading the relevant module:
> man ifort
> man icc
Some miscellaneous flags are described in the following:
-extend_source    Extend over the 72-column limit of fixed-form F77
-free / -fixed    Free/fixed source form for Fortran
-ip               Enables interprocedural optimization for single-file compilation
-ipo              Enables interprocedural optimization between files (whole-program optimization)
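A typical compile line combining these flags might look like the following sketch; "myprog.f" is a placeholder for your own fixed-form Fortran source.

```shell
module load intel

# Fixed-form Fortran source with extended line length, optimized
# with interprocedural optimization for a single file:
ifort -fixed -extend_source -O3 -ip myprog.f -o myprog
```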
PORTLAND Group (PGI)
Initialize the environment with the module command:
> module load profile/advanced
> module load pgi
The names of the PGI compilers are:
- pgf77: Fortran77 compiler
- pgf90: Fortran90 compiler
- pgf95: Fortran95 compiler
- pghpf: High Performance Fortran compiler
- pgcc: C compiler
- pgCC: C++ compiler
The documentation can be obtained with the man command after loading the relevant module:
> man pgf95
> man pgcc
Some miscellaneous flags are described in the following:
-Mextend            Extend over the 72-column limit of fixed-form F77
-Mfree / -Mfixed    Free/fixed source form for Fortran
-fast               Chooses generally optimal flags for the target platform
-fastsse            Chooses generally optimal flags for a processor that supports SSE instructions
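As with the Intel suite, these flags combine into a single compile line; "myprog.f90" is a placeholder for your own free-form Fortran source.

```shell
module load profile/advanced
module load pgi

# Free-form Fortran source built with generally optimal flags
# for the target platform:
pgf95 -Mfree -fast myprog.f90 -o myprog
```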
GNU compilers
The GNU compilers are always available, but they are not the best optimizing compilers, especially for an Intel-based cluster like GALILEO. The default version is 4.8.5; you do not need to load a module to use it.
For a more recent version of the compiler, initialize the environment with the module command:
> module load gnu
The names of the GNU compilers are:
- g77: Fortran77 compiler
- gfortran: Fortran95 compiler
- gcc: C compiler
- g++: C++ compiler
The documentation can be obtained with the man command:
> man gfortran
> man gcc
Some miscellaneous flags are described in the following:
-ffixed-line-length-132       Extend over the 72-column limit of fixed-form F77
-ffree-form / -ffixed-form    Free/fixed source form for Fortran
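A corresponding GNU compile line might look like the following sketch; "myprog.f" is a placeholder for your own fixed-form Fortran source.

```shell
# Load the module for a more recent GNU version (optional; the
# default 4.8.5 is available without it):
module load gnu

# Fixed-form source with extended line length, optimized build:
gfortran -ffixed-form -ffixed-line-length-132 -O2 myprog.f -o myprog
```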
...