In this page:
hostname: login.g100.cineca.it
login01-ext.g100.cineca.it
login02-ext.g100.cineca.it
login03-ext.g100.cineca.it
early availability: 20/07/2021
start of pre-production:
start of production:
Model: Dual-Socket Dell PowerEdge
Architecture: Linux Infiniband Cluster
Cores: 48 cores/node
Starting from March 2021, Galileo was switched off to make room for its more performant successor, Galileo100. The new infrastructure, co-funded by the European ICEI (Interactive Computing e-Infrastructure) project, is a system engineered by DELL.
Compute Nodes:
Login and Service nodes:
10 login nodes and 5 service nodes. All the nodes are interconnected through an Infiniband network, with OPA v10.6, capable of a maximum bandwidth of 100Gbit/s between each pair of nodes.
For more information about accounting, please consult our dedicated section.
On GALILEO100 a linearization policy for the usage of project budgets has been defined and implemented. For each account, a monthly quota is defined as:
monthTotal = (total_budget / total_no_of_months)
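For instance, a hypothetical project granted a total budget of 120,000 core-hours over a 12-month allocation would have monthTotal = 120,000 / 12 = 10,000 core-hours available at full priority each month.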
Starting from the first day of each month, the collaborators of an account can use the monthly quota at full priority. As the budget is consumed, jobs submitted from the account gradually lose priority, until the monthly quota (monthTotal) is exhausted. At that point, their jobs are still considered for execution, but with a lower priority than jobs from accounts that still have some monthly quota left.
This policy is similar to those already applied by other major HPC centres in Europe and worldwide. Its goal is to improve response times by encouraging users to consume the CPU hours assigned to their project evenly over time, in proportion to the project's actual size (total amount of core-hours).
The storage organisation conforms to the CINECA infrastructure (see Section "Data storage and Filesystems"). In addition to the home directory ($HOME), a scratch area ($CINECA_SCRATCH) is defined for each user: a large disk for storing run-time data and files. A $WORK area is defined for each active project on the system, reserved for all the collaborators of the project. This is a safe storage area to keep run-time data for the whole life of the project.
The filesystem organisation is based on Lustre, an open-source parallel file system.
| Area | Total Dimension | Quota | Notes |
|---|---|---|---|
| $HOME | 100 TB | 50 GB per user | |
| $CINECA_SCRATCH | 1 PB | no quota | |
| $WORK | 2 PB | 1 TB per project | |
A temporary local storage area is also available on the compute nodes; it is created when the job starts and is accessible via the environment variable $TMPDIR. For more details please see the dedicated section of UG2.5: Data storage and FileSystems. On Galileo100 the $TMPDIR local area has 293 GB of available space.
$DRES points to the shared repository where Data RESources are maintained. This is a data archive area available only on-request, shared with all CINECA HPC systems and among different projects.
$DRES is not mounted on the compute nodes. This means that you can't access it within a batch job: all data needed during the batch execution has to be moved to $WORK or $CINECA_SCRATCH before the run starts.
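As a minimal sketch (the dataset name below is a placeholder), the data can be staged from a login node before submitting the job:

> cp -r $DRES/<my_dataset> $WORK/
> sbatch script.x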
Use the local command "cindata" to query for disk usage and quota ("cindata -h" for help):
> cindata
The software modules are collected in different profiles and organized by functional category (compilers, libraries, tools, applications,..).
On GALILEO100 the profiles are of two types: "domain" profiles (bioinf, chem-phys, lifesc, ...) for production activity, and "programming" profiles (base and advanced) for compilation, debugging and profiling activities. Profiles of the two types can be loaded together.
"Base" profile is the default. It is automatically loaded after login and it contains basic modules for the programming activities (intel e gnu compilers, math libraries, profiling and debugging tools,..).
If you want to use a module placed under another profile, for example an application module, you first have to load the corresponding profile:
>module load profile/<profile name>
>module load autoload <module name>
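For example, assuming for illustration that a gromacs module is available under the chem-phys domain profile, you would type:

>module load profile/chem-phys
>module load autoload gromacs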
To list all the profiles and modules you have loaded, you can use the following command:
>module list
In order to detect all profiles, categories and modules available on GALILEO100 the command “modmap” is available:
>modmap
With modmap you can see if the desired module is available and which profile you have to load to use it.
>modmap -m <module name>
If you do not find the software you are interested in, you can install it yourself.
In this case, on GALILEO100 we also offer the possibility to use the “spack” environment by loading the corresponding module. Please refer to the dedicated section in UG2.6: Production Environment.
Since GALILEO100 is a general purpose system and it is used by several users at the same time, long production jobs must be submitted using a queuing system. This guarantees that access to the resources is as fair as possible.
Roughly speaking, there are two different modes to use an HPC system: Interactive and Batch. For a general discussion see the section "Production Environment".
A serial program can be executed in the standard UNIX way:
> ./program
This is allowed only for very short runs, since the interactive environment set on the login nodes has a 10 minutes time limit: for longer runs please use the "batch" mode.
A parallel program can be executed interactively only by submitting an "Interactive" SLURM batch job, using the "srun" command: the job is queued and scheduled as any other job, but when executed, the standard input, output, and error streams are connected to the terminal session from which srun was launched.
For example, to start an interactive session with the MPI program "myprogram", using one node and two processors, you can launch the command:
> salloc -N 1 --ntasks-per-node=2 -A <account_name>
SLURM will then schedule your job to start, and your shell will be unresponsive until free resources are allocated for you. If not specified, the default time limit for this kind of jobs is one hour.
When the shell returns a prompt inside the compute node, you can execute your program by typing:
> srun ./myprogram
(srun is recommended with respect to mpirun for this environment)
SLURM automatically exports the environment variables you defined in the source shell, so if you need to run your program "myprogram" in a controlled environment (i.e. with specific library paths or options), you can prepare the environment in the login shell and be sure to find it again in the interactive shell on the compute node.
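Putting it all together, a minimal interactive session could look like the following sketch (account name and program are placeholders):

> salloc -N 1 --ntasks-per-node=2 -A <account_name>   # wait for the allocation
> srun ./myprogram                                    # run on the allocated node
> exit                                                # release the allocation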
On systems using SLURM, you can submit a script script.x using the command:
> sbatch script.x
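A minimal job script could look like the following sketch (account name and executable are placeholders to adapt):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48       # one task per core of a 48-core node
#SBATCH --time=01:00:00            # requested walltime (hh:mm:ss)
#SBATCH --partition=g100_usr_prod  # default production partition
#SBATCH --account=<account_name>   # your project account

srun ./myprogram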
You can get a list of defined partitions with the command:
> sinfo
For more information and examples of job scripts, see section Batch Scheduler SLURM.
This partition will be available in full production.
Graphic session
If a graphic session is desired, we recommend using the tool "RCM". Please install the latest version of RCM. See the corresponding paragraph to learn how to download and use it.
To run parallel batch jobs on GALILEO100 you need to specify the partition and the qos that are described in this user guide.
If you do not specify the partition, your jobs will try to run on the default partition g100_usr_prod.
The minimum number of cores you can request for a batch job is 1; the maximum request corresponds to 16 nodes (768 cores). The maximum walltime you can request is 24 hours. Defaults are as follows:
If you do not specify the walltime (by means of the #SBATCH --time directive), a default value of 30 minutes will be assumed.
If you do not specify the number of cores (by means of the "#SBATCH -n" directive), a default value of 1 core will be assumed.
If you do not specify the amount of memory (as the value of the "#SBATCH --mem" directive), a default value of 7800 MB per core will be assumed.
The maximum memory per node is 375300MB (366.5GB) for thin and viz nodes, about 3TB for fat nodes.
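For example, to override all three defaults explicitly in a job script:

#SBATCH --time=04:00:00    # walltime (default: 30 minutes)
#SBATCH -n 16              # number of cores (default: 1)
#SBATCH --mem=124800       # memory in MB, here 16 x 7800 (default: 7800 MB per core)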
Processor affinity, or CPU pinning, enables the binding of processes and threads to a CPU (or group of CPUs). Ensuring the correct affinity is crucial to avoid over-allocation of CPUs, which significantly reduces performance. It becomes a critical matter when you ask for a full node but, for your specific reasons (memory needs, etc.), you don't use all of its cores.
The following indications apply when running your executable with srun, which is the recommended option over mpirun. We refer to a hybrid MPI/OpenMP case.
Given your optimal value of OMP_NUM_THREADS and number of MPI processes, to fill the node ask for a number of tasks such that (--ntasks-per-node × --cpus-per-task) = 48.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12       # 12 MPI tasks x 4 threads = 48 cores (full node)
#SBATCH --cpus-per-task=4          # OpenMP threads per MPI task
#SBATCH --account=<your_account>

module load autoload intelmpi/oneapi-2021--binary

export OMP_NUM_THREADS=4
export KMP_AFFINITY=compact        # or OMP_PLACES=cores
srun --cpu-bind=cores -m block:block <your_exe>
To be defined soon.
Users of projects that require reserved resources (such as industrial users or users associated with an agreement that involves dedicated resources) will be associated with the QOS qos_ind.
By using qos_ind (i.e. specifying the QOS in the submission script) and specifying the g100_spc_prod partition, users of the allowed projects will run their jobs on reserved nodes in the g100_spc_prod partition, with the features and limits imposed for their particular account.
>#SBATCH --partition=g100_spc_prod
>#SBATCH --qos=qos_ind
In the following table, you can find all the main features and limits imposed on the SLURM partitions and QOS.
| SLURM partition | QOS | # cores/nodes per job | max walltime | max running jobs / cpus/nodes per user | max memory per node (MB) | priority | notes |
|---|---|---|---|---|---|---|---|
| g100_usr_interactive | noQOS | max = 2 nodes | 8:00:00 | / | 7800 | 40 | on nodes with GPUs |
| g100_usr_prod | noQOS | min = 1 core, max = 32 nodes | 24:00:00 | | 375300 | | runs on thin and fat nodes |
| g100_usr_prod | g100_qos_dbg | min = 1 core, max = 96 cores (2 nodes) | 02:00:00 | | 375300 | 80 | runs on thin and fat nodes |
| g100_usr_prod | g100_qos_bprod | min = 1537 cores (33 nodes), max = 3072 cores (64 nodes) | 24:00:00 | | 375300 | 60 | runs on thin and fat nodes |
| g100_spc_prod | qos_ind (every allowed account has a valid qos_ind QOS) | depending on the QOS of the particular account | 24:00:00 | / | 375300 | n/a | dedicated to specific kinds of users; runs on thin nodes |
| g100_meteo_prod | qos_meteo | | 24:00:00 | | 375300 | 40 | reserved for meteo services, NOT open to production; runs on thin nodes |
The native, and recommended, compilers on Galileo100 are the Intel ones: the architecture is based on Intel processors, so using the Intel compilers may result in a significant improvement in the performance and stability of your code. The new Intel OneAPI suite is installed on the cluster. Initialize the environment with the module command:
> module load intel/oneapi-2021--binary
> module list
Currently Loaded Modulefiles:
intel/oneapi-2021--binary
The suite contains the new Intel oneAPI nextgen compilers (icx, icpx, ifx) and the classic compilers (icc, icpc, ifort, ...):
| | Classic | oneAPI | Notes |
|---|---|---|---|
| C compilers | icc | icx | icx is the Intel nextgen compiler, based on Clang/LLVM technology plus Intel proprietary optimizations and code generation; it enables OpenMP TARGET offload to Intel GPU targets (irrelevant on Galileo100) |
| C++ compilers | icpc | icpx | |
| Fortran compilers | ifort | ifx | |
The documentation can be obtained with the man command after loading the relevant module:
> man ifort
> man icx
Some miscellaneous flags are described in the following:
-extend_source   Extend over the 77-column limit of fixed-form (F77) sources
-free / -fixed   Free/fixed source form for Fortran
-ip              Enable additional interprocedural optimization for single-file compilation
-ipo             Enable interprocedural optimization between files (whole-program optimisation)
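As an illustration (source and executable names are placeholders), a typical optimized build using these flags could be:

> ifort -O3 -ipo -o myexec myprog.f90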
NOTE for the migration from Galileo to Galileo100: in principle, binaries generated on Galileo should work, but we strongly recommend reinstalling all your software applications, since Galileo100 runs a different operating system (CentOS 8.3).
PORTLAND Group (PGI)
Initialize the environment with the module command:
> module load profile/advanced
> module load pgi
The names of the PGI compilers are pgf77 and pgf90/pgf95 (Fortran), pgcc (C) and pgc++ (C++).
The documentation can be obtained with the man command after loading the relevant module:
> man pgf95
> man pgcc
Some miscellaneous flags are described in the following:
-Mextend           Extend over the 77-column limit of fixed-form (F77) sources
-Mfree / -Mfixed   Free/fixed source form for Fortran
-fast              Chooses generally optimal flags for the target platform
-fastsse           Chooses generally optimal flags for a processor that supports SSE instructions
GNU compilers
The GNU compilers are always available, but they are not the best optimizing compilers, especially for an Intel-based cluster like GALILEO100. The default version is 10.2.0.
For a more recent version of the compiler, initialize the environment with the module command:
> module load gnu
The names of the GNU compilers are gfortran (Fortran), gcc (C) and g++ (C++).
The documentation can be obtained with the man command:
> man gfortran
> man gcc
Some miscellaneous flags are described in the following:
-ffixed-line-length-132      Extend over the 77-column limit of fixed-form (F77) sources
-ffree-form / -ffixed-form   Free/fixed source form for Fortran
If your code aborts at runtime, there may be a problem with it. To investigate, you can either analyze the core file (not available if the code is compiled with PGI) or run your code under a debugger.
In both cases, you need to enable compiler runtime checks, by putting specific flags during the compilation phase. In the following we describe those flags for the different Fortran compilers: if you are using the C or C++ compiler, please keep in mind that the flags may differ.
The following flags are generally available for all compilers and are mandatory for an easier debugging session:
-O0   Lower level of optimisation
-g    Produce debugging information
Other flags are compiler-specific and are described in the following.
INTEL compilers: the following flags are useful (in addition to "-O0 -g") for debugging your code:

-traceback        Generate extra information to provide source file traceback at run time
-fp-stack-check   Generate extra code to ensure that the floating-point stack is in the expected state
-check bounds     Enable checking of array subscript expressions
-fpe0             Allow some control over floating-point exception handling at run time
PGI compilers: the following flags are useful (in addition to "-O0 -g") for debugging your code:

-C                    Add array bounds checking
-Ktrap=ovf,divz,inv   Control the behavior of the processor when floating-point exceptions occur (overflow, divide by zero, invalid operands)
GNU compilers: the following flags are useful (in addition to "-O0 -g") for debugging your code:

-Wall            Enable warnings about usage that should be avoided
-fbounds-check   Check array subscripts
PGI: pgdbg (serial/parallel debugger)
pgdbg is the Portland Group Inc. symbolic source-level debugger for F77, F90, C, C++ and assembly language programs. It is capable of debugging applications that exhibit various levels of parallelism, including multi-threaded (OpenMP) and multi-process (MPI) applications.
There are two forms of the command used to invoke pgdbg. The first is used when debugging non-MPI applications; the second form, using mpirun, is used when debugging MPI applications:

> pgdbg [options] ./myexec [args]
> mpirun [options] -dbg=pgdbg ./myexec [args]
More details are in the online documentation, using the "man pgdbg" command after loading the module.
To use this debugger, you should compile your code with one of the pgi compilers and the debugging command-line options described above, then you run your executable inside the "pgdbg" environment:
> module load profile/advanced
> module load pgi
> pgf90 -O0 -g -C -Ktrap=ovf,divz,inv -o myexec myprog.f90
> pgdbg ./myexec
By default, pgdbg presents a graphical user interface (GUI). A command-line interface is also provided through the "-text" option.
GNU: gdb (serial debugger)
GDB is the GNU Project debugger and allows you to see what is going on 'inside' your program while it executes -- or what the program was doing at the moment it crashed.
GDB can do four main kinds of things (plus other things in support of these) to help you catch bugs in the act:

- start your program, specifying anything that might affect its behavior;
- make your program stop on specified conditions;
- examine what has happened when your program has stopped;
- change things in your program, so you can experiment with correcting the effects of one bug and go on to learn about another.
More details in the online documentation, using the "man gdb" command.
To use this debugger, you should compile your code with one of the gnu compilers and the debugging command-line options described above, then you run your executable inside the "gdb" environment:
> module load gnu
> gfortran -O0 -g -Wall -fbounds-check -o myexec myprog.f90
> gdb ./myexec
In order to understand what problem was affecting your code, you can also try a "Core file" analysis. Since core files are usually quite large, be sure to work in the /scratch area.
There are several steps to follow:
> ulimit -c unlimited              (bash)
> limit coredumpsize unlimited     (csh/tcsh)

and, for codes compiled with the Intel Fortran compiler, also:

> export decfort_dump_flag=TRUE    (bash)
> setenv decfort_dump_flag TRUE    (csh/tcsh)
INTEL compilers
> module load intel
> ifort -O0 -g -traceback -fp-stack-check -check bounds -fpe0 -o myexec prog.f90
> ulimit -c unlimited
> export decfort_dump_flag=TRUE
> ./myexec
> ls -lrt
-rwxr-xr-x 1 aer0 cineca-staff   9652 Apr  6 14:34 myexec
-rw------- 1 aer0 cineca-staff 319488 Apr  6 14:35 core.25629
> idbc ./myexec core.25629
PGI compilers
> module load profile/advanced
> module load pgi
> pgf90 -O0 -g -C -Ktrap=ovf,divz,inv -o myexec myprog.f90
> ulimit -c unlimited
> ./myexec
> ls -lrt
-rwxr-xr-x 1 aer0 cineca-staff   9652 Apr  6 14:34 myexec
-rw------- 1 aer0 cineca-staff 319488 Apr  6 14:35 core.25666
> pgdbg -text -core core.25666 ./myexec
GNU Compilers
> module load gnu
> gfortran -O0 -g -Wall -fbounds-check -o myexec prog.f90
> ulimit -c unlimited
> ./myexec
> ls -lrt
-rwxr-xr-x 1 aer0 cineca-staff   9652 Apr  6 14:34 myexec
-rw------- 1 aer0 cineca-staff 319488 Apr  6 14:35 core.25555
> gdb ./myexec core.25555
VALGRIND
Valgrind is a framework for building dynamic analysis tools. There are Valgrind tools that can automatically detect many memory management and threading bugs, and profile your programs in detail. The Valgrind distribution currently includes six production-quality tools: a memory error detector, two thread error detectors, a cache and branch-prediction profiler, a call-graph generating cache profiler, and a heap profiler.
Valgrind is Open Source / Free Software, and is freely available under the GNU General Public License, version 2.
To analyse a serial application:
If you normally run your program like this:
myprog arg1 arg2
Use this command line:
valgrind (valgrind-options) myprog arg1 arg2
Memcheck is the default tool. You can add the --leak-check=full option, which turns on the detailed memory leak detector. Your program will run much slower than normal and use a lot more memory. Memcheck will issue messages about the memory errors and leaks that it detects.
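For example, to run Memcheck with full leak checking:

valgrind --leak-check=full myprog arg1 arg2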
To analyse a parallel (MPI) application, prepend valgrind to the program invocation in the launcher command line:

mpirun -np 4 valgrind (valgrind-options) myprog arg1 arg2
Totalview
Totalview is a parallel debugger with a practical GUI that assists users in debugging their parallel code. It provides functionalities such as stopping and resuming a code mid-run, setting breakpoints, checking the value of variables at any time, browsing among the different tasks and threads to see their different behaviours, memory checking functions, and so on. For information about how to run the debugger (by connecting the compute nodes to your display via RCM), type the command:
> module help totalview
Scalasca
Scalasca is a tool for profiling parallel scientific and engineering applications that make use of MPI and OpenMP.
Details on how to use Scalasca are available at:
http://www.scalasca.org/software/scalasca-2.x/documentation.html
In software engineering, profiling is the investigation of a program's behaviour using information gathered as the program executes. The usual purpose of this analysis is to determine which sections of a program to optimize - to increase its overall speed, decrease its memory requirement or sometimes both.
A (code) profiler is a performance analysis tool that, most commonly, measures only the frequency and duration of function calls, but there are other specific types of profilers (e.g. memory profilers) in addition to more comprehensive profilers, capable of gathering extensive performance data.
gprof
The GNU profiler gprof is a useful tool for measuring the performance of a program. It records the number of calls to each function and the amount of time spent there, on a per-function basis. Functions which consume a large fraction of the run-time can be identified easily from the output of gprof. Efforts to speed up a program should concentrate first on those functions which dominate the total run-time.
gprof uses data collected via the -pg compiler flag to construct a text display of the functions within your application (call tree and CPU time spent in every subroutine). It also provides quick access to the profiled data, letting you identify the most CPU-intensive functions, and allows you to manipulate the display in order to focus on the application's critical areas.
Usage:
> gfortran -pg -O3 -o myexec myprog.f90
> ./myexec
> ls -ltr
.......
-rw-r--r-- 1 aer0 cineca-staff 506 Apr  6 15:33 gmon.out
> gprof myexec gmon.out
It is also possible to profile at code line level (see "man gprof" for other options). In this case you must also use the "-g" flag at compilation time:
> gfortran -pg -g -O3 -o myexec myprog.f90
> ./myexec
> ls -ltr
.......
-rw-r--r-- 1 aer0 cineca-staff 506 Apr  6 15:33 gmon.out
> gprof --annotated-source myexec gmon.out
It is also possible to profile MPI programs. In this case, the environment variable GMON_OUT_PREFIX must be defined in order to allow each task to write a different statistics file. Setting
export GMON_OUT_PREFIX=<name>
once the run is finished, each task will create a file named with its process ID (PID) as extension:
<name>.$PID
If the environment variable is not set, every task will write to the same gmon.out file.
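A minimal sketch of the whole MPI profiling workflow (wrapper, prefix name and task count are illustrative):

> mpif90 -pg -O3 -o myexec myprog.f90   # compile with profiling enabled
> export GMON_OUT_PREFIX=myexec_gmon    # each task writes myexec_gmon.<PID>
> srun -n 4 ./myexec
> gprof myexec myexec_gmon.<PID>        # analyse the profile of one task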
MKL
The Intel Math Kernel Library (Intel MKL) enables improving performance of scientific, engineering, and financial software that solves large computational problems. Intel MKL provides a set of linear algebra routines, fast Fourier transforms, as well as vectorized math and random number generation functions, all optimized for the latest Intel processors, including processors with multiple cores.
Intel MKL is thread-safe and extensively threaded using the OpenMP technology.
Documentation can be found by loading the mkl module and looking in the directory:
${MKLROOT}/../Documentation/en_US/mkl
To use MKL in your code you need to load the module, then define the includes and libraries at compile and link time:

> module load mkl
> icc -I$MKL_INC -L$MKL_LIB -lmkl_intel_lp64 -lmkl_core -lmkl_sequential
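As a further sketch (assuming the same $MKL_INC and $MKL_LIB variables defined by the mkl module, with a placeholder source file), linking against the multi-threaded MKL layer could look like:

> icc -I$MKL_INC myprog.c -o myprog -L$MKL_LIB -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm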
For more information, please refer to the documentation.
Parallel programming on Galileo100 is based on the IntelMPI and OpenMPI implementations of MPI. The libraries and the special wrappers needed to compile and link your programs are contained in several modules, one for each supported suite of compilers.
These command names refer to wrappers around the actual compilers; they behave differently depending on the module you have loaded.
> mpiifort -o myexec myprof.f90 (uses the ifort compiler)
For more options of the compiler wrapper, please see:
> man mpiifort
The main parallel-MPI compilation commands for OpenMPI are mpicc (C), mpicxx (C++) and mpif90 (Fortran). For example:

> mpif90 -o myexec myprof.f90 (uses the gfortran compiler)
In all cases the parallel applications have to be executed with the recommended command:
> srun ./myexec
There are limitations to running parallel programs in the login shell. You should use the "Interactive SLURM" mode, as described in the "Interactive" section earlier on this page.