...
Disks and Filesystems
The storage organisation conforms to the CINECA infrastructure (see Section "Data storage and Filesystems"). In addition to the home directory ($HOME), a scratch area ($CINECA_SCRATCH) is defined for each user: a large disk for storing run-time data and files. A $WORK area is defined for each active project on the system and is reserved for all the collaborators of the project: this is a safe storage area to keep run-time data for the whole life of the project.
The filesystem organisation is based on Lustre, an open source parallel file system.
| Area | Total Dimension | Quota | Notes |
|---|---|---|---|
| $HOME | 100 TB | 50 GB per user | |
| $CINECA_SCRATCH | 1 PB | no quota | |
| $WORK | 2 PB | 1 TB per project | |
A temporary local storage area is also available on the compute nodes; it is created when the job starts and is accessible via the environment variable $TMPDIR. For more details please see the dedicated section of UG2.5: Data storage and FileSystems. On Galileo100 the $TMPDIR local area has 300 GB of available space.
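For example, a job script can stage its files into this fast node-local area and copy the results back before the job ends (a minimal sketch: the file names and the executable are placeholders, and the data are assumed to fit in the 300 GB available):

cp $WORK/input.dat $TMPDIR/           # stage the input on the node-local disk
cd $TMPDIR
./myprogram input.dat > output.dat    # run using the local storage
cp $TMPDIR/output.dat $WORK/          # copy results back: $TMPDIR is removed when the job ends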
$DRES points to the shared repository where Data RESources are maintained. This is a data archive area, available only on request, shared with all CINECA HPC systems and among different projects.
$DRES is not mounted on the compute nodes. This means that you cannot access it within a batch job: all data needed during the batch execution have to be moved to $WORK or $CINECA_SCRATCH before the run starts.
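For example, assuming your archived data are in a (hypothetical) $DRES/my_dataset directory and your job script is job.sh, you can stage them from a login node before submitting the job:

> cp -r $DRES/my_dataset $WORK/
> sbatch job.sh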
Use the local command "cindata" to query for disk usage and quota ("cindata -h" for help):
> cindata
Modules environment
The software modules are collected in different profiles and organized by functional category (compilers, libraries, tools, applications,..).
On GALILEO100 the profiles are of two types: "domain" profiles (bioinf, chem-phys, lifesc, ...) for the production activity, and "programming" profiles (base and advanced) for compilation, debugging and profiling activities. The two types can be loaded together.
The "base" profile is the default: it is automatically loaded after login and contains the basic modules for programming activities (Intel and GNU compilers, math libraries, profiling and debugging tools, ...).
If you want to use a module placed under another profile, for example an application module, you first have to load the corresponding profile:
>module load profile/<profile name>
>module load autoload <module name>
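For instance, to load an application module from one of the domain profiles listed above (the application module name is a placeholder; use the modmap command described below to find the actual name and profile):

>module load profile/chem-phys
>module load autoload <application module name>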
To list the profiles and modules you have loaded, use the following command:
>module list
To see all the profiles, categories and modules available on GALILEO100, use the "modmap" command:
>modmap
With modmap you can see if the desired module is available and which profile you have to load to use it.
>modmap -m <module name>
Spack
Spack environment - will be available soon
If you do not find the software you are interested in, you can install it yourself.
In this case, on GALILEO100 we also offer the possibility to use the “spack” environment by loading the corresponding module. Please refer to the dedicated section in UG2.6: Production Environment.
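Once the environment is available, the workflow should follow the usual Spack one (a sketch under that assumption; the module name and package are placeholders and the final Galileo100 setup may differ):

>module load spack
>spack install <package name>
>spack load <package name>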
Production environment
Since GALILEO100 is a general purpose system and it is used by several users at the same time, long production jobs must be submitted using a queuing system. This guarantees that access to the resources is as fair as possible.
Roughly speaking, there are two different modes to use an HPC system: Interactive and Batch. For a general discussion see the section "Production Environment".
Interactive
A serial program can be executed in the standard UNIX way:
> ./program
This is allowed only for very short runs, since the interactive environment set on the login nodes has a 10-minute time limit: for longer runs please use the "batch" mode.
A parallel program can be executed interactively only by submitting an "Interactive" SLURM batch job, using the "srun" command: the job is queued and scheduled as any other job, but when executed, the standard input, output, and error streams are connected to the terminal session from which srun was launched.
For example, to start an interactive session with the MPI program "myprogram", using one node and two processors, you can launch the command:
> srun -N 1 --ntasks-per-node=2 -A <account_name> --pty /bin/bash
SLURM will then schedule your job to start, and your shell will be unresponsive until free resources are allocated for you. If not specified, the default time limit for this kind of job is one hour.
When the shell returns a prompt inside the compute node, you can execute your program by typing:
> srun ./myprogram
SLURM automatically exports the environment variables you defined in the source shell, so that if you need to run your program "myprogram" in a controlled environment (i.e. with specific library paths or options), you can prepare the environment in the login shell and be sure to find it again in the interactive shell on the compute node.
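Putting it all together (a sketch with placeholder paths and program name): prepare the environment on the login node, start the interactive job, then run the program on the compute node:

> export LD_LIBRARY_PATH=/path/to/mylib:$LD_LIBRARY_PATH
> srun -N 1 --ntasks-per-node=2 -A <account_name> --pty /bin/bash
> srun ./myprogram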
As usual, on systems using SLURM, you can submit a script script.x using the command:
> sbatch script.x
You can get a list of defined partitions with the command:
> sinfo
For more information and examples of job scripts, see section Batch Scheduler SLURM.
Submitting serial Batch Jobs
This partition will be available in full production.
Graphic session
The configuration of the RCM environment is in progress. This guide will be completed as soon as the final configuration is implemented.
Submitting parallel Batch Jobs
To run parallel batch jobs on GALILEO100 you need to specify the partition and the qos that are described in this user guide.
If you do not specify the partition, your jobs will try to run on the default partition g100_usr_prod.
The minimum number of cores you can request for a batch job is 1; the maximum you can request is 16 nodes. It is also possible to request a maximum walltime of 24 hours. Defaults are as follows:
If you do not specify the walltime (by means of the "#SBATCH --time" directive), a default value of 30 minutes will be assumed.
If you do not specify the number of cores (by means of the "#SBATCH -n" directive), a default value will be assumed.
If you do not specify the amount of memory (by means of the "#SBATCH --mem" directive), a default value of 3000MB will be assumed.
The maximum memory per node is 118000MB.
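As an illustration, a job script for the g100_usr_prod partition could look like the following (a minimal sketch: the number of nodes, tasks per node, walltime, memory and executable are placeholders to adapt to your case, and the account name must be your own):

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --partition=g100_usr_prod
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48       # adjust to the number of cores per node you need
#SBATCH --time=04:00:00            # walltime, max 24:00:00 on this partition
#SBATCH --mem=100000               # memory per node in MB, below the 118000MB limit
#SBATCH --account=<account_name>

srun ./myprogram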
Use of GPUs on Galileo100
To be defined soon.
Users with reserved resources
Users of projects that require reserved resources (such as industrial users or users associated with an agreement that involves dedicated resources) will be associated with the QOS qos_ind.
By using this QOS (i.e. specifying it in the submission script) and specifying the g100_spc_prod partition, users of the allowed projects will run their jobs on the reserved nodes of the g100_spc_prod partition, with the features and limits imposed for their particular account.
>#SBATCH --partition=g100_spc_prod
>#SBATCH --qos=qos_ind
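A minimal sketch of the corresponding job script (all values other than the partition and QOS are placeholders):

#!/bin/bash
#SBATCH --partition=g100_spc_prod
#SBATCH --qos=qos_ind
#SBATCH --account=<account_name>
#SBATCH --nodes=1
#SBATCH --time=02:00:00

srun ./myprogram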
Summary
In the following table, you can find all the main features and limits imposed on the SLURM partitions and QOS.
| SLURM partition | QOS | # cores per job | max walltime | max running jobs per user / max n. of cpus/nodes per user | max memory per node (MB) | priority | notes |
|---|---|---|---|---|---|---|---|
| g100_usr_interactive | noQOS | 2 nodes | 8:00:00 | / | 7800 | | on nodes with GPUs |
| g100_usr_prod | noQOS | min = 1, max = 16 nodes | 24:00:00 | | 375300 | | |
| | g100_qos_dbg | min = 1, max = 2 nodes | 02:00:00 | | 375300 | 95 | |
| | g100_qos_bprod | min = 16 nodes, max = 64 nodes | 24:00:00 | | 375300 | 85 | |
| g100_spc_prod | every account authorized on this partition has a valid QOS qos_ind | depending on the QOS of the particular account | 24:00:00 | / | 375300 | | partition dedicated to specific kinds of users |
| g100_meteo_prod | qos_meteo | | 24:00:00 | | 375300 | | partition reserved to meteo services, NOT open to production |
...