...

Model: Dual-Socket Dell PowerEdge
Architecture: Linux InfiniBand Cluster
Nodes: 554 (+ 10 login nodes)
Processors: 2x CPU x86 Intel Xeon Platinum 8276-8276L (24 cores, 2.4 GHz)
Cores: 48 cores/node
Accelerators: 2x GPU NVIDIA V100 PCIe3 with 32 GB RAM on viz nodes
RAM: 384 GB (+ 3.0 TB Optane on 180 nodes)
Internal Network: Mellanox InfiniBand 100 Gb/s
Peak performance single node: 3.53 TFlop/s

...

System Architecture


Compute Nodes:

  • 554 computing nodes, each with 2x CPU Intel Cascade Lake 8260 (24 cores each, 2.4 GHz) and 384 GB RAM, subdivided into:
      • 340 standard nodes ("thin nodes") with 480 GB SSD
      • 180 data processing nodes ("fat nodes") with 2 TB SSD and 3 TB Intel Optane
      • 34 visualization ("viz") GPU nodes with 2x NVIDIA V100 GPUs, 100 Gb/s InfiniBand interconnection and 2 TB SSD
  • 77 OpenStack computing servers for cloud computing, each with 2x CPU Intel Cascade Lake 8260 (24 cores, 2.4 GHz) and 768 GB RAM, with 100 Gb/s Ethernet interconnection.
  • 20 PB of active storage accessible from both cloud and HPC nodes.
  • 5 PB of fast storage for the HPC system.
  • 1 PB of Ceph storage for the Cloud (full NVMe/SSD).
  • 720 TB of fast storage (IME DDN solution).

Login and Service nodes:

10 login nodes and 5 service nodes. All nodes are interconnected through an InfiniBand network (OPA v10.6), capable of a maximum bandwidth of 100 Gbit/s between each pair of nodes.

Accounting

For more information about accounting, please consult our dedicated section.

Budget Linearization policy

On GALILEO100 a linearization policy for the usage of project budgets has been defined and implemented. For each account, a monthly quota is defined as:

monthTotal = (total_budget / total_no_of_months)
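
For instance, with a hypothetical budget of 120,000 core-hours granted over a 12-month project:

> monthTotal=$(( 120000 / 12 ))   # total_budget / total_no_of_months (hypothetical figures)
> echo $monthTotal
10000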

Starting from the first day of each month, the collaborators of an account are allowed to use the quota at full priority. As the budget is consumed, the jobs submitted by the account gradually lose priority, until the monthly quota (monthTotal) is fully consumed. At that point, their jobs will still be considered for execution, but with a lower priority than the jobs from accounts that still have some monthly quota left.

This policy is similar to those already applied by other major HPC centres in Europe and worldwide. The goal is to improve the response time, giving users the opportunity to use the CPU hours assigned to their project in proportion to the project's actual size (total amount of core-hours).


...

Production environment

Since GALILEO100 is a general-purpose system used by many users at the same time, long production jobs must be submitted using a queuing system. This guarantees that access to the resources is as fair as possible.

Roughly speaking, there are two different modes to use an HPC system: Interactive and Batch. For a general discussion see the section "Production Environment".

Interactive

A serial program can be executed in the standard UNIX way:

> ./program

This is allowed only for very short runs, since the interactive environment set on the login nodes has a 10-minute time limit: for longer runs please use the "batch" mode.

A parallel program can be executed interactively only by submitting an "Interactive" SLURM batch job, using the "srun" command: the job is queued and scheduled as any other job, but when executed, the standard input, output, and error streams are connected to the terminal session from which srun was launched.

For example, to start an interactive session with the MPI program "myprogram", using one node and two processors, you can launch the command:

> salloc -N 1 --ntasks-per-node=2 -A <account_name> 

SLURM will then schedule your job to start, and your shell will be unresponsive until free resources are allocated for you. If not specified, the default time limit for this kind of job is one hour.

When the shell returns a prompt inside the compute node, you can execute your program by typing:

> srun ./myprogram

(srun is recommended over mpirun in this environment)


SLURM automatically exports the environment variables you defined in the source shell, so if you need to run your program "myprogram" in a controlled environment (i.e. with specific library paths or options), you can prepare the environment in the login shell and be sure to find it again in the interactive shell on the compute node.
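
For instance (the module and variable names below are only illustrative):

> module load autoload intelmpi                       # prepare the environment on the login node
> export MY_LIB_PATH=/path/to/lib                     # hypothetical variable needed by myprogram
> salloc -N 1 --ntasks-per-node=2 -A <account_name>
> srun ./myprogram                                    # runs with the same modules and variables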

On systems using SLURM, you can submit a script, e.g. script.x, using the command:

> sbatch script.x

You can get a list of defined partitions with the command:
> sinfo

For more information and examples of job scripts, see section Batch Scheduler SLURM.

Submitting serial Batch Jobs

The partition will be available in full production.
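
In the meantime, a minimal serial job script may look as follows (a sketch; the job name is illustrative and no partition is specified, so the default partition applies):

#!/bin/bash
#SBATCH --job-name=serial_job    # illustrative name
#SBATCH --ntasks=1               # serial job: one task on one core
#SBATCH --time=00:30:00          # explicit walltime (here equal to the default)
#SBATCH --account=<account_name>
./program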


Graphic session

The configuration of the RCM environment is in progress; this guide will be completed as soon as a final configuration is implemented. If a graphic session is desired, we recommend using the tool "RCM". Please install the latest version of RCM. See the corresponding paragraph to learn how to download and use it.

Submitting parallel Batch Jobs

To run parallel batch jobs on GALILEO100 you need to specify the partition and the QOS, which are described in this user guide.

If you do not specify the partition, your jobs will try to run on the default partition g100_usr_prod.

The minimum number of cores you can request for a batch job is 1; the maximum is 768 cores (16 nodes). The maximum requestable walltime is 24 hours. Defaults are as follows:

  • If you do not specify the walltime (by means of the #SBATCH --time directive), a default value of 30 minutes will be assumed.

  • If you do not specify the number of cores (by means of the #SBATCH -n directive), a default value of 1 core will be assumed.

  • If you do not specify the amount of memory (by means of the #SBATCH --mem directive), a default value of 7800 MB per core will be assumed.

The maximum memory per node is 375300 MB (366.5 GB) for thin and viz nodes, and about 3 TB for fat nodes.
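
For reference, a minimal parallel job script may look as follows (a sketch; node count, walltime and memory are illustrative values within the limits above):

#!/bin/bash
#SBATCH --partition=g100_usr_prod
#SBATCH --nodes=2                # up to 16 nodes
#SBATCH --ntasks-per-node=48     # full nodes: 48 cores each
#SBATCH --time=04:00:00          # up to 24 hours
#SBATCH --mem=375300             # max memory per node on thin/viz nodes (MB)
#SBATCH --account=<account_name>
srun ./myprogram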

Processor affinity:

Processor affinity, or CPU pinning, enables the binding of processes and threads to a CPU (or group of CPUs). Ensuring the correct affinity is crucial to avoid over-allocating CPUs, which significantly reduces performance. It becomes a critical matter when you ask for a full node but, for your own reasons (memory needs, etc.), do not use all of its cores.

The following indications apply when running your executables with srun, which is the recommended option over mpirun. We refer to a hybrid MPI/OpenMP case.

Given your optimal value of OMP_NUM_THREADS and number of processes, to obtain the full node request tasks such that --ntasks-per-node * --cpus-per-task = 48.

  • To avoid over-allocating cores to processes, rely on the --cpu-bind=cores option of srun (you can skip it if you use all the requested cores).
  • To enforce thread affinity, use the Intel variable KMP_AFFINITY or the OpenMP variable OMP_PLACES.
  • To distribute the MPI tasks consecutively inside the sockets, use the -m block:block option of srun (or the equivalent sbatch directive #SBATCH -m block:block), as in the template script below.

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12     # 12 MPI tasks ...
#SBATCH --cpus-per-task=4        # ... with 4 threads each: 12 x 4 = 48 cores, the full node
#SBATCH --account=<your_account>
module load autoload intelmpi/oneapi-2021--binary
export OMP_NUM_THREADS=4
export KMP_AFFINITY=compact      # or OMP_PLACES=cores
srun --cpu-bind=cores -m block:block <your_exe>

Use of GPUs on Galileo100

To be defined soon.

Users with reserved resources

Users of projects that require reserved resources (such as industrial users or users associated with an agreement that involves dedicated resources) will be associated with the QOS qos_ind.

By specifying the qos_ind QOS in the submission script, together with the partition g100_spc_prod, users of the allowed projects will run their jobs on reserved nodes in the g100_spc_prod partition, with the features and limits imposed for their particular account.

#SBATCH --partition=g100_spc_prod
#SBATCH --qos=qos_ind
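
A complete submission script for this partition may look as follows (a sketch; resources and the executable name are illustrative):

#!/bin/bash
#SBATCH --partition=g100_spc_prod
#SBATCH --qos=qos_ind
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48     # a full reserved node
#SBATCH --time=24:00:00          # max walltime on this partition
#SBATCH --account=<your_account>
srun ./myprogram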


Summary

In the following table, you can find all the main features and limits imposed on the SLURM partitions and QOS.

SLURM partition      | QOS            | # cores per job                              | max walltime | max running jobs / max cpus per user | max memory per node (MB) | priority | notes
---------------------|----------------|----------------------------------------------|--------------|--------------------------------------|--------------------------|----------|------------------------------------------
g100_usr_interactive | noQOS          | max = 2 nodes                                | 08:00:00     | /                                    | 7800                     |          | on nodes with GPUs
g100_usr_prod        | noQOS          | min = 1, max = 32 nodes                      | 24:00:00     |                                      | 375300 (366.5 GB)        |          | runs on thin and fat nodes
g100_usr_prod        | g100_qos_dbg   | min = 1, max = 96 (2 nodes)                  | 02:00:00     |                                      | 375300 (366.5 GB)        | 95       | runs on thin and fat nodes
g100_usr_prod        | g100_qos_bprod | min = 1537 (33 nodes), max = 3072 (64 nodes) | 24:00:00     |                                      | 375300 (366.5 GB)        | 85       | runs on thin and fat nodes
g100_spc_prod        | qos_ind        | depending on the QOS of the account          | 24:00:00     | /                                    | 375300                   |          | dedicated to specific kinds of users; every account needs the valid QOS qos_ind to access this partition; runs on thin nodes
g100_meteo_prod      | qos_meteo      |                                              | 24:00:00     |                                      | 375300                   |          | reserved to meteo services, NOT open to production; runs on thin nodes



...