

hostname:              login.m100.cineca.it

early availability:  April 2020

start of production: to be defined (2020)



This system will enter production during 2020 as an upgrade of the "non conventional" partition of the Marconi Tier-0 system. It is an accelerated cluster based on IBM POWER9 processors and NVIDIA Volta GPUs, acquired by Cineca within the PPI4HPC European initiative.

System Architecture

Architecture: IBM Power 9 AC922
Internal Network: Mellanox InfiniBand EDR DragonFly+
Storage: 8 PB (raw) of local GPFS storage
Login nodes: 8 login nodes, IBM Power9 LC922 (similar to the compute nodes)


Model: IBM Power AC922 (Witherspoon)

Racks: 55 total (49 compute)
Nodes: 980
Processors: 2 x 16-core IBM POWER9 at 3.1 GHz
Accelerators: 4 x NVIDIA Volta V100 GPUs per node, NVLink 2.0, 16 GB
Cores: 32 cores/node
RAM: 256 GB/node
Peak Performance: about 32 PFlop/s
Internal Network: Mellanox InfiniBand EDR DragonFly+
Disk Space: 8 PB of GPFS storage


Access

All the login nodes have an identical environment and can be reached with SSH (Secure Shell) protocol using the "collective" hostname:

> login.m100.cineca.it

which establishes a connection to one of the available login nodes.  
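For example, a typical login (with <username> standing for your CINECA username) looks like:

> ssh <username>@login.m100.cineca.it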

For information about data transfer from other computers, please follow the instructions and caveats in the dedicated section Data storage, or the document Data Management.

Accounting

For accounting information please consult our dedicated section.

The account_no (or project) is important for batch executions. You need to indicate the account_no to be charged by the scheduler, using the "-A" flag:

#SBATCH -A <account_no>

Please remember that different projects are usually active on different hosts. With the "saldo -b" command you can list all the account_no values associated with your username:

> saldo -b     (lists the projects defined on M100)


Budget Linearization policy

On M100 a linearization policy for the usage of project budgets has been defined and implemented. For each account, a monthly quota is defined as:

monthTotal = (total_budget / total_no_of_months)
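For example, with a (purely illustrative) total budget of 1,200,000 core-hours assigned over 12 months, monthTotal = 1,200,000 / 12 = 100,000 core-hours can be consumed at full priority each month.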

Starting from the first day of each month, the collaborators of any account are allowed to use the quota at full priority. As the quota is progressively consumed, the jobs submitted from the account will gradually lose priority, until the monthly budget (monthTotal) is fully spent. At that point, their jobs will still be considered for execution, but with a lower priority than the jobs from accounts that still have some monthly quota left.

This policy is similar to those already applied by other major HPC centers in Europe and worldwide. The goal is to improve the response time, giving users the opportunity of using the cpu hours assigned to their project in relation to the project's actual size (total amount of core-hours).


Disks and Filesystems

The storage organization conforms to the CINECA infrastructure (see Section Data Storage and Filesystems). 

In addition to the home directory $HOME, a scratch area $CINECA_SCRATCH is defined for each user: a large disk for storing run-time data and files.

A $WORK area is defined for each active project on the system, reserved for all the collaborators of the project. This is a safe storage area for keeping run-time data for the whole life of the project.



$HOME
  • Total dimension: 200 TB
  • Quota: 50 GB
  • permanent/backed up, user specific, local

$CINECA_SCRATCH
  • Total dimension: 2,500 TB
  • Quota: no quota
  • temporary, user specific, local
  • no backup
  • automatic cleaning procedure of data older than 40 days (the time interval can be reduced in case of a critical usage ratio of the area; in this case, users will be notified via HPC-News)

$WORK
  • Total dimension: 7,100 TB
  • Quota: 1,000 GB
  • permanent, project specific, local
  • no backup
  • extensions can be considered if needed (mailto: superc@cineca.it)


$DRES environment variable points to the shared repository where Data RESources are maintained. This is a data archive area available only on-request, shared with all CINECA HPC systems and among different projects. $DRES is not mounted on the compute nodes. This means that you cannot access it within a batch job: all data needed during the batch execution has to be moved to $WORK or $CINECA_SCRATCH before the run starts.
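A minimal sketch of such a staging step, run from a login node (or the serial partition) before submitting the production job; the dataset directory name is a placeholder:

> rsync -avz $DRES/mydataset/ $CINECA_SCRATCH/mydataset/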

Since all the filesystems are based on the IBM Spectrum Scale™ file system (formerly GPFS), the usual unix command "quota" does not work. Use the local command cindata to query disk usage and quota ("cindata -h" for help):

> cindata


Modules environment

As usual, the software modules are collected in different profiles and organized by functional category (compilers, libraries, tools, applications, ...).

The profiles are of two types: "domain" profiles (chem, phys, lifesc, ...) for the production activity, and "programming" profiles (base and advanced) for compilation, debugging and profiling activities. They can be loaded together.

The "base" profile is the default one. It is automatically loaded after login and contains the basic modules for the programming activities (xl, pgi and gnu compilers, math libraries, profiling and debugging tools, ...).

If you want to use a module placed under another profile, for example an application module, you will have to load the corresponding profile first:

> module load profile/<profile name>
> module load autoload <module name>
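For example, to load an application module from the chem domain profile (the module name here is just a placeholder):

> module load profile/chem
> module load autoload <application module>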

To list the profiles and modules you have currently loaded, use the following command:

> module list

In order to detect all profiles, categories and modules available on M100 the command “modmap” is available:

> modmap


Spack

...

Production environment

Since M100 is a general purpose system used by several users at the same time, long production jobs must be submitted using a queuing system. This guarantees that access to the resources is as fair as possible.
Roughly speaking, there are two different modes to use an HPC system: interactive and batch. For a general discussion see the section Production Environment and Tools.


Interactive

A serial program can be executed in the standard UNIX way:

> ./program

This is allowed only for very short runs, since the interactive environment has a 10-minute time limit: for longer runs please use the "batch" mode.

A parallel program can be executed interactively only within an "Interactive" SLURM batch job, using the "srun" command: the job is queued and scheduled as any other job, but when executed, the standard input, output, and error streams are connected to the terminal session from which srun was launched.

For example, to start an interactive session on one node with two tasks (in order to then run the MPI program myprogram), launch the command:

> srun -N1 -n2 --ntasks-per-node=2 -A <account_name> --pty /bin/bash

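If your run also needs GPUs, a possible GPU-enabled variant (a sketch based on the standard SLURM --gres option, not taken verbatim from this guide) is:

> srun -N1 -n2 --ntasks-per-node=2 --gres=gpu:2 -A <account_name> --pty /bin/bash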

SLURM will then schedule your job to start, and your shell will be unresponsive until free resources are allocated for you.

When the shell comes back with the prompt, you can execute your program by typing:

> srun ./myprogram

or

> mpirun ./myprogram

By default, the srun command uses PMI2 as the MPI type.

Please note that

1) The recommended way to launch parallel tasks in SLURM jobs is with srun. By using srun instead of mpirun you will get full support for process tracking, accounting, task affinity, suspend/resume and other features.

2) Controlling the process and thread affinity is crucial to ensure optimal performance on M100. Do not rely on SLURM autoaffinity; use the appropriate SLURM --cpu-bind option.
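For instance, a common binding choice (shown here as a suggestion, not as the guide's prescribed setting) is to pin each task to its own cores:

> srun --cpu-bind=cores ./myprogram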

SLURM automatically exports the environment variables defined in the source shell, so if you need to run your program "myprogram" in a controlled environment (i.e. specific library paths or options), you can prepare the environment in the origin shell and be sure to find it again in the interactive shell.

Batch

The information reported here refers to the general user M100 partition. The production environment for EUROfusion users is discussed in a separate document.

As usual on systems using SLURM, you can submit a script script.x using the command:

> sbatch script.x

You can get a list of defined partitions with the command:

> sinfo

You can simplify the output reported by the sinfo command by specifying the output format via the "-o" option. A minimal output is reported, for instance, with:

> sinfo -o "%10D %20F %P"

which shows, for each partition, the total number of nodes and the number of nodes by state in the format "Allocated/Idle/Other/Total".

IMPORTANT:

  1. Please note that the recommended way to launch parallel tasks in SLURM jobs is with srun. By using srun instead of mpirun you will get full support for process tracking, accounting, task affinity, suspend/resume and other features.
  2. Controlling the process and thread affinity is crucial to ensure optimal performance on M100. Do not rely on SLURM autoaffinity; use the appropriate SLURM --cpu-bind option.

For more information and examples of job scripts, see section Batch Scheduler SLURM.


Submitting serial Batch jobs


The m100_all_serial partition is available with a maximum walltime of 4 hours, 6 tasks and 18000 MB per job. It runs on two dedicated nodes, and it is designed for pre/post-processing serial analysis, and for moving your data (via rsync, scp etc.) in case more than 10 minutes are required to complete the data transfer. In order to use this partition you have to specify the SLURM flag "-p":


#SBATCH -p m100_all_serial
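A minimal sketch of a serial job using this partition for a long data transfer (the directory names are placeholders):

#!/bin/bash
#SBATCH -p m100_all_serial
#SBATCH -A <account_no>
#SBATCH -n 1
#SBATCH --time=04:00:00
#SBATCH --job-name=data_transfer

rsync -avz $CINECA_SCRATCH/results/ $WORK/results/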


Submitting Batch jobs for production



The sinfo command lists all the partitions available on M100. Some of them are reserved to dedicated classes of users (for example, the *_fua_* partitions are for EUROfusion users):


  • m100_fua_prod and m100_fua_dbg are reserved to EUROfusion users, respectively for production and debugging
  • m100_usr_prod and m100_usr_dbg are open to academic production.


Each node exposes itself to SLURM as having 32 cores, 4 GPUs and xx GB of memory. SLURM assigns nodes in shared mode, so that a job is given only the resources it requests and multiple jobs can run on the same node(s). If you want the node(s) in exclusive mode, ask for all the resources of the node (hence, ncpus=32, or ngpus=4, or all the memory).
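For example, a sketch of the directives for obtaining one node in exclusive mode by requesting all of its cores and GPUs (assembled from standard SLURM options, not taken verbatim from this guide):

#SBATCH -N 1
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:4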


The maximum memory which can be requested is 182000 MB; this value guarantees that no memory swapping will occur.


For example, to request a single node in a production queue the following SLURM job script can be used:


#!/bin/bash
#SBATCH -N 1
#SBATCH -A <account_name>
#SBATCH --mem=180000
#SBATCH -p m100_usr_prod
#SBATCH --time 00:05:00
#SBATCH --job-name=my_batch_job
#SBATCH --mail-type=ALL
#SBATCH --mail-user=<user_email>


srun ./myexecutable


Users with exhausted but still active projects are allowed to keep using the cluster resources, even if at a very low priority, by adding the "qos_lowprio" QOS to their job:

#SBATCH --qos=qos_lowprio


Summary


In the following table you can find the main features and limits imposed on the queues/partitions of M100.



m100_all_serial (default partition)
  QOS: noQOS
  cores per job: max = 6 (max mem = 18000 MB)
  max walltime: 04:00:00
  max per user: 6 cpus
  max memory per node (MB): 7000
  priority: 40

qos_rcm
  cores per job: min = 1, max = 48
  max walltime: 03:00:00
  max per user: 1 job / 48 cpus
  max memory per node (MB): 182000
  priority: -
  notes: to be defined

m100_usr_dbg
  QOS: no QOS
  nodes per job: min = 1 node, max = 4 nodes
  max walltime: 00:30:00
  max per user: 4 jobs / 4 nodes
  max memory per node (MB): 182000
  priority: 40
  notes: runs on 24 dedicated nodes

m100_usr_prod
  QOS: no QOS
  nodes per job: min = 1 node, max = 16 nodes
  max walltime: 1-00:00:00
  max per user: 64 nodes
  max memory per node (MB): 182000
  priority: 40

m100_usr_prod with QOS m100_qos_bprod
  nodes per job: min = 65 nodes, max = 256 nodes
  max walltime: 24:00:00
  max per user: 1 job / 256 nodes (1 job per account)
  max memory per node (MB): 182000
  priority: 85
  notes: #SBATCH -p m100_usr_prod
         #SBATCH --qos=m100_qos_bprod

qos_special
  nodes per job: > 256 nodes
  max walltime: > 24:00:00 (max = 64 nodes per user)
  max memory per node (MB): 182000
  priority: 40
  notes: #SBATCH --qos=qos_special
         request to superc@cineca.it

qos_lowprio
  nodes per job: max = 64 nodes
  max walltime: 24:00:00
  max per user: 64 nodes
  max memory per node (MB): 182000
  priority: 0
  notes: #SBATCH --qos=qos_lowprio

m100_usr_preempt
  nodes per job: max = 16 nodes
  max walltime: 08:00:00
  priority: 10

Graphic session


If a graphic session is desired, we recommend using the tool RCM (Remote Connection Manager). For additional information, visit the Remote Visualization section of our User Guide.

