...

start of production: August 2023 (Booster); last quarter of 2023 (Data Centric)

...

This system is the new pre-exascale Tier-0 EuroHPC supercomputer hosted by CINECA and currently being built at the Bologna Technopole, Italy. It is supplied by ATOS and based on BullSequana XH2135 supercomputer nodes, each with four NVIDIA Tensor Core GPUs and a single Intel CPU. It also uses NVIDIA Mellanox HDR 200 Gb/s InfiniBand connectivity, with smart in-network computing acceleration engines that enable extremely low latency and high data throughput to provide the highest AI and HPC application performance and scalability.

...

Architecture: Atos BullSequana XH21355 "Da Vinci" blade (Booster); Atos BullSequana X2610 compute blade (Data-centric, available in the last quarter of 2023)
Internal Network: NVIDIA Mellanox HDR DragonFly+ 200 Gb/s
Storage: 106 PB (raw) large-capacity storage, 620 GB/s; high-performance storage: 5.4 PB, 1.4 TB/s, based on 31 x DDN Exascaler ES400NVX2

Login nodes: in β production 1 (16 later): login14, accessible via IP 131.175.43.130; 4 nodes, Ice Lake, no GPU




|                  | Booster | Data Centric |
|------------------|---------|--------------|
| Model            | Atos BullSequana XH21355 "Da Vinci" blade | Atos BullSequana X2610 compute blade |
| Racks            | 150     |              |
| Nodes            | 3456    | 1536         |
| Processors       | 32-core Intel Ice Lake, Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz | 56-core Intel Sapphire Rapids sockets |
| Accelerators     | 4 x NVIDIA Ampere GPUs/node, 64 GB HBM2 | - |
| Cores            | 32 cores/node | 112 cores/node |
| RAM              | 512 (8 x 64) GB DDR4 3200 MHz | (16 x 32) GB DDR5 4800 MHz |
| Peak Performance | about 309 Pflop/s | 9 Pflop/s |
| Internal Network | NVIDIA Mellanox HDR DragonFly+ 200 Gb/s; 2 x NVIDIA HDR 2x100 Gb/s cards | NVIDIA Mellanox HDR DragonFly+ 200 Gb/s; 1 x NVIDIA HDR100 100 Gb/s card |
| Disk Space       | 106 PB large-capacity storage; 5.4 PB high-performance storage (shared) | |






The following guide already refers to the production configuration. The pre-production phase will begin in the next few days, with mandatory access via 2FA. Please refer to the Access section below in the Leonardo User Guide.


Peak performance details

Node Performance

| Theoretical Peak Performance          |               |
|---------------------------------------|---------------|
| CPU (nominal/peak freq.)              | 1680 Gflops   |
| GPU                                   | 75000 Gflops  |
| Total                                 | 76680 Gflops  |
| Memory Bandwidth (nominal/peak freq.) | 24.4 GB/s     |

Access

IMPORTANT: Leonardo is still not in production. The hostname indicated below is disabled until official communication via HPC News.

All the login nodes have an identical environment and can be reached via the SSH (Secure Shell) protocol using the "collective" hostname:

...

Access to Leonardo mandatorily requires two-factor authentication (2FA). Please refer to this link of the User Guide to activate and connect via 2FA. For information about data transfer from other computers, please follow the instructions and caveats in the dedicated section Data storage or in the document Data Management.
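As a sketch, a typical connection could look like the following; the username and hostname are placeholders (the actual collective hostname is the one communicated by CINECA), and the second authentication factor is handled by the 2FA setup described in the User Guide:

```shell
# Hypothetical login sketch: replace <username> and <leonardo-login-hostname>
# with your CINECA username and the collective hostname given in this guide.
ssh <username>@<leonardo-login-hostname>
```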

Accounting

The accounting (consumed budget) is active from the start of the production phase. For accounting information, please consult our dedicated section.
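As a sketch of how budget checks usually work on CINECA systems: the `saldo` tool is the standard CINECA accounting command, and its availability on Leonardo once accounting starts is an assumption here:

```shell
# Hypothetical usage sketch: on CINECA clusters the consumed budget is
# typically queried with the saldo tool once accounting is active.
saldo -b    # summary of the budgets associated with your username
```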

...

| SLURM partition | Job QOS | # cores/# GPU per job | max walltime | max running jobs per user / max n. of cores/nodes/GPUs per user | priority | notes |
|---|---|---|---|---|---|---|
| lrdall_serial (default) | | | | | | not yet available |
| boost_usr_prod | normal | max = 32 nodes | 24:00:00 | | 40 | |
| | boost_qos_dbg | max = 2 nodes | 00:30:00 | 2 nodes / 64 cores / 8 GPUs | 80 | |
| | boost_qos_bprod | min = 33 nodes, max = 256 nodes * | 24:00:00 | 256 nodes * | 60 | runs on 1536 nodes; min is 33 FULL nodes |
| | boost_qos_lprod | max = 3 nodes | 4-00:00:00 | 3 nodes / 12 GPUs | 40 | |
| | boost_qos_install | max = 16 cores | 04:00:00 | max = 16 cores, 1 job per user | 40 | request to superc@cineca.it |
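A minimal batch-script sketch for the boost_usr_prod partition with the default normal QOS, using standard SLURM directives; the account name, GPU resource syntax, and executable are placeholders/assumptions, not values from this guide:

```shell
#!/bin/bash
#SBATCH --partition=boost_usr_prod   # Booster production partition
#SBATCH --qos=normal                 # default QOS: max 32 nodes, 24h walltime
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4          # e.g. one task per GPU
#SBATCH --cpus-per-task=8            # 32 cores/node on the Booster
#SBATCH --gres=gpu:4                 # 4 NVIDIA Ampere GPUs per node (assumed gres syntax)
#SBATCH --time=01:00:00
#SBATCH --account=<project_account>  # placeholder: your project budget

srun ./my_gpu_application            # placeholder executable
```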

...


...