...
start of production: August 2023 (Booster); last quarter of 2023 (Data Centric)
...
This system is the new pre-exascale Tier-0 EuroHPC supercomputer hosted by CINECA and currently being built at the Bologna Technopole, Italy. It is supplied by ATOS and based on BullSequana XH2135 supercomputer nodes, each with four NVIDIA Tensor Core GPUs and a single Intel CPU. It also uses NVIDIA Mellanox HDR 200 Gb/s InfiniBand connectivity, with smart in-network computing acceleration engines that enable extremely low latency and high data throughput, providing the highest AI and HPC application performance and scalability.
...
Architecture: Booster: Atos BullSequana XH2135 "Da Vinci" blade; Data-centric: Atos BullSequana X2610 compute blade (will be available in the last quarter of 2023)
Internal Network: Nvidia Mellanox HDR DragonFly+ 200 Gb/s
Storage: 106 PB (raw) large-capacity storage, 620 GB/s
High-performance storage: 5.4 PB, 1.4 TB/s, based on 31 x DDN Exascaler ES400NVX2
Login nodes: 4 nodes (Ice Lake, no GPU). In pre-production, 1 login node (login14) is accessible via IP 131.175.43.130; 16 login nodes will be available later.
The following guide already refers to the production configuration. The pre-production phase will begin in the next few days, with mandatory access via 2FA. Please refer to the Access section below in the Leonardo User Guide.
Peak performance details
| Node Performance | | |
|---|---|---|
| Theoretical | CPU (nominal/peak freq.) | 1680 GFlops |
| | GPU | 75000 GFlops |
| | Total | 76680 GFlops |
| Memory Bandwidth (nominal/peak freq.) | | 24.4 GB/s |
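As a quick sanity check on the figures above, the node total is the sum of the CPU and GPU peaks; the even per-GPU split is an inference, not stated in the table:

```shell
# Sanity-check the node peak-performance table: total = CPU + GPU.
cpu_gflops=1680      # single Intel CPU (nominal/peak freq.)
gpu_gflops=75000     # four NVIDIA GPUs combined
echo $((cpu_gflops + gpu_gflops))   # 76680, matching the "Total" row
echo $((gpu_gflops / 4))            # 18750 per GPU -- even split is an assumption
```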
Access
IMPORTANT: Leonardo is not yet in production. The hostname indicated below is disabled until official communication via HPC News.
All the login nodes have an identical environment and can be reached via the SSH (Secure Shell) protocol using the "collective" hostname:
...
Access to Leonardo requires mandatory two-factor authentication (2FA). Please refer to this link of the User Guide to activate and connect via 2FA. For information about data transfer from other computers, please follow the instructions and caveats in the dedicated section Data storage or the document Data Management.
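For convenience, the collective hostname can be recorded in a local SSH client configuration. The sketch below uses placeholders, since the actual hostname and your username come from the guide and your CINECA account:

```
# ~/.ssh/config -- sketch only; replace the placeholders
Host leonardo
    # the "collective" login hostname from this guide
    HostName <leonardo-hostname>
    # your CINECA username
    User <username>
```

With this entry in place, `ssh leonardo` reaches one of the login nodes (2FA still applies).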
Accounting
Accounting (consumed budget) is still unavailable in this pre-production phase; it will be active from the start of the production phase. For accounting information please consult our dedicated section.
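On other CINECA systems the consumed budget is queried with the `saldo` command; assuming the same tooling is deployed here once accounting is enabled:

```
# Show the budget(s) associated with your username (CINECA accounting tool)
saldo -b
```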
...
| SLURM partition | Job QOS | # cores/# GPUs per job | max walltime | max running jobs per user / max n. of cores/nodes/GPUs per user | priority | notes |
|---|---|---|---|---|---|---|
| all_serial | normal | max = 1 core, 1 GPU | 04:00:00 | 4 cpus / 1 GPU | 40 | |
| | qos_install | max = 16 cores | 04:00:00 | max = 16 cores, 1 job per user | 40 | request to superc@cineca.it |
| boost_usr_prod | normal | max = 32 nodes | 24:00:00 | | 40 | |
| | boost_qos_dbg | max = 2 nodes | 00:30:00 | 2 nodes / 64 cores / 8 GPUs | 80 | |
| | boost_qos_bprod | min = 33 nodes, max = 256 nodes * | 24:00:00 * | 256 nodes * | 60 | runs on 1536 nodes; min is 33 FULL nodes |
| | boost_qos_lprod | max = 3 nodes | 4-00:00:00 | 3 nodes / 12 GPUs | 40 | |
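A minimal batch script for the production partition might look as follows. This is a sketch: the account name and executable are placeholders, and the one-task-per-GPU layout is an assumption; the partition, QOS, and limits mirror the table above:

```
#!/bin/bash
#SBATCH --partition=boost_usr_prod   # production partition from the table above
#SBATCH --qos=normal                 # default QOS: max 32 nodes, 24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4          # one task per GPU (assumed layout)
#SBATCH --gres=gpu:4                 # all four GPUs of the node
#SBATCH --time=01:00:00
#SBATCH --account=<project_account>  # your CINECA project account

srun ./my_gpu_application            # placeholder executable
```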
...
- For EUROFusion users and their dedicated queues please refer to the dedicated document.
...