...



Booster vs DCGP

Model
  • Booster: Atos BullSequana X2135 "Da Vinci" single-node GPU blade
  • DCGP: Atos BullSequana X2140 three-node CPU blade

Racks
  • Booster: 116
  • DCGP: 22

Nodes
  • Booster: 3456
  • DCGP: 1536

Processors
  • Booster: single-socket 32-core Intel Ice Lake CPU, 1 x Intel Xeon Platinum 8358, 2.60 GHz, TDP 250 W
  • DCGP: dual-socket 56-core Intel Sapphire Rapids CPUs, 2 x Intel Xeon Platinum 8480p, 2.00 GHz, TDP 350 W

Accelerators
  • Booster: 4 x NVIDIA Ampere GPUs/node, 64 GB HBM2e, NVLink 3.0 (200 GB/s)
  • DCGP: -

Cores
  • Booster: 32 cores/node
  • DCGP: 112 cores/node

RAM
  • Booster: 512 GB (8 x 64 GB) DDR4 3200 MHz
  • DCGP: 512 GB (16 x 32 GB) DDR5 4800 MHz

Peak Performance
  • Booster: about 309 Pflop/s
  • DCGP: 9 Pflop/s

Internal Network
  • DragonFly+ 200 Gbps (NVIDIA Mellanox InfiniBand HDR)
  • Booster: 2 x dual-port HDR100 per node
  • DCGP: single-port HDR100 per node

Storage (raw capacity)
  • 137.6 PB based on DDN ES7990X and Hard Disk Drives (Capacity Tier)
  • 5.7 PB based on DDN ES400NVX2 and Solid State Drives (Fast Tier)






...


Storage areas (total dimension, quota, notes):

$HOME: 0.46 PiB total, quota 50 GB per user
  • permanent
  • backed up (suspended)
  • user specific

$CINECA_SCRATCH: 40 PiB total, no quota
  • HDD storage
  • temporary
  • user specific
  • no backup
  • automatic cleaning procedure for data older than 40 days (the time interval can be reduced in case of a critical usage ratio of the area; in that case, users will be notified via HPC-News)

$PUBLIC: 0.46 PiB total, quota 50 GB per user
  • permanent
  • user specific
  • no backup

$WORK: 30 PB total, quota 1 TB per project
  • permanent
  • project specific
  • no backup
  • extensions can be considered if needed (mailto: superc@cineca.it)

$FAST: 3.5 PB total, quota 1 TB per project
  • permanent
  • project specific
  • no backup
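
The areas above are exposed to batch jobs through environment variables of the same names. The following is a minimal sketch of a Slurm job that stages data through them; the project account, executable, and file names are placeholders to adapt, not documented defaults.

    #!/bin/bash
    #SBATCH --job-name=stage_data
    #SBATCH --account=<project_account>   # placeholder: your project account
    #SBATCH --time=00:10:00
    #SBATCH --ntasks=1

    # Work in the scratch area: temporary, no quota, cleaned after 40 days
    cd "$CINECA_SCRATCH"

    # Stage input from the project area (permanent, 1 TB quota per project)
    cp "$WORK/input.dat" .

    # Run a placeholder executable and copy the results back to $WORK
    "$WORK/bin/my_app" input.dat > output.dat
    cp output.dat "$WORK/results/"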

...

  • on the local SSD disks of the login nodes (14 TB of capacity), mounted as /scratch_local (TMPDIR=/scratch_local). This is a shared area with no quota: remove your files as soon as they are no longer needed. A cleaning procedure will be enforced in case of improper use of the area.
  • on the local SSD disks of the serial node (lrd_all_serial, 14 TB of capacity), managed via the Slurm job_container/tmpfs plugin. The plugin provides a job-specific, private temporary file system, with private instances of /tmp and /dev/shm in the job's user space (TMPDIR=/tmp, visible via the command "df -h"), removed at the end of the serial job. You can request the resource via the sbatch directive or srun option "--gres=tmpfs:XX" (for instance: --gres=tmpfs:200G; see the sketch after this list), with a maximum of 1 TB for serial jobs. If not explicitly requested, /tmp has the default size of 10 GB.
  • on the local SSD disks of the DCGP nodes (3 TB of capacity). As for the serial node, the local /tmp and /dev/shm areas are managed via the plugin, which at the start of the job mounts private instances of /tmp and /dev/shm in the job's user space (TMPDIR=/tmp, visible via the command "df -h /tmp") and unmounts them at the end of the job (all data will be lost). You can request the resource via the sbatch directive or srun option "--gres=tmpfs:XX", up to the full 3 TB available on DCGP nodes. As for the serial node, if not explicitly requested, /tmp has the default size of 10 GB. Please note: for DCGP jobs the requested amount of the gres/tmpfs resource contributes to the consumed budget, changing the number of accounted equivalent core hours; see the dedicated section on Accounting.
  • in RAM on the diskless Booster nodes (with a fixed size of 10 GB; no increase is allowed and the gres/tmpfs resource is disabled).
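
As referenced above, here is a minimal sketch of a serial job requesting a private /tmp via gres/tmpfs, assuming lrd_all_serial is used as the partition name for the serial node mentioned in this section; the account and file names are placeholders, and the 200 GB size is only an example (the serial-node maximum is 1 TB).

    #!/bin/bash
    #SBATCH --job-name=tmpfs_example
    #SBATCH --partition=lrd_all_serial    # serial node referenced above
    #SBATCH --account=<project_account>   # placeholder: your project account
    #SBATCH --time=00:30:00
    #SBATCH --ntasks=1
    #SBATCH --gres=tmpfs:200G             # private /tmp of 200 GB (max 1 TB on the serial node)

    # TMPDIR points to the job-private /tmp mounted by the job_container/tmpfs plugin
    df -h "$TMPDIR"

    # Use the local space for temporary files; everything here is removed at the
    # end of the job, so copy anything you want to keep back to $WORK
    cp "$WORK/input.dat" "$TMPDIR/"
    # ... process "$TMPDIR/input.dat" with your application (placeholder) ...
    # cp "$TMPDIR/output.dat" "$WORK/results/"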

...