...
Node Performance
Theoretical | CPU (nominal/peak freq.) | 1680 Gflops
            | GPU                      | 75000 Gflops
            | Total                    | 76680 Gflops
Memory Bandwidth (nominal/peak freq.)  | 24.4 GB/s
Access
IMPORTANT: Leonardo is not yet in production. The hostname indicated below is disabled until official communication via HPC News.
All the login nodes have an identical environment and can be reached with SSH (Secure Shell) protocol using the "collective" hostname:
...
You can pass any option accepted by the backend compiler (which you can display with the "-show" flag, e.g. "mpicc -show"). To list the available options, use the "man" command:
> man mpiifort
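For example, a minimal compile line passing a backend optimization flag through the wrapper (my_code.f90 and the output name mpi_exec are hypothetical placeholders):
> mpiifort -O3 -o mpi_exec my_code.f90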
Running
There are two ways to run MPI applications:
- using mpirun launcher
- using srun launcher
mpirun launcher
To use the mpirun launcher, the openmpi or intel-oneapi-mpi module needs to be loaded:
> module load openmpi/<version>
or
> module load intel-oneapi-mpi/<version>
> mpirun ./mpi_exec
It can be used within an allocation obtained via salloc or sbatch:
> salloc -N 2 (allocate a job of 2 nodes)
> mpirun ./mpi_exec
or
> sbatch -N 2 my_batch_script.sh (allocate a job of 2 nodes)
> cat my_batch_script.sh
#!/bin/sh
mpirun ./mpi_exec
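A fuller sketch of such a batch script, with the resources requested via #SBATCH directives (the task count and walltime are illustrative placeholders):
#!/bin/sh
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:30:00
# load the MPI environment, then launch with mpirun
module load openmpi/<version>
mpirun ./mpi_exec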
srun launcher
MPI applications can be launched directly with the Slurm launcher srun:
> srun -N 2 ./mpi_exec
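As a sketch, the total number of MPI tasks can also be set explicitly with the -n flag (the task count here is illustrative):
> srun -N 2 -n 8 ./mpi_exec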
or within an allocation obtained via salloc or sbatch:
> salloc -N 2 (allocate a job of 2 nodes)
> srun ./mpi_exec
or
> sbatch -N 2 my_batch_script.sh (allocate a job of 2 nodes)
> cat my_batch_script.sh
#!/bin/sh
srun -N 2 ./mpi_exec
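Equivalently, a sketch of the script with the resources requested via #SBATCH directives; inside the allocation srun inherits the node count, so -N 2 can be omitted (the task count and walltime are illustrative placeholders):
#!/bin/sh
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:30:00
# srun picks up the nodes and tasks from the allocation
srun ./mpi_exec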
...