...
Leonardo is currently under testing by the vendor team, CINECA staff, and authorized external users. The environment is not yet finalized with respect to storage, system configuration, and software stack.
Since the storage configuration is not finalized, PLEASE minimize your footprint on the filesystems.
Software environment
...
The available software environment is based on Spack and modules, and needs to be activated. Some vendor installations are also available and exposed through an Lmod environment on the login node, but we warmly encourage beta testers to use the Spack environment, so as to provide valuable feedback on the software stack provided by CINECA.
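As a sketch of what activating such an environment typically looks like (the setup-script path and environment name below are placeholders, not Leonardo's actual paths; check the cluster documentation for the real ones):

```shell
# Hypothetical example: activate a Spack-provided software environment.
# The path and environment name are assumptions, not Leonardo's actual values.
source /path/to/spack/share/spack/setup-env.sh  # make the "spack" command available
spack env activate cineca-beta                  # activate a named Spack environment (placeholder name)
spack find                                      # list the packages the environment provides
module avail                                    # list the modules it exposes
```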
Beta production environment
The production environment will be based on the Slurm scheduler, which is already in place on the cluster but in a very preliminary configuration.
- The only available partition is "prod" (#SBATCH --partition=prod). Please refer to the general online guide to Slurm and to task/thread binding, and pay attention to setting SRUN_CPUS_PER_TASK for hybrid applications dispatched with "srun". In this preliminary configuration, please explicitly request the correct PMIx plugin when launching your parallel applications with "srun":

    srun --mpi=pmix_v3 <options> <exe>

  No MPI settings are needed if you launch with "mpirun".
- The GPUs are not yet defined as Generic RESources (GRES), so all 4 GPUs of a node will be available in a job. Do not ask for gres=gpu:X (or the analogous --gpus-per-node) in your script; instead, take the node in exclusive mode with the #SBATCH --exclusive directive.
- The #SBATCH --exclusive directive is also recommended to avoid issues with the job's $TMPDIR.
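Putting the points above together, a minimal job script might look like the following. The resource values, account, and executable name are placeholders, assuming a hybrid MPI+OpenMP application running one task per GPU; adapt them to your own case:

```shell
#!/bin/bash
#SBATCH --partition=prod        # the only partition available in this beta phase
#SBATCH --exclusive             # take the whole node: all 4 GPUs, and a clean $TMPDIR
#SBATCH --nodes=1               # placeholder values; adapt to your application
#SBATCH --ntasks-per-node=4    # e.g. one MPI task per GPU
#SBATCH --cpus-per-task=8
#SBATCH --time=00:30:00
#SBATCH --account=<your_account>   # placeholder
# Note: no --gres=gpu:X or --gpus-per-node here (GPUs are not defined as GRES yet)

# For hybrid applications dispatched with srun, set SRUN_CPUS_PER_TASK explicitly
# so that srun binds the right number of CPUs to each task
export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# explicitly request the PMIx plugin when launching with srun
srun --mpi=pmix_v3 ./my_app    # placeholder executable; no MPI settings needed with mpirun
```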