...

Each node exposes itself to SLURM as having 32 cores, 4 GPUs and 230 GB of memory. SLURM assigns nodes in a shared way, giving each job only the resources it requests and allowing multiple jobs to run on the same node(s). If you want the node(s) in exclusive mode, ask for all the resources of the node (hence, either ncpus=32, or ngpus=4, or all the memory, mem=230000).
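As a sketch of the "ask for all the resources" approach described above (directive spellings follow standard SLURM; the account placeholder is illustrative), the relevant directives might look like:

```shell
#!/bin/bash
# Obtain a node in exclusive mode by requesting all of its resources:
# 32 cores, 4 GPUs and all of the requestable memory.
#SBATCH -N 1
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:4
#SBATCH --mem=230000
#SBATCH -A <account_name>
#SBATCH -p m100_usr_prod
```

Standard SLURM also provides an `--exclusive` flag for the same purpose, though the text above describes the all-resources approach.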


The maximum memory which can be requested is 230000MB, and this value guarantees that no memory swapping will occur.


Average memory available per core: ~7.1 GB
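The per-core figure follows from the node totals above; a quick check (pure arithmetic, no cluster needed):

```python
# Requestable node memory divided evenly across the node's cores.
node_mem_mb = 230000   # maximum requestable memory per node, in MB
cores = 32             # cores per node

per_core_mb = node_mem_mb / cores
print(per_core_mb)     # 7187.5 MB per core, i.e. roughly 7.1 GB
```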


For example, to request a single node in a production queue, the following SLURM job script can be used:

#!/bin/bash
#SBATCH -N 1
#SBATCH -A <account_name>
#SBATCH --mem=230000
#SBATCH -p m100_usr_prod
#SBATCH --time 00:05:00
#SBATCH --job-name=my_batch_job
#SBATCH --mail-type=ALL
#SBATCH --mail-user=<user_email>

...
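Once saved (e.g. as job.sh, a filename chosen here purely for illustration), the script is handled with the standard SLURM commands:

```shell
sbatch job.sh       # submit the batch script; prints the assigned job ID
squeue -u $USER     # check the state of your jobs in the queue
scancel <job_id>    # cancel a job if needed
```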