...

Each node exposes itself to SLURM as having 32 cores, 4 GPUs and 246000 MB of memory. SLURM assigns nodes in a shared way, allotting to each job only the resources it requests and allowing multiple jobs to run on the same node(s). If you want the node(s) in exclusive mode, request all the resources of the node (either --ntasks-per-node=32 or --mem=246000).

The maximum memory that can be requested is 246000 MB (average memory per physical core ~7 GB); this value guarantees that no memory swapping will occur.
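For example, the following directives sketch an exclusive-mode request by asking for all of a node's resources (<account_name> is a placeholder and the walltime is only illustrative):

#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=32   # all 32 cores of the node
#SBATCH --gres=gpu:4           # all 4 GPUs of the node
#SBATCH --mem=246000           # all the requestable memory (MB)
#SBATCH -A <account_name>
#SBATCH -p m100_usr_prod
#SBATCH --time 01:00:00        # format: HH:MM:SS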

...

#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1 # this refers to the number of requested gpus per node, and can vary between 1 and 4
#SBATCH -A <account_name>
#SBATCH --mem=7100 # this refers to the requested memory per node, with a maximum of 246000
#SBATCH -p m100_usr_prod
#SBATCH --time 00:10:00 # format: HH:MM:SS
#SBATCH --job-name=my_batch_job
#SBATCH --mail-type=ALL
#SBATCH --mail-user=<user_email>

...
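Assuming the directives above are saved in a script file (the file name below is only an example), the job can be submitted and monitored with the standard SLURM commands:

sbatch my_batch_job.sh    # submit the batch job
squeue -u $USER           # check the status of your jobs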