...

--cpus-per-task=floor(n° physical cpus per node / n° tasks per node) * n° hyper-threads per physical cpu
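
For example, on a hypothetical node with 48 physical cpus, 2 hyper-threads per physical cpu, and 16 tasks per node, the formula gives floor(48 / 16) * 2 = 6:

--cpus-per-task=6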

...

2) On "hyperthreading nodes" if you require for a single node more tasks of available physical cpus  you have to specify the following SLURM option:

--cpu-bind=threads

in order to ensure the binding of MPI tasks to logical cpus and to avoid overloading the physical cpus.
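
As an illustration, here is a minimal job script sketch, assuming a hypothetical hyperthreading node with 48 physical cpus (96 logical cpus); the program name is a placeholder:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=96      # more tasks than the 48 physical cpus

# bind each MPI task to a logical cpu (thread), so tasks do not
# pile up on the physical cpus
srun --cpu-bind=threads ./my_mpi_program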


Alternatively, you can request more cpus for each single task, up to using all the logical cpus of the node:

--cpus-per-task=<n° logical cpus per node / n° tasks per node>
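
Under the same hypothetical assumptions (96 logical cpus per node, 16 tasks per node), this gives 96 / 16 = 6:

--cpus-per-task=6

which matches the result of the floor formula above.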

...

On nodes without hyperthreading, the cpu concept coincides with the physical cpu (core); consequently, the n° of cpus for a single task (--cpus-per-task) can be up to the maximum number of physical cpus of the node. For example, on BDW and SKL nodes the n° of cpus for a single task can be up to 36 and 48 cpus, respectively.


On hyperthreading nodes, the cpu concept coincides with the logical cpu (thread); consequently, the n° of cpus for a single task can be up to the maximum number of logical cpus of the node.
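
As a sketch of the two cases, a single-task OpenMP job on a SKL node (48 physical cpus, no hyperthreading) versus a hypothetical hyperthreading node with 96 logical cpus could request:

# node without hyperthreading (e.g. SKL): up to the 48 physical cpus
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=48

# hypothetical hyperthreading node: up to the 96 logical cpus
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=96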

In order to define whether the OpenMP threads have to bind to physical cpus (cores) or logical cpus (threads), you can use the following environment variable:

export OMP_PLACES=<cores|threads>
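
For instance, a minimal hybrid MPI/OpenMP sketch, again assuming a hypothetical hyperthreading node with 96 logical cpus (the program name and the task/thread counts are illustrative):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8      # 8 MPI tasks per node
#SBATCH --cpus-per-task=12       # 12 logical cpus per task (8 * 12 = 96)

export OMP_NUM_THREADS=12        # one OpenMP thread per logical cpu of the task
export OMP_PLACES=threads        # bind each OpenMP thread to a logical cpu (thread)

srun ./my_hybrid_program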

...

export OMP_PLACES=cores

...

export OMP_PLACES=threads

...

You can find some MPI and MPI/OpenMP job script examples at the following web page:

UG2.56.1: Batch Scheduler SLURM