...

As a result of this configuration, each requested task will be allocated a physical core with all its 8 threads.
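As a sketch of what this means in practice, the job below asks for 4 tasks, each of which should receive one physical core with its 8 hardware threads. The account string is a placeholder and the partition name is assumed from the rest of this page:

```shell
#!/bin/bash
# Hedged example: 4 tasks, each bound to one physical core (8 threads).
#SBATCH --partition=m100_usr_prod   # assumed partition name
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=8           # the 8 hardware threads of one physical core
#SBATCH --time=00:10:00
#SBATCH --account=<your_account>    # placeholder

srun ./my_application
```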

...

The m100_all_serial partition is available with a maximum walltime of 4 hours, 6 tasks and 18000 MB per job. It runs on two dedicated nodes, and it is designed for serial pre/post-processing analyses and for moving your data (via rsync, scp, etc.) when more than 10 minutes are required to complete the transfer. This is the default partition, assumed by SLURM if you do not explicitly request a partition with the flag "--partition" or "-p". You can however explicitly request it in your batch script with the directive:
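A minimal data-transfer job on this partition could look as follows; the source and destination paths are placeholders, and the memory and walltime values simply restate the partition limits quoted above:

```shell
#!/bin/bash
# Hedged sketch of a serial data-moving job on the default partition.
#SBATCH --partition=m100_all_serial
#SBATCH --ntasks=1
#SBATCH --mem=18000          # partition limit quoted above (MB)
#SBATCH --time=04:00:00      # partition walltime limit

# Placeholder paths: replace with your actual source and destination.
rsync -av "$HOME/my_results/" "/archive/my_project/results/"
```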

...

  • m100_fua_prod and m100_fua_dbg are reserved to EuroFusion users, for production and debugging respectively
  • m100_usr_prod and m100_usr_dbg are open to academic production.

Each node exposes itself to SLURM as having 32 cores, 4 GPUs and 230000 MB of memory. SLURM assigns nodes in a shared way, granting each job only the resources it requests and allowing multiple jobs to run on the same node(s). If you want the node(s) in exclusive mode, ask for all the resources of the node (either ntasks-per-node=32 or mem=230000).
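In directive form, either of the two options stated above should make the job occupy a node exclusively, since no other job can fit alongside it:

```shell
# Option 1: request all 32 cores of the node
#SBATCH --ntasks-per-node=32

# Option 2: request all the memory of the node (MB)
#SBATCH --mem=230000
```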

The maximum memory which can be requested is 230000 MB (a mean of ~7 GB available per physical core); this value guarantees that no memory swapping will occur.
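The ~7 GB figure follows directly from the numbers above: 230000 MB spread over the 32 physical cores of a node. A quick check:

```shell
# Per-core memory implied by the node limits quoted above:
# 230000 MB per node, 32 physical cores per node.
echo "MB per core: $((230000 / 32))"
awk 'BEGIN { printf "GB per core: %.1f\n", 230000 / 32 / 1024 }'
```

This prints 7187 MB per core, i.e. about 7.0 GB, matching the text.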

...