...

The following summarizes the main features and limits imposed on the queues of the shared A1 and A2 partitions. For the Marconi-FUSION dedicated queues, please refer to the dedicated document.

For each queue: partition, # cores per job, max walltime, max running jobs / max n. of cpus per user, max memory per job, priority, HBM/clustering mode, and notes.

debug (partition A1, routing queue: route)
  # cores per job: min = 1, max = 144
  max walltime: 02:00:00
  max running jobs / cpus per user: 4 / 144
  max memory per job: 123 GB/node (suggested value: 118 GB/node)
  priority: 40
  notes: managed by route; runs on 24 nodes shared with the visualrcm queue

prod (partition A1, routing queue: route)
  # cores per job: min = 1, max = 2304
  max walltime: 24:00:00
  max running jobs / cpus per user: 20 / 2304
  max memory per job: 123 GB/node (suggested value: 118 GB/node)
  priority: 50
  notes: managed by route

bigprod (partition A1, routing queue: route)
  # cores per job: min = 2305, max = 6000
  max walltime: 24:00:00
  max running jobs / cpus per user: 1 / 6000
  max memory per job: 123 GB/node (suggested value: 118 GB/node)
  priority: 60
  notes: managed by route

special (partition A1, queue: special)
  # cores per job: min = 1, max = 36
  max walltime: 180:00:00
  max memory per job: 123 GB/node (suggested value: 118 GB/node)
  priority: 100
  notes: ask superc@cineca.it; request with "#PBS -q special"

serial (partition A1, queue: serial)
  # cores per job: 1
  max walltime: 04:00:00
  max running jobs: 12 on this queue, max 4 jobs per user
  max memory per job: 1 GB
  priority: 30
  notes: request with "#PBS -q serial"

visualrcm (partition A1, queue: visualrcm)
  # cores per job: min = 1, max = 144
  max walltime: 03:00:00
  max running jobs / cpus per user: 1 / 144
  max memory per job: 123 GB/node (suggested value: 118 GB/node)
  priority: 40
  notes: runs on 24 nodes shared with the debug queue

knldebug (partition A2, routing queue: knlroute)
  # cores per job: min = 1, max = 136 (2 nodes)
  max walltime: 00:30:00
  max running jobs / cpus per user: 5 / 340
  max memory per job: 90 GB/node with mcdram=cache (suggested value: 86 GB/node)
  priority: 40
  HBM/clustering mode: mcdram=cache, numa=quadrant
  notes: managed by knlroute; runs on 144 dedicated nodes

knlprod (partition A2, routing queue: knlroute)
  # cores per job: min > 136, max = 68000 (1000 nodes)
  max walltime: 24:00:00
  max running jobs / cpus per user: 20 / 68000
  max memory per job: 90 GB/node with mcdram=cache (suggested value: 86 GB/node)
  priority: 50
  HBM/clustering mode: mcdram=cache, numa=quadrant
  notes: managed by knlroute

knltest (partition A2, queue: knltest)
  # cores per job: min = 1, max = 952 (14 nodes)
  max walltime: 24:00:00
  max running jobs / cpus per user: no limit
  max memory per job: 90 GB/node with mcdram=cache (suggested value: 86 GB/node);
                      105 GB/node with mcdram=flat (suggested value: 101 GB/node)
  priority: 30
  HBM/clustering mode: mcdram=<cache/flat>, numa=quadrant
  notes: ask superc@cineca.it; request with "#PBS -q knltest" and "#PBS -W group_list=<account_no>"
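As an illustration only, a minimal PBS batch script for a short A1 debug run might look like the sketch below. The resource request and executable name are assumptions, not site-mandated values: `my_program` is a placeholder, `<account_no>` must be replaced by your own account name, and whether you address the execution queue directly or go through the routing queue depends on the site configuration described above.

```shell
#!/bin/bash
# Illustrative sketch of a PBS job for the A1 debug queue.
# Placeholders: <account_no> (your account), ./my_program (your executable).
#PBS -q debug                         # debug queue (managed by the route queue)
#PBS -l select=1:ncpus=36:mem=118GB   # one node, within the suggested 118 GB/node
#PBS -l walltime=02:00:00             # debug queue maximum walltime
#PBS -W group_list=<account_no>       # account to be charged

cd $PBS_O_WORKDIR                     # run from the submission directory
./my_program
```

Staying at or below the "value suggested" memory (118 GB/node on A1) rather than the 123 GB/node hard limit leaves headroom for the operating system on each node.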

...