
...

Our HPC resources can be used on a "pay for use" basis.

Currently, the cost is based on elapsed time and the number of cores reserved (not used!) by the batch jobs. In general, most tools and applications from our Software Catalog can be used free of charge, even when the program is covered by a licence. Only in a few cases do you need to register and pay an additional fee in order to access special applications. All such information is reported in the description of the specific application: see application-software-science.
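As a minimal sketch of this "reserved, not used" principle (the actual rates, units, and accounting tables are cluster-specific and not given here, so the function name and numbers below are illustrative assumptions):

```python
def core_hours(elapsed_hours: float, reserved_cores: int) -> float:
    """Core-hours consumed by a batch job.

    Billing counts the cores *reserved* for the whole elapsed time,
    not the cores that were actually busy.
    """
    return elapsed_hours * reserved_cores

# A job reserving 48 cores for 2 hours consumes 96 core-hours,
# even if only a few of those cores did any work.
```

This is why requesting a whole node "just in case" is expensive: idle reserved cores are charged exactly like busy ones.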

In order to run a batch job, a user must log in to an HPC system with their username and password. The username must be associated with one or more active projects (Accounts) with available budgets.

Usernames and Accounts

In CINECA, the words "username" and "account" have different meanings.

...

The mapping between users and Accounts is done by the CINECA staff, who are in charge of creating new projects and associating a PI with each of them. The PI, in turn, can associate other users with a project as collaborators, via the UserDB page related to the project.

The "saldo" command

You can list all the Accounts attached to your username on the current cluster, together with their budgets and the consumed resources, with the command:

...

For more information run the "saldo" command without any option.

Billing policy

The time spent in interactive work is not considered by the billing procedures, meaning it is free of charge.

...

Please note that every cluster usually has a "serial" queue, defined on the front-end nodes, that accepts serial jobs with a short time limit (maximum 4 hours). Accounting is not enabled on these queues, so you can use them without being charged. As a consequence, serial queues can also be used when an account has expired or has exhausted its budget: this is useful, for example, for post-processing or data transfer.
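As a sketch of such a job (the partition name, the directives, and the `postprocess.sh` script are assumptions for illustration; check the queue names of your specific cluster):

```shell
#!/bin/bash
#SBATCH -p serial            # hypothetical serial-queue name; varies per cluster
#SBATCH -t 04:00:00          # serial queues enforce a short limit (max 4 hours)
#SBATCH -n 1                 # a single task: these queues are meant for serial work

# Accounting is not enabled on serial queues, so no budget is charged;
# typical uses are post-processing and data transfer.
./postprocess.sh             # hypothetical post-processing script
```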

"Non-exclusive" nodes: memory matters!

On some clusters (for example GALILEO or MARCONI100) you can choose to allocate only part of a node for your job; you are not forced to allocate all of it, as happens on clusters (like MARCONI) running in exclusive mode. In this case, the accounting procedure also takes into account the amount of memory you request for your job. If you ask for more memory than the share corresponding to the number of cores requested, the job will be billed for a larger number of cores than the ones you actually reserved.
The billing always follows the basic idea illustrated above, but a generalized parameter for the number of reserved cores, which accounts for the memory request, is used:

...

This rule applies to each cluster, based on its total amount of memory and cores.
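A minimal sketch of this generalized core count, assuming (as an illustration, not as the exact Cineca formula) that each core is entitled to an equal share of the node's memory, and that a larger memory request is converted into its equivalent number of cores:

```python
import math

def billed_cores(requested_cores: int, requested_mem_gb: float,
                 node_cores: int, node_mem_gb: float) -> int:
    """Generalized number of reserved cores on non-exclusive nodes.

    A memory request above the per-core share raises the billed core
    count to the equivalent node fraction.
    """
    # Equivalent cores implied by the memory request
    # (written as mem * cores / node_mem to avoid rounding surprises).
    mem_equivalent = math.ceil(requested_mem_gb * node_cores / node_mem_gb)
    return max(requested_cores, mem_equivalent)

# On a hypothetical node with 48 cores and 192 GB (4 GB per core),
# asking for 8 cores but 64 GB is billed as 16 cores.
```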

Accounting and accelerators

Recently, the accounting system has been extended to nodes equipped with accelerators. The principle is the same as for memory accounting: if the number of accelerators you request implies allocating a bigger portion of the node than the number of cores requested would suggest, the consumption increases accordingly.

...

  • cpus=24, gpus=1 ==> the GPU request is equivalent to having requested 18 CPUs, but since 24 CPUs have been requested in the standard way, the GPU equivalence is not taken into account. Thus 24 CPUs will be billed;
  • cpus=6, gpus=1 ==> the GPU request is equivalent to having requested 18 CPUs, which is higher than the number of CPUs requested. Thus 18 CPUs will be billed;
  • cpus=24, gpus=2 ==> the GPU request is equivalent to having requested 36 CPUs, while the 24 CPUs requested in the standard way are not enough to cover the GPU request. Therefore 36 CPUs will be billed;
  • cpus=24, gpus=1, mem=115GB ==> the situation is similar to the first example (so 24 CPUs would be billed), but the memory request exceeds what is guaranteed by the simple allocation of the CPUs or GPUs, since it is equivalent to allocating the entire node. So 36 CPUs will be billed.
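The four examples above can be reproduced with a small sketch. The node layout used here (36 cores, 2 GPUs, 115 GB of memory, so 1 GPU counts as 18 CPUs and a full-memory request as 36 CPUs) is inferred from the examples themselves and is an assumption, not an official node specification:

```python
import math

# Hypothetical node layout inferred from the examples above.
NODE_CORES = 36
NODE_GPUS = 2
NODE_MEM_GB = 115.0

def billed_cpus(cpus: int, gpus: int = 0, mem_gb: float = 0.0) -> int:
    """Billed CPUs = the largest node fraction implied by the CPU, GPU,
    and memory requests, each expressed in equivalent CPUs."""
    gpu_equiv = gpus * (NODE_CORES // NODE_GPUS)            # 1 GPU ≡ 18 CPUs
    mem_equiv = math.ceil(mem_gb * NODE_CORES / NODE_MEM_GB)
    return max(cpus, gpu_equiv, mem_equiv)
```

For instance, `billed_cpus(6, gpus=1)` yields 18, because the GPU equivalence dominates the explicit CPU request.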

Low priority production jobs for active projects with exhausted budget

Non-expired projects with exhausted budgets may be allowed to keep using the computational resources, at the cost of minimal priority. Write to superc@cineca.it motivating your request; in case of a positive evaluation, you will be enabled to use the qos_lowprio QOS:

...

  #SBATCH -A <account>               # your non-expired account with exhausted budget

Budget linearization

A linearization policy for the usage of project budgets is active on all clusters at Cineca. For each account, a monthly quota is defined as (total_budget / total_no_of_months). Starting from the first day of each month, the collaborators of an account can use the monthly quota at full priority. As the quota is consumed, the jobs submitted from the account gradually lose priority, until the monthly quota is fully exhausted. At that point their jobs are still considered for execution (so it is possible to consume more than the monthly quota), but with a lower priority than jobs from accounts that still have some quota left.
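A sketch of the quota definition and of the resulting priority behaviour (the quota formula is the one given above; the two-tier classification is a simplification of the gradual priority decrease, not Cineca's actual scheduler parameters):

```python
def monthly_quota(total_budget: float, total_no_of_months: int) -> float:
    """Monthly quota as defined by the linearization policy:
    total_budget / total_no_of_months."""
    return total_budget / total_no_of_months

def priority_tier(consumed_this_month: float, quota: float) -> str:
    """Illustrative two-tier view of the policy: within the monthly
    quota jobs run at full priority; beyond it they still run, but
    at lower priority than jobs from accounts with quota left."""
    if consumed_this_month < quota:
        return "full"
    return "reduced"

# A 120,000 core-hour project over 12 months has a 10,000 core-hour
# monthly quota; after consuming it, jobs keep running at lower priority.
```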

...