(updated: July 2023)
What follows is a quick start guide for both users new to HPC systems and expert users who would like to use our systems.
It describes all the steps required to access our systems, up to the first job submission.
This is a schematic guide with a few examples; it is not intended to be complete. We strongly recommend reading the full documentation, which can be reached via the links provided throughout the text.
The first step is to get a username on our database and a password to enter our HPC clusters.
This step alone does not grant you access to our clusters: you also need to be associated with an account that has budget resources ("cpu-hours") to be consumed on the clusters.
There are multiple ways to get an account budget (see also UG2.2 Become a User):
Once your username has been associated with an active account, you can request access to our HPC clusters by clicking the "Submit" button on your HPC Access page (the button appears only after you have been associated with an account). After we have granted you access, you will receive two emails: one with your username, and another with a link to the page where you can configure the two-factor authentication (2FA) needed to access our clusters. The "How to activate the 2FA and configure the OTP" page provides dedicated instructions on how to configure the 2FA. You will also have to install a smallstep client on your personal PC ("How to install the smallstep client") to download the temporary certificate needed for login. During the 2FA configuration you will also be asked to set a password; our policy on password definition can be found on the dedicated page.
Important: The link for the configuration of the 2FA is valid for only 24 hours. After it expires, you need to write to superc@cineca.it to get a new valid link.
Remember: Login credentials are strictly personal, meaning that NO SHARING between members of the same working group is expected to happen. Every user entitled with login credentials is personally responsible for any misuse that may take place.
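As a rough sketch, assuming the smallstep client is already installed and the 2FA setup is complete, requesting the temporary certificate could look like the following. The identity and provisioner values below are placeholders, not real values; use the ones given in the smallstep configuration guide:

```shell
# Hypothetical sketch: request a temporary SSH certificate via the
# smallstep client. '<user-email>' and '<provisioner-name>' are
# placeholders to be replaced with the values from the dedicated guide.
step ssh login '<user-email>' --provisioner '<provisioner-name>'

# The certificate is temporary: once it expires, repeat the command above.
```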
Once you have configured the 2FA, you can log in to the cluster on which your budget account is active, by:
> ssh <username>@login.<cluster>.cineca.it (Windows users can find the instructions here)
Available clusters are Galileo100, Marconi, and Leonardo (UG3.0 System specific guides). Important: You can log in only to clusters on which you have an active budget.
On the cluster, you can keep an eye on your budget of hours using the command:
> saldo -b
This lists all the accounts associated with your username on the current cluster, together with the "budget" and the consumed resources. One username can use multiple accounts, and one single account can be used by multiple usernames (even on multiple platforms), all competing for the same budget.
Some software, environments, and compilers are already installed on the clusters and are available as modules (UG2.6 Production Environment). The command:
> modmap
shows all the profiles, categories, and modules that you can load. To load a module, type:
> module load <module_name>
If the module belongs to a specific profile, you have to load the profile before the module:
> module load profile/<profile_name>
There are many useful options for the "modmap" and "module" commands, described here. You can also install your own libraries and programs in your local folder, either by yourself or using a Python environment or the Spack manager. Please write to superc@cineca.it for any questions or requests about modules and software installations.
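A typical module session might look like the following sketch. The profile and module names here (profile/chem-phys, gromacs) are illustrative placeholders; the profiles and modules actually available differ per cluster, so check the output of modmap first:

```shell
# Illustrative session; profile and module names are placeholder examples.
modmap                          # list available profiles, categories, modules
module load profile/chem-phys   # load the profile that contains the module
module load gromacs             # then load the module itself
module list                     # show the currently loaded modules
```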
Our HPC systems offer several options for data storage:
Important details and suggestions on how to use each space can be found on the "UG2.5 Data Storage and Filesystem" page. To monitor the occupancy of your space, you can use the "cindata" command.
Once you have your compiled program and have prepared its input data, the last step is to run it on the cluster's compute nodes. Important: the node you are logged in to is a login node and cannot be used to execute parallel programs. For longer runs, you need to use "batch" mode. On CINECA machines we use the SLURM scheduler.
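As a minimal sketch, a SLURM batch script could look like the following. The account name, partition, module, and executable are placeholders you must replace with your own values (valid partition names are listed in the cluster-specific guides, and your account name is shown by "saldo -b"):

```shell
#!/bin/bash
#SBATCH --job-name=first_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:30:00            # walltime limit (hh:mm:ss)
#SBATCH --account=<account_name>   # your budget account (see 'saldo -b')
#SBATCH --partition=<partition>    # cluster-specific partition name
#SBATCH --output=job.out
#SBATCH --error=job.err

# Load the software your program needs (placeholder module name)
module load <module_name>

# Run the parallel program on the allocated resources
srun ./my_program
```

Save it, for example, as job.sh, submit it with "sbatch job.sh", and monitor it with "squeue -u $USER"; both are standard SLURM commands.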
Congratulations! You have successfully executed your first job on our clusters. For any problem or question, please refer to our Help Desk by writing to superc@cineca.it.