For information about data transfer from other computers, please follow the instructions and caveats in the dedicated section Data storage or in the document Data Management.
Disks and Filesystems
The storage organization conforms to the CINECA infrastructure (see the section "Data storage and Filesystems"). In addition to the home directory ($HOME), a scratch area, $CINECA_SCRATCH, is defined for each user: a large disk for storing run-time data and files. The new variable $SCRATCH is also available and resolves to the same path as $CINECA_SCRATCH. A $WORK area is defined for each active project on the system and is reserved for all the collaborators of the project. This is a safe storage area for keeping run-time data for the whole life of the project.
The filesystem organization is based on the Lustre open-source parallel file system.
|Area|Total Dimension|Quota|
|---|---|---|
|$HOME|100 TB|50 GB per user|
|$SCRATCH (on G100)| | |
|$WORK|2 PB|1 TB per project|
A temporary storage area, local to the compute nodes, is also available; it is created when the job starts and is accessible via the environment variable $TMPDIR. For more details, please see the dedicated section of UG2.5: Data storage and FileSystems. On Galileo100 the $TMPDIR local area has 293 GB of available space.
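As a sketch (assuming the SLURM scheduler used on this system; the job options, paths and the application name my_app are hypothetical), a batch job can stage its input into $TMPDIR and copy the results back before it ends, since the local area is removed when the job finishes:

```shell
#!/bin/bash
#SBATCH --job-name=tmpdir-example   # hypothetical minimal job: add account/partition options as required
#SBATCH --time=00:30:00

cp "$WORK/input.dat" "$TMPDIR/"     # stage the input onto the fast node-local area
cd "$TMPDIR"
"$WORK/my_app" input.dat            # hypothetical application producing results.dat
cp results.dat "$WORK/"             # copy results back: $TMPDIR is deleted when the job ends
```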
$DRES points to the shared repository where Data RESources are maintained. This is a data archive area available only on-request, shared with all CINECA HPC systems and among different projects.
$DRES is not mounted on the compute nodes. This means that you cannot access it within a batch job: all data needed during the batch execution has to be moved to $WORK or $CINECA_SCRATCH before the run starts.
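For example (the dataset name is illustrative), the staging can be done with rsync on a login node, where $DRES is mounted:

```shell
# Run on a login node before submitting the job; "mydataset" is a hypothetical directory
rsync -av "$DRES/mydataset/" "$WORK/mydataset/"
```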
Use the local command "cindata" to query disk usage and quota ("cindata -h" for help),
or the tool "cinQuota", available in the module cintools.
For more details about both these commands, please consult the section dedicated to monitoring disk occupancy.
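A minimal sketch of how the two commands are invoked (the output format is not shown here, and anything beyond "cindata -h" is an assumption):

```shell
cindata               # report disk usage and quota on your storage areas
cindata -h            # list the available options
module load cintools  # make cinQuota available
cinQuota              # report your quota occupancy
```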
Dedicated node for Data transfer and download
A time limit of 10 CPU-minutes has been set for processes running on the login nodes.
For data transfers or downloads that may require more time, we have set up a dedicated "data" VM, accessible via a dedicated alias.
Login via ssh to this VM is not allowed. Environment variables such as $HOME or $WORK are not defined there, so you always have to specify the complete path of the files you need to copy.
For example, to copy data to Galileo100 using rsync, you can run the following command:
rsync -PravzHS </data_path_from/file> <your_username>@data.g100.cineca.it:<complete_data_path_to>
You can also use the "data" VM from the login nodes to move data from Galileo100 to another location with a public IP:
ssh -xt data.g100.cineca.it rsync -PravzHS <complete_data_path_from/file> </data_path_to>
This command opens a session on the VM that will not be closed until the rsync command completes.
If you prefer, you can use the scp and sftp commands in a similar way.
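For example, an scp transfer analogous to the rsync command above (same placeholders):

```shell
scp </data_path_from/file> <your_username>@data.g100.cineca.it:<complete_data_path_to>
```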
The data VM also offers the wget and curl commands. If you need to wget a large amount of data from a public site, you can run the following command:
ssh -xt data.g100.cineca.it wget <url/file> -P </data_path_to>
This command opens a session on the VM that will not be closed until the wget command completes.
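curl can be used the same way; as a sketch, the standard -o option writes the download to the chosen path (placeholders as above):

```shell
ssh -xt data.g100.cineca.it curl -o </data_path_to/file> <url/file>
```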
Modules environment
The software modules are collected in different profiles and organized by functional category (compilers, libraries, tools, applications, ..).
On GALILEO100 the profiles are of two types: “domain” profiles (bioinf, chem-phys, lifesc, ..) for production activity, and “programming” profiles (base and advanced) for compilation, debugging and profiling; profiles of the two types can be loaded together.
The "base" profile is the default: it is automatically loaded after login and contains the basic modules for programming activities (Intel and GNU compilers, math libraries, profiling and debugging tools, ..).
If you want to use a module placed under another profile, for example an application module, you first have to load the corresponding profile:
>module load profile/<profile name>
>module load autoload <module name>
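For example, to load a hypothetical application module my_app from the chem-phys profile mentioned above:

```shell
module load profile/chem-phys
module load autoload my_app   # "my_app" is a placeholder module name
```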
To list all the profiles you have loaded, you can use the following command:
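Assuming the standard module command of the Environment Modules system, the loaded profiles appear as profile/&lt;name&gt; entries among your loaded modules:

```shell
module list   # loaded modules, including the currently loaded profile/<name> entries
```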
To see all the profiles, categories and modules available on GALILEO100, the command “modmap” is available:
With modmap you can see if the desired module is available and which profile you have to load to use it.
>modmap -m <module name>
Spack environment - will be available soon
If you don't find the software you are interested in, you can install it yourself. In this case, on GALILEO100 we also offer the possibility to use the “spack” environment, by loading the corresponding module. Please refer to the dedicated section in UG2.6: Production Environment.