
early availability: 20/09/2021

start of pre-production: 

start of production: 




Model: 
Architecture: 
Nodes: 77 OpenStack nodes
Processors: 2 x Intel Cascade Lake 8260 CPUs
Cores: 24 cores/node, 2.4 GHz
RAM: 768 GB DDR4
Internal Network: Ethernet 100 GbE


System Architecture

The HPC cloud infrastructure, named ADA cloud, is based on OpenStack Wallaby.

It provides:

  • 77 interactive OpenStack nodes, each with 2 x Intel Cascade Lake 8260 CPUs (24 cores each, 2.4 GHz), 768 GB RAM and 2 TB SSD storage.
  • 1 PB of Ceph storage for the cloud (full NVMe/SSD)

This cloud infrastructure is tightly connected both to the Lustre storage, with 20 PB of raw capacity, and to the GSS storage of 6 PB, which is visible from all the other infrastructures. This setup enables the use of all available HPC systems (Tier-0 Marconi, Tier-1 Galileo100), addressing HPC workloads in conjunction with cloud resources.

Cloud Model

From the user's perspective, ADA cloud can be seen as both a public cloud and a community cloud, with a federation of European data centres providing features targeting specific scientific communities (e.g. the Human Brain Project flagship). The ADA cloud HPC infrastructure is a resource that CINECA already adopts in several internal projects and services. The deployment model is well represented by the picture below.

The ADA cloud HPC infrastructure integrates and completes the HPC ecosystem, providing a tightly integrated infrastructure that covers both high-performance and highly flexible computing. We expect the flexibility of the cloud to better adapt to the diversity of user workloads, while still providing high-end computing power. If the need for high-performance computing increases, or scales beyond the ADA cloud provision, the other world-class HPC systems (MARCONI, MARCONI100, GALILEO100) can be integrated into the workflow to cover all computing needs. For example, data can be stored on areas ($DRES) that are visible from all HPC systems.

Service model

The ADA cloud HPC infrastructure provides users with Infrastructure as a Service (IaaS). Along with all the advantages in terms of flexibility, there is an increased responsibility shifted from CINECA staff to users. A clear separation of roles in using the service is represented in the scheme below. This has to be understood by all actors accessing the service, even though we can provide assistance and share our expertise to help you set up your application workflow.



There are clear benefits in using a CLOUD infrastructure with access to Virtual Machines (VMs) with respect to our traditional HPC resources. These benefits are summarized in the table below:


|  | HPC | CLOUD |
| --- | --- | --- |
| Performance | Targets the highest possible | Depends on the workload, but generally virtualization has a small impact |
| User access | CINECA staff authorization | Once a project is granted, it is managed by the user |
| Operating system | Chosen by CINECA staff given the HW constraints. Security updates are managed by CINECA. | Selected by the user. Security patches and updates are managed by the user. |
| Software stack | Mostly installed by CINECA staff. Users can install their own without "root" privilege. The environment is provided "as is". | The user is root on the VMs and can install all the required software stack. Users can modify the environment to suit their needs. |
| Snapshots of the environment | Cannot be done | Users can save snapshot images of the VMs |
| Running simulations | Users are provided with a job scheduler (SLURM) | Users can install a job scheduler or choose alternatives |


Flexible authentication model

A more flexible authentication method has been deployed in the CLOUD.HPC instance. It is based on OpenID Connect (https://openid.net/connect/) and decouples authentication (access with credentials) from authorization (application permissions after user access), as represented in the schema below.

The Identity Provider (IdP) can be internal (CINECA) or another trusted external service provider. This approach enables federated identity, with a central (proxy) IdP serving federated data centres, as in the ICEI-Fenix model (https://fenix-ri.eu/).


Roles and responsibilities  

In the context of cloud HPC resource provisioning, CINECA operates according to the following division of roles:


 


  • CINECA is responsible for administering the physical infrastructure and providing the virtualization layer (via OpenStack).

  • "User Admins" and "Users" are roles held by people external to CINECA staff (exceptions are made for internal services). User Admins can create VM instances and configure the resources via the dashboard; Users do not access the dashboard and are local to each VM instance (for example, those added via the adduser Linux command).

Any user ("User Admin" or "User") with administration privileges on IaaS resources (VMs) has the responsibility to maintain security (security patches and fixes) on those resources. In any case, from the project management perspective, CINECA will interact only with "User Admins" (User Admins are the users associated with the project in the CINECA resource provisioning portal, https://userdb.hpc.cineca.it).


How to create a virtual machine instance in your Project

In order to create your own virtual machine, you have to perform all of the following eight steps.


  1. Log in to the dashboard https://adacloud.hpc.cineca.it

    After subscribing, go to the OpenStack dashboard at https://adacloud.hpc.cineca.it, select "CINECA IdP" as the authentication method, then click on "cinsdai-idp.hpc.cineca.it:8443/auth/realms/CINECA_LDAP" and finally enter your HPC-CINECA credentials to log in.

    After logging in, your user name is displayed on the top right of the window, while the menu on the top left lists all the Projects you are associated with.


    Projects are organizational units in the cloud. Each user is a member of one or more projects. Within a Project, a user can create and manage instances, security groups, volumes, images, and more.

    From the Project tab, you can view and manage the resources assigned to a particular project, including instances, images and volumes. You can select one of the projects you are associated with from the menu on the top-left side of the window.
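Besides the dashboard, a project can usually also be managed with the OpenStack command-line client (python-openstackclient). The sketch below shows the kind of environment the client expects; all values are illustrative placeholders — copy the real ones from the "OpenStack RC File" available under "API Access" in the dashboard:

```shell
# Illustrative OpenStack CLI environment. Every value below is a
# placeholder, NOT the real ADA cloud configuration: take the actual
# settings from the dashboard's "API Access" > "OpenStack RC File".
export OS_AUTH_URL="https://adacloud.hpc.cineca.it:5000/v3"  # assumed Keystone endpoint
export OS_PROJECT_NAME="my_project"                          # one of your Projects
export OS_USERNAME="my_hpc_username"
export OS_IDENTITY_API_VERSION=3

# With these set (plus your credentials), commands such as
#   openstack server list
# operate on the selected Project.
```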

  2. Check and configure the Internal Network in the Project

    In order to build and use a virtual machine within a specific Project, the presence of an internal network, subnet and router is mandatory.

    Select the Project of interest and check the presence of these components by clicking on Project → Network → Network Topology.

    If only the "external network" is present, you must create a network, a subnet and a router. Please follow the instructions below:

    • Create a private network and subnet. 

    Click on: Project -> Network -> Network Topology -> Create Network.  

    Then set:

    • Tab Network:

    Network name: <the name you want>

    Enable Admin State: check

    Create Subnet: check

    Availability Zone Hints: set "nova"

    MTU: leave it blank. The default is 1450.

    • Tab subnet:

    Subnet name: <the name you want> 

    Network Address (e.g. 192.168.0.0/24)

    IP Version (IPv4)

    Gateway IP (e.g. the last usable address, 192.168.0.254, for subnet 192.168.0.0/24)

    Disable Gateway: leave unchecked

    • Tab Subnet Details:

    Enable DHCP: enabled, check

    Allocation Pools: leave blank

    Host Routes: leave blank

    Finally, click on "create"


    • Create a private router and set the gateway. 

    Click on: Project -> Network -> Routers -> Create Router.

    Then set:

    Router name: <the name you want>

    Enable Admin State: check

    External Network: select "externalNetwork"

    Availability Zone Hints: leave "nova"

    Finally, click on "create router".

    Now, select the router just created and click on "Interfaces" and then on "Add interface"

    subnet: select the subnet just created

    IP Address: enter THE SAME IP address as the gateway (in this example, 192.168.0.254)

    Finally, click on "Submit".

    Verify that the Status of router is “ACTIVE” and the Admin state is “UP”.
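As a sanity check on the addressing used above, the number of usable host addresses in an IPv4 subnet follows directly from the prefix length (a quick sketch; for the 192.168.0.0/24 example, .0 is the network address and .255 the broadcast, leaving .1 through .254 for hosts):

```shell
# Usable host addresses in an IPv4 subnet:
# 2^(32 - prefix) minus the network and broadcast addresses.
PREFIX=24
HOSTS=$(( (1 << (32 - PREFIX)) - 2 ))
echo "/$PREFIX -> $HOSTS usable addresses"   # /24 -> 254 usable addresses
```

This is why 192.168.0.254 is the last address that can serve as the gateway in a /24 subnet.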

  3. Set up a key pair

    Key pairs are used to access virtual machines when:

    1. the instance is launched using a default cloud image (e.g. CentOS or Ubuntu)
    2. the virtual machine is configured for SSH key login

    You can set up a keypair in two ways. From "Project →  Compute →  Key Pairs" menu, you can:

    • click on "Create Key Pair" to obtain a new key pair. The possible types are SSH key or X.509.
    • click on "Import Public Key" to import your own key pair.

    Remember to change the permissions of the key file to 600 in order to avoid errors when you use it to log in to your virtual machine.
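If you prefer to import an existing key pair instead of generating one in the dashboard, the commands below sketch how to create an SSH key pair locally; the file name MyKey.pem and the working directory are illustrative:

```shell
# Generate an RSA key pair locally. The public part (MyKey.pem.pub) can
# be pasted into "Import Public Key"; the private part stays on your
# machine and is never uploaded.
KEYDIR="$(mktemp -d)"   # illustrative; in practice use a stable location such as ~/.ssh
ssh-keygen -q -t rsa -b 4096 -N "" -f "$KEYDIR/MyKey.pem"

# Restrict the private key to the owner (mode 600); ssh refuses to use
# keys that are readable by other users.
chmod 600 "$KEYDIR/MyKey.pem"
```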

  4. Set the security rules, which will act as the firewall of your virtual machine

    The firewall of the virtual machine must be defined using the OpenStack Security Groups and Security Rules.

    Inside the virtual machine, the firewall must be disabled.

    A security rule defines which traffic is allowed to instances assigned to the security group.

    A security group is a group of security rules that can be assigned to an instance.

    The security groups and security rules can be created by clicking on "Project → Network → Security Groups".

    Common default rules are: 

    • SSH (port 22)
    • ICMP (allows you to "ping" a server)
    • HTTP (port 80)
    • HTTPS (port 443)

    Note: It is always possible to modify, add and remove security groups in a virtual machine after its creation.

    • If you modify a security group, adding or removing rules, and the security group is already associated with a virtual machine, the changes take effect in real time.
    • If you want to add or remove a security group from a virtual machine, click on "Project → Compute → Instances", select the virtual machine and, from the menu on the right, click on "Edit Security Group". There, add or remove security groups for the instance.

  5. Launch a Linux virtual machine instance

    Once your key pair and your security group are defined, proceed to build the virtual machine.

    • Click on "Project →  Compute → Instances"
    • Click on "Launch instance" button
    • In the "Details" box, enter:
      • the instance name
      • the instance number (count)
    • In the "Source" box, enter:
      • the boot source for the instance. It can be an image, a bootable volume or a bootable volume snapshot.  
        • Images: we provide some default images (CentOS, Ubuntu, etc.). For these default images, a default user is set up who can log in to the virtual machine using a key pair and can execute commands as root. The password of the root user is embedded. If you want to use your personal image, you can create it in the cloud environment by clicking on "Project → Compute → Images", then "Create Image", and upload it.
        • Note: if you want to create a bootable volume from your instance, select "Yes" in "Create New Volume" and select the size of the volume.
    • In the "Flavor" box, select the flavor you want to use, according to the resources you have available.
        • NB: if you choose to create a volume for your instance, the root disk of the virtual machine will have the size of the volume, not the size set in the flavor.
    • In the "Networks" box, select the network internal to your project to which the virtual machine will be connected.
    • In the "Security Groups" box, select the security groups you want. Remember that you can always modify them after the virtual machine creation.
    • In the "Key Pair" box, select the key pair you want to use for ssh login.

  6. Follow the boot process

    The boot process can be followed on the Instances screen. Once the VM is in the ACTIVE state, you will be able to open the console and follow the boot process.

    To follow the installation, you can access the graphical console from the browser once the VM is in the BUILD state.

    The console is accessed by selecting the "Instance Details" for the machine and then clicking on the "Console" tab.


  7. Associate a Floating IP (FIP) to the virtual machine

    Where floating IPs are configured in a deployment, each project has a limited number of floating IPs, controlled by a quota. These need to be allocated to the project from the central pool before use.

    To allocate a floating IP to a project, click on "Project → Network → Floating IPs", then click on the "Allocate IP To Project" button on the right side of the dashboard page. Once allocated, a floating IP can be associated with running instances: just click on the "Associate" action on the right of the page. In the popup, select your virtual machine from the menu in "Port to be associated".
    The inverse action, "Dissociate Floating IP", is available from the "Instances" page.

  8. Login to the virtual machine using ssh

    After the association of a floating IP to your virtual machine, you can log in using the default user and key (if you have used a native default cloud image), or using another username (if you have used your personal image with a custom user defined in it). Supposing you have used the default Ubuntu cloud image, you can log in as:

    $ ssh -i MyKey.pem ubuntu@<floating IP address> 
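If you connect to the VM regularly, the connection details can be collected in an SSH client configuration entry instead of being retyped. A minimal sketch, assuming the MyKey.pem key and the default ubuntu user from above; the host alias "ada-vm" and the IP address 203.0.113.10 are illustrative placeholders:

```shell
# Write an SSH client configuration entry for the VM. A temporary file
# is used here for illustration; in practice, append the Host block to
# ~/.ssh/config instead.
SSH_CONF="$(mktemp)"
cat >> "$SSH_CONF" <<'EOF'
# "ada-vm" is an arbitrary alias; replace 203.0.113.10 with the
# floating IP associated in step 7.
Host ada-vm
    HostName 203.0.113.10
    User ubuntu
    IdentityFile ~/MyKey.pem
EOF
# With the entry in ~/.ssh/config, the login shortens to:  ssh ada-vm
# With a separate file, as here:                           ssh -F "$SSH_CONF" ada-vm
```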




