Introduction 

The OpenStack Octavia service enables the deployment of load-balancing solutions in OpenStack projects. We refer to Octavia's official documentation for a full description of its features.

The Octavia service is not available by default in ADA Cloud projects. If you want to use it, please request access by sending an email to superc@cineca.it.

It is possible to create load balancers through the ADA Cloud Horizon Dashboard, using the OpenStack Command Line Interface (CLI), or using infrastructure-as-code tools such as Ansible, Terraform, or OpenTofu.

In this guide, we illustrate the procedures based on (1) the ADA Cloud Horizon Dashboard and (2) the OpenStack CLI with the OpenStack Octavia plugin, for a simple use case: the deployment of a basic HTTP load balancer with an associated floating IP. For further details and additional use cases, including step-by-step examples, refer to the Octavia Basic Load Balancing Cookbook, which provides comprehensive instructions on applying the same procedures to various scenarios.

Load balancing

A load balancer functions as a traffic intermediary, directing network or application traffic to multiple server endpoints. It helps manage capacity during high traffic periods and enhances the reliability of applications. The main components of a load balancer are the following: 

  • Listener: The listener is a component that defines how incoming traffic is received. It listens for connection requests on a specific port and protocol (e.g., HTTP, HTTPS), and directs this traffic to the appropriate backend pool.
  • Pool: The pool is a collection of backend servers (also known as members) that receive and process the incoming traffic distributed by the load balancer. The pool determines the load balancing algorithm and health check policies to manage traffic distribution effectively.
  • Members: Members are the individual servers within a pool that handle the actual processing of the traffic. Each member represents a single endpoint (server) that performs the required tasks or services requested by the client.

A load balancer determines which server to send a request to based on a configured algorithm (e.g., Round Robin, Least Connections, Random). The choice among the load balancing algorithms depends on the requirements of the specific use case.

How to deploy a basic HTTP load balancer with an associated Floating-IP

In this section, we will walk you through the steps to create a setup where two instances running nginx servers are connected to an HTTP load balancer. The load balancer will use the round-robin algorithm to evenly distribute incoming HTTP traffic across the two servers. 

Prerequisites 

1. Before creating a load balancer, ensure that the following resources are available in your tenant:

  • 1 network and subnet.
  • 1 router.
  • Desired security groups for the VMs, including, at a minimum, ingress rules for HTTP (port 80) and SSH (port 22).
  • 2 instances. In our example, each VM hosts an nginx web server.
  • At least 2 floating IPs: one associated with one of the VMs and an additional one available to be associated with the load balancer. The internal IP of the second VM can be used to log in from the first VM if needed for configuration. Note that an additional floating IP could be directly associated with the second VM; however, this would use an additional (and not strictly necessary) resource, which we try to avoid.
  • A key pair. An SSH public key is needed to access the instances for their configuration.

If you need to create these resources, you can follow the ADA Cloud User Guide.
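If you already have the OpenStack CLI configured (see the environment requirements in the CLI procedure below), an optional way to verify that these resources exist is to list them from the command line; this is only a quick sketch and assumes your project credentials are loaded:

openstack network list
openstack router list
openstack server list
openstack floating ip list
openstack keypair list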

2. You can set up a very simple nginx web server in each VM by logging into each of them and running the following commands in the shell:

sudo apt-get update
sudo apt-get install -y nginx
echo "Hello! This is $(hostname)" | sudo tee /var/www/html/index.html

Procedure with ADA Cloud Horizon Dashboard

1. Create the load balancer by navigating to "Project → Network → Load Balancers", clicking on "Create Load Balancer", and setting the following information.

    1. Load Balancer Details:
      • Name.
      • Subnet. Select the desired subnet.
    2. Listener Details:
      • Name.
      • Protocol and Port. The protocol defines the type of network traffic the listener will handle, while the port specifies the network port on which the listener will accept incoming traffic. In our example we select protocol HTTP and port 80.
    3. Pool Details:
      • Name.
      • Algorithm. The algorithm determines how traffic is distributed across the members. We select ROUND_ROBIN.
    4. Pool Members:
      • Add members. Choose the desired members among those available. We add VM-1 and VM-2, the names of the VMs in our example. 
      • Port. For each VM, specify the port number on which the member will receive traffic. In our case, we expose the nginx server on port 80.
      • Weight. The weight of the member for load balancing purposes. The weight determines the relative portion of requests the member should handle compared to others. We use the default value.
    5. Monitor Details: Decide whether you'd like to create a Health Monitor. In this example, we will not make use of a monitor. 

Once all the details are provided, click on "Create Load Balancer".

2. Make sure that the load balancer has a floating IP associated with it. To associate a floating IP with your brand new load balancer, move to the "Project → Network → Load Balancers" section of the left-hand menu of the dashboard. Then, open the drop-down menu on the right side for the desired load balancer and click on "Associate Floating IP". Next, select the floating IP among those suggested in the drop-down menu "Floating IP address or Pool". Finally, click on "Associate". The floating IP associated with the load balancer appears in the overview of its characteristics; to see it, click on the name of your load balancer in the "Project → Network → Load Balancers" section of the dashboard.

3. Test your load balancer.

Use this floating IP to reach the nginx servers (see figure below). The traffic will be managed by the load balancer following the ROUND_ROBIN algorithm.
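For example, from any machine that can reach the floating IP and has curl installed, repeated requests should return the greeting of each VM in turn, reflecting the round-robin distribution. Here <floating_ip> stands for the address associated in the previous step:

curl http://<floating_ip>/
curl http://<floating_ip>/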

Procedure with OpenStack CLI

Environment requirements

1. You will need the OpenStack Client and the Octavia plugin.  

pip install python-openstackclient python-octaviaclient

2. Follow the instructions provided in the ADA Cloud User Guide on how to set up your cloud environment and the OpenStack CLI.
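As an optional check that the client, the Octavia plugin, and your credentials are all working, you can list the load balancers in your project (the list is simply empty if none exist yet):

openstack loadbalancer list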

Procedure

1. Request access to the service.

Users who want to make use of the Octavia service need to send an email to superc@cineca.it asking for their tenant to be enabled. Once the tenant is enabled by the User Support Team, all users of the tenant will be able to use the service.

2. Create a Load Balancer.

openstack loadbalancer create --name <loadbalancer_name> --vip-subnet-id <subnet_id>

The <subnet_id> can be found through the ADA Cloud Dashboard. On the main menu, select the Network → Networks tab. Then, click on your network and select the Subnets tab. Finally, click on the desired subnet. This information can also be gathered using the CLI.
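For example, the following command lists the subnets visible to your project together with their IDs; the ID column provides the value to use for <subnet_id>:

openstack subnet list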

Note: Wait until the creation is completed. It may take a while, and the next steps will return an error if the load balancer is not yet available.
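One way to monitor the progress from the CLI is to query the provisioning status and wait until it reports ACTIVE:

openstack loadbalancer show <loadbalancer_name> -c provisioning_status -f value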

3. Create a Listener.

openstack loadbalancer listener create --name <listener_name> --protocol HTTP --protocol-port 80 <loadbalancer_name>

4. Create a Pool.

openstack loadbalancer pool create --name <pool_name> --lb-algorithm <algorithm> --listener <listener_name> --protocol HTTP

In our example, <algorithm> is ROUND_ROBIN.

5. Add Members to the pool.

openstack loadbalancer member create --subnet-id <subnet_id> --address <ip_vm_1> --protocol-port 80 <pool_name>

openstack loadbalancer member create --subnet-id <subnet_id> --address <ip_vm_2> --protocol-port 80 <pool_name>

You can find the IPs of the VMs in the Compute → Instances section of the lateral menu of the ADA Cloud Dashboard. The IPs can also be gathered using the CLI.
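For example, the following commands list the instances with their internal IPs (used in the commands above) and show the members registered in the pool once they have been added:

openstack server list
openstack loadbalancer member list <pool_name>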

6. Associate an available floating IP with the load balancer.

openstack floating ip set --port <port_id> <floating_ip> 

You can find the <port_id> if you navigate to the description of your load balancer on the ADA Cloud Dashboard. Select the Network → Load Balancers section on the left-hand menu of the dashboard. Then, click on your load balancer and go to the Overview tab.  You can also use the CLI to identify the value of the <port_id>.
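With the CLI, for example, the VIP port of the load balancer and the floating IPs not yet associated with any port can be obtained as follows:

openstack loadbalancer show <loadbalancer_name> -c vip_port_id -f value
openstack floating ip list --status DOWN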

7. Test your load balancer.

Finally, use this floating IP to reach the nginx servers (see figure below). The traffic will be managed by the load balancer following the algorithm chosen when creating the pool (see Step 4).

Servers reached using the load balancer floating IP
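As a quick command-line check, a short loop of requests should show the responses alternating between the two servers when ROUND_ROBIN is used; this sketch assumes curl is available and that <floating_ip> is the address associated in Step 6:

for i in 1 2 3 4; do curl -s http://<floating_ip>/; done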

