Introduction
The Octavia service enables the deployment of load balancing solutions in OpenStack projects. Refer to Octavia's official documentation for a full description of its features. The Octavia service is not available by default in ADA Cloud projects, but it can be enabled on individual projects upon request.
Due to a known issue with OpenStack Horizon, it is currently not possible to create load balancers through the ADA Cloud Dashboard. Users can still deploy load balancing solutions using alternative procedures based on the OpenStack Command Line Interface (CLI) or infrastructure-as-code tools such as Ansible, Terraform, or OpenTofu.
In this guide, we illustrate the CLI-based procedure, using the OpenStack Octavia plugin, with a simple use case: the deployment of a basic HTTP load balancer with an associated Floating-IP. For further details and additional use cases, including step-by-step examples, refer to the Octavia Basic Load Balancing Cookbook, which provides comprehensive instructions on applying the same procedure to various scenarios.
Load balancing
A load balancer functions as a traffic intermediary, directing network or application traffic to multiple server endpoints. It helps manage capacity during high traffic periods and enhances the reliability of applications. The main components of a load balancer are the following:
- Listener: The listener is a component that defines how incoming traffic is received. It listens for connection requests on a specific port and protocol (e.g., HTTP, HTTPS), and directs this traffic to the appropriate backend pool.
- Pool: The pool is a collection of backend servers (also known as members) that receive and process the incoming traffic distributed by the load balancer. The pool determines the load balancing algorithm and health check policies to manage traffic distribution effectively.
- Members: Members are the individual servers within a pool that handle the actual processing of the traffic. Each member represents a single endpoint (server) that performs the required tasks or services requested by the client.
A load balancer determines which server to send a request to based on a chosen algorithm (e.g., Round Robin, Least Connections, Random). The choice among the load balancing algorithms depends on the requirements of the specific use case.
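As a minimal sketch of how round-robin selection works (this is an illustration with hypothetical backend names, not Octavia code), requests are dispatched to the backends in a fixed cyclic order:

```shell
# Round-robin sketch: each request goes to the next backend in the cycle.
# "server-a" and "server-b" are hypothetical backend names.
backends=("server-a" "server-b")
for request in 1 2 3 4; do
  index=$(( (request - 1) % ${#backends[@]} ))
  echo "request $request -> ${backends[$index]}"
done
```

With two backends, consecutive requests simply alternate between them, which is exactly the behavior we will verify at the end of this guide.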
How to deploy a basic HTTP load balancer with an associated Floating-IP
In this section, we will walk you through the steps to create a setup where two instances running nginx servers are connected to an HTTP load balancer. The load balancer will use the round-robin algorithm to evenly distribute incoming HTTP traffic across the two servers.
Environment requirements
You will need the OpenStack Client and the Octavia plugin:
pip install python-openstackclient python-octaviaclient
Follow the instructions provided in the ADA Cloud User Guide on how to set up your cloud environment and the OpenStack CLI.
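Once the client is installed and your cloud environment is configured (e.g., the openrc file is sourced), a quick way to verify that the Octavia plugin is available is to list the load balancers in your project; the list will simply be empty if none exist yet:

```shell
# Requires a configured OpenStack environment (e.g., a sourced openrc file).
openstack loadbalancer list
```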
Prerequisites
Before creating a load balancer, ensure that the following resources are available in your tenant:
- 1 network.
- 1 router.
- Desired security groups for the VMs.
- 2 instances. In our example, these VMs host an nginx web server each.
- At least 2 floating-IPs: one associated with one of the VMs and an additional one available to be associated with the load balancer. The internal IP of the second VM can be used to log in from the first VM if needed for configuration. Note that an additional floating-IP could be associated directly with the second VM; however, this would consume an additional (and not strictly necessary) resource, which we try to avoid.
If you need to create these resources, you can follow our user guides.
You can set up a very simple nginx web server on each VM by logging into it and running the following commands on the shell:
sudo apt-get update && \
sudo apt-get install -y nginx && \
echo "Hello! This is $(hostname)" | sudo tee /var/www/html/index.html
Procedure
1. Request to be enabled to the service
Users who wish to use the Octavia service must send an email to superc@cineca.it asking for their tenant to be enabled. Once the User Support Team enables the tenant, all of its users will be able to use the service.
2. Create a Load Balancer
openstack loadbalancer create --name <loadbalancer_name> --vip-subnet-id <subnet_id>
The <subnet_id> can be found through the ADA Cloud Dashboard. On the main menu, select the Network → Networks tab. Then, click on your network and select the Subnets tab. Finally, click on the desired subnet. This information can also be gathered using the CLI.
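If you prefer the CLI, the subnets of your project can be listed directly; the ID column of the desired subnet is the value to use as <subnet_id>:

```shell
# Lists the subnets of the current project with their IDs.
openstack subnet list
```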
Note: Wait until the creation is completed. It can take a while, and the next steps will return an error if the load balancer is not yet available.
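One way to monitor the creation from the CLI (a sketch, where <loadbalancer_name> is the name chosen above) is to query the provisioning status, which reports ACTIVE once the load balancer is ready:

```shell
# Prints only the provisioning status, e.g. PENDING_CREATE or ACTIVE.
openstack loadbalancer show <loadbalancer_name> -c provisioning_status -f value
```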
3. Create a Listener
openstack loadbalancer listener create --name <listener_name> --protocol HTTP --protocol-port 80 <loadbalancer_name>
4. Create a Pool
openstack loadbalancer pool create --name <pool_name> --lb-algorithm <algorithm> --listener <listener_name> --protocol HTTP
In our example, <algorithm> is ROUND_ROBIN; other options include LEAST_CONNECTIONS and SOURCE_IP.
5. Add Members to the pool
openstack loadbalancer member create --subnet-id <subnet_id> --address <ip_vm_1> --protocol-port 80 <pool_name>
openstack loadbalancer member create --subnet-id <subnet_id> --address <ip_vm_2> --protocol-port 80 <pool_name>
You can find the IPs of the VMs in the Compute → Instances section of the lateral menu of ADA Cloud Dashboard. The IPs can also be gathered using the CLI.
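From the CLI, the internal IPs can be read from the Networks column of the server list:

```shell
# Shows each instance with the networks/IPs attached to it.
openstack server list -c Name -c Networks
```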
6. Associate an available Floating IP to the load balancer.
openstack floating ip set --port <port_id> <floating_ip>
You can find the <port_id> if you navigate to the description of your load balancer on the ADA Cloud Dashboard. Select the Network → Load Balancers section on the left-hand menu of the dashboard. Then, click on your load balancer and go to the Overview tab. You can also use the CLI to identify the value of the <port_id>.
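From the CLI, the load balancer's VIP port ID can be printed directly; this is the value to use as <port_id>:

```shell
# Prints only the ID of the load balancer's VIP port.
openstack loadbalancer show <loadbalancer_name> -c vip_port_id -f value
```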
Finally, use this floating-IP to reach the nginx servers (see figure below). The traffic will be managed by the load balancer following the algorithm chosen when creating the pool (see Step 4).
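To verify the round-robin behavior, you can issue a few consecutive requests to the floating-IP; with the example nginx pages set up above, the responses should alternate between the two hostnames:

```shell
# Replace <floating_ip> with the floating-IP associated in Step 6.
for i in 1 2 3 4; do
  curl -s http://<floating_ip>/
done
```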