
The Octavia service enables the deployment of load balancing solutions in OpenStack projects. Refer to Octavia's official documentation for a full description of its features.

Due to a current malfunction of OpenStack Horizon, it is not possible to create load balancers through the ADA Cloud Dashboard. Users can still deploy load balancing solutions using alternative procedures based on the OpenStack Command Line Interface (CLI) or infrastructure-as-code tools such as Ansible, Terraform, or OpenTofu.

In this guide, we illustrate the procedure based on the OpenStack CLI and the OpenStack Octavia plugin with a simple use case: the deployment of a basic HTTP load balancer with an associated floating IP. For further details and additional use cases, including step-by-step examples, refer to the Octavia Basic Load Balancing Cookbook, which provides comprehensive instructions on applying the same procedure to various scenarios.

Load balancing

A load balancer functions as a traffic intermediary, directing network or application traffic to multiple server endpoints. It helps manage capacity during high traffic periods and enhances the reliability of applications. The main components of a load balancer are the following: 

  • Listener: The listener is a component that defines how incoming traffic is received. It listens for connection requests on a specific port and protocol (e.g., HTTP, HTTPS), and directs this traffic to the appropriate backend pool.
  • Pool: The pool is a collection of backend servers (also known as members) that receive and process the incoming traffic distributed by the load balancer. The pool determines the load balancing algorithm and health check policies to manage traffic distribution effectively.
  • Members: Members are the individual servers within a pool that handle the actual processing of the traffic. Each member represents a single endpoint (server) that performs the required tasks or services requested by the client.

A load balancer determines which server to send a request to based on a configured algorithm (e.g., Round Robin, Least Connections, Random). The choice among load balancing algorithms depends on the requirements of the specific use case.

How to deploy a basic HTTP load balancer with an associated floating IP

In this section, we will walk you through the steps to create a setup where two instances running nginx servers are connected to an HTTP load balancer. The load balancer will use the round-robin algorithm to evenly distribute incoming HTTP traffic across the two servers. 

Environment requirements

You will need the OpenStack Client and the Octavia plugin.  

pip install python-octaviaclient

Follow the instructions provided in the AdaCloud User Guide on how to set up your cloud environment and the OpenStack CLI.
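As a quick check (a minimal sketch, assuming you have already downloaded the OpenStack RC file for your project from the dashboard; the file name below is only an example), source your credentials and verify that the Octavia plugin responds:

source <project_name>-openrc.sh

openstack loadbalancer list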

Prerequisites 

Before creating a load balancer, ensure that the following resources are available in your tenant:

  • 1 network.
  • 1 router.
  • Desired security groups for the VMs. 
  • 2 instances (nginx servers).
  • At least 3 floating IPs: two associated with the VMs and an additional one available for the load balancer.

If you need to create these resources, you can follow our user guides.
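You can also verify from the CLI that these resources are already available, for example:

openstack network list

openstack router list

openstack server list

openstack floating ip list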

Procedure

1. Create a Load Balancer

openstack loadbalancer create --name <loadbalancer_name> --vip-subnet-id <subnet_id>

The <subnet_id> can be found through the ADA Cloud Dashboard. On the main menu, select the Network → Networks tab. Then, click on your network and select the Subnets tab. Finally, click on the desired subnet. This information can also be gathered using the CLI, as shown below.
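For example, the following command lists the subnets of your project together with their IDs:

openstack subnet list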

Note: Wait until the creation is completed. It may take a while, and the next steps will return an error if the load balancer is not yet available.
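You can monitor the creation from the CLI and proceed once the provisioning status reports ACTIVE, for example:

openstack loadbalancer show <loadbalancer_name> -c provisioning_status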

2. Create a Listener

openstack loadbalancer listener create --name <listener_name> --protocol HTTP --protocol-port 80 <loadbalancer_name>

3. Create a Pool

openstack loadbalancer pool create --name <pool_name> --lb-algorithm <algorithm_e.g.ROUND_ROBIN> --listener <listener_name> --protocol HTTP
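Optionally, you can attach a health monitor to the pool so that unresponsive members are automatically excluded from the traffic distribution (the name and the timing values below are only an example):

openstack loadbalancer healthmonitor create --name <healthmonitor_name> --delay 5 --timeout 5 --max-retries 3 --type HTTP <pool_name>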

4. Add Members

openstack loadbalancer member create --subnet-id <subnet_id> --address <ip_vm_1> --protocol-port 80 <pool_name>

openstack loadbalancer member create --subnet-id <subnet_id> --address <ip_vm_2> --protocol-port 80 <pool_name>

You can find the IPs of the VMs in the Compute → Instances section of the left-hand menu of the ADA Cloud Dashboard. The IPs can also be gathered using the CLI, as shown below.
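For example, the following command lists your instances together with their addresses:

openstack server list -c Name -c Networks

You can also verify that both members have been added to the pool:

openstack loadbalancer member list <pool_name>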

5. Associate an available Floating IP to the load balancer

openstack floating ip set --port <port_id> <floating_ip> 

You can find the <port_id> by navigating to the description of your load balancer on the ADA Cloud Dashboard. Select the Network → Load Balancers section on the left-hand menu of the dashboard. Then, click on your load balancer and go to the Overview tab. You can also use the CLI to identify the value of <port_id>, as shown below.
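For example, the VIP port ID is reported in the vip_port_id field of the load balancer details:

openstack loadbalancer show <loadbalancer_name> -c vip_port_id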

Finally, use this floating IP to reach the nginx servers (see the figure below). The traffic will be managed by the load balancer following the algorithm <algorithm_e.g.ROUND_ROBIN> (see Step 3).
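A simple way to test the setup is to send a few HTTP requests to the floating IP: with the ROUND_ROBIN algorithm, consecutive requests should be answered alternately by the two nginx servers (assuming each server returns a distinguishable page):

curl http://<floating_ip>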

Servers reached using the load balancer floating IP

