...

In this guide, we provide an alternative procedure based on the OpenStack CLI and the OpenStack Octavia plugin, following the Basic Load Balancing Cookbook.

Load balancing

A load balancer functions as a traffic intermediary, directing network or application traffic to multiple server endpoints. It helps manage capacity during high-traffic periods and enhances the reliability of applications. The main components of a load balancer are the following (a short CLI sketch of how they fit together follows the list):

  • Listener: The listener is a component that defines how incoming traffic is received. It listens for connection requests on a specific port and protocol (e.g., HTTP, HTTPS), and directs this traffic to the appropriate backend pool.
  • Pool: The pool is a collection of backend servers (also known as members) that receive and process the incoming traffic distributed by the load balancer. The pool determines the load balancing algorithm and health check policies to manage traffic distribution effectively.
  • Members: Members are the individual servers within a pool that handle the actual processing of the traffic. Each member represents a single endpoint (server) that performs the required tasks or services requested by the client.
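For reference, the sketch below shows how these components are typically created with the OpenStack CLI and the Octavia plugin. The resource names (lb1, listener1, pool1), the subnet name private-subnet, and the member addresses are illustrative placeholders, not values from this guide:

    # Load balancer with a virtual IP (VIP) on a chosen subnet
    openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet

    # Listener: receives incoming HTTP traffic on port 80
    openstack loadbalancer listener create --name listener1 \
      --protocol HTTP --protocol-port 80 lb1

    # Pool: set of backend servers attached to the listener, using ROUND_ROBIN
    openstack loadbalancer pool create --name pool1 \
      --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP

    # Members: the individual backend servers in the pool (example addresses)
    openstack loadbalancer member create --subnet-id private-subnet \
      --address 192.0.2.10 --protocol-port 80 pool1
    openstack loadbalancer member create --subnet-id private-subnet \
      --address 192.0.2.11 --protocol-port 80 pool1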

...

In this section, we will walk you through the steps to create a setup in which two instances running nginx servers are connected to an HTTP load balancer. The load balancer will use the round-robin algorithm to distribute incoming HTTP traffic evenly across the two servers.
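As a side note before the step-by-step procedure: to make the round-robin behaviour visible when testing, each backend can serve a page that identifies it. The commands below are a minimal sketch assuming Ubuntu-based instances; adapt the package manager and paths to your image:

    # Run on each of the two backend instances (Ubuntu assumed)
    sudo apt-get update && sudo apt-get install -y nginx
    # Serve a page that identifies this backend
    echo "Served by $(hostname)" | sudo tee /var/www/html/index.html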

Environment requirements

...

You can use this IP to reach the nginx servers (see the figure below). The traffic will be managed by the load balancer according to the algorithm <algorithm_e.g.ROUND_ROBIN> (see Step 3).

Figure: NGINX servers reached using the load balancer floating IP
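As a quick check, you can send several requests to the floating IP and confirm that responses alternate between the two backends. The IP address below is a placeholder; replace it with the floating IP assigned to your load balancer:

    # Replace 203.0.113.50 with the load balancer's floating IP
    for i in $(seq 1 6); do curl -s http://203.0.113.50/; done
    # With ROUND_ROBIN and the identifying index pages above, the output
    # should alternate between the two nginx servers.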