Load Balancer Policies

Describes how you can apply load balancer policies to control traffic distribution to your backend servers.

After you create a load balancer, you can apply policies to control traffic distribution to your backend servers. The Load Balancer service supports three primary policy types:

  • Round Robin

  • Least Connections

  • IP Hash

When processing load or capacity varies among backend servers, you can refine each of these policy types with backend server weighting. Weighting affects the proportion of requests directed to each server. For example, a server weighted '3' receives three times as many connections as a server weighted '1'. You assign weights based on criteria of your choosing, such as each server's traffic-handling capacity. Weight values must be from 1 to 100.
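The following sketch is illustrative only (the backend names and weights are hypothetical, and it is not the Load Balancer service implementation); it shows how weights translate into the approximate share of connections each server receives:

    # Illustrative sketch: how backend weights map to each server's share of connections.
    backends = {"server-a": 3, "server-b": 1, "server-c": 1}  # hypothetical weights (1-100)

    total_weight = sum(backends.values())
    for name, weight in backends.items():
        share = weight / total_weight
        print(f"{name}: weight {weight} -> ~{share:.0%} of connections")
    # server-a: weight 3 -> ~60% of connections
    # server-b: weight 1 -> ~20% of connections
    # server-c: weight 1 -> ~20% of connections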

Load balancer policy decisions apply differently to TCP load balancers, cookie-based session persistent HTTP requests (sticky requests), and non-sticky HTTP requests.

  • A TCP load balancer considers policy and weight criteria to direct an initial incoming request to a backend server. All subsequent packets on this connection go to the same endpoint.

  • An HTTP load balancer configured to handle cookie-based session persistence forwards requests to the backend server specified by the cookie's session information.

  • For non-sticky HTTP requests, the load balancer applies policy and weight criteria to every incoming request and determines an appropriate backend server. Multiple requests from the same client could be directed to different servers.
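A minimal sketch of the sticky versus non-sticky decision described above; the cookie name, backend names, and selection function are hypothetical stand-ins, not the service's API:

    # Illustrative sketch of sticky vs. non-sticky HTTP routing; names are hypothetical.
    import random

    def route_http_request(cookies, backends, select_backend):
        """Route one HTTP request given its cookies and a policy function."""
        sticky = cookies.get("lb-session")          # hypothetical persistence cookie
        if sticky in backends:
            # Sticky request: forward to the backend recorded in the cookie.
            return sticky
        # Non-sticky request: apply policy and weight criteria to this request;
        # the client's next request may be sent to a different server.
        return select_backend(backends)

    backends = ["web-1", "web-2", "web-3"]          # hypothetical backend names
    pick_any = lambda pool: random.choice(pool)     # stand-in for a real policy

    print(route_http_request({}, backends, pick_any))                        # policy decides
    print(route_http_request({"lb-session": "web-2"}, backends, pick_any))   # always web-2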

Note

If you want to create a load balancer with a reserved IP, add this policy:

Allow group group_name to manage floating-ips in tenancy

See Getting Started with Policies for general information on policies.

Round Robin

Round Robin is the default load balancer policy. This policy distributes incoming traffic sequentially to each server in a backend set list. After each server has received a connection, the load balancer repeats the list in the same order.

Round Robin is a simple load balancing algorithm. It works best when all the backend servers have similar capacity and the processing load required by each request does not vary significantly.
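A minimal sketch of the rotation, assuming a hypothetical backend set of three servers (this illustrates the order of selection, not the service's implementation):

    # Illustrative round-robin rotation: walk the backend set in order, then repeat.
    from itertools import cycle

    backend_set = ["web-1", "web-2", "web-3"]   # hypothetical backend names
    rotation = cycle(backend_set)               # repeats the list in the same order

    for request_number in range(1, 6):
        print(f"request {request_number} -> {next(rotation)}")
    # request 1 -> web-1
    # request 2 -> web-2
    # request 3 -> web-3
    # request 4 -> web-1
    # request 5 -> web-2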

Least Connections

The Least Connections policy routes incoming non-sticky request traffic to the backend server with the fewest active connections. This policy helps you maintain an equal distribution of active connections across backend servers. As with the round robin policy, you can assign a weight to each backend server and further control traffic distribution.

Note

In TCP use cases, a connection can be active but have no current traffic. Such connections do not serve as a good load metric.
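A minimal sketch of the selection step, assuming the load balancer tracks a count of active connections per backend (the names and counts are hypothetical, and the sketch is not the service implementation):

    # Illustrative least-connections selection: send the new non-sticky request
    # to the backend with the fewest active connections.
    active_connections = {"web-1": 12, "web-2": 7, "web-3": 9}   # hypothetical counts

    def least_connections(conn_counts):
        return min(conn_counts, key=conn_counts.get)

    chosen = least_connections(active_connections)
    active_connections[chosen] += 1   # the new request becomes an active connection
    print(chosen)                     # -> web-2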

IP Hash

The IP Hash policy uses an incoming request's source IP address as a hashing key to route non-sticky traffic to the same backend server. As long as that backend server is available, the load balancer directs all requests from the same client to it. This policy honors server weight settings when establishing the initial connection.
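A minimal sketch of the idea, using a generic hash of the source IP to choose a backend; the hashing scheme and names are illustrative assumptions, not the service's actual algorithm, and the sketch ignores weights and availability for brevity:

    # Illustrative IP-hash selection: the same source IP always maps to the same backend.
    import hashlib

    def ip_hash_backend(source_ip, backends):
        digest = hashlib.sha256(source_ip.encode()).hexdigest()
        return backends[int(digest, 16) % len(backends)]

    backends = ["web-1", "web-2", "web-3"]            # hypothetical backend names
    print(ip_hash_backend("203.0.113.10", backends))  # same client IP ...
    print(ip_hash_backend("203.0.113.10", backends))  # ... same backend every time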

You cannot add a backend server marked as Backup to a backend set that uses the IP Hash policy.

Important

Multiple clients that connect to the load balancer through a proxy or NAT router appear to have the same IP address. If you apply the IP Hash policy to your backend set, the load balancer routes traffic based on the incoming IP address and sends these proxied client requests to the same backend server. If the proxied client pool is large, the requests could flood a backend server.