When you create a service, you can optionally create a load balancer to distribute service traffic among the nodes assigned to that service. The key fields in a load balancer's configuration are the type of service being created and the ports on which the load balancer listens.
Creating Load Balancers to Distribute HTTP Traffic
Consider the following configuration file, nginx_lb.yaml, which defines a deployment (kind: Deployment) for the nginx app, followed by a service of type LoadBalancer (type: LoadBalancer) that balances HTTP traffic on port 80 among the pods running the nginx app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
The first part of the configuration file defines an Nginx deployment, requesting that it be hosted on 3 pods running the nginx:1.7.9 image, and accept traffic to the containers on port 80.
The second part of the configuration file defines the Nginx service, which uses type LoadBalancer to balance Nginx traffic on port 80 amongst the available pods.
To create the deployment and service defined in
nginx_lb.yaml while connected to your Kubernetes cluster, enter the command:
$ kubectl apply -f nginx_lb.yaml
This command outputs the following upon successful creation of the deployment and the load balancer:
deployment "my-nginx" created
service "my-nginx-svc" created
The load balancer may take a few minutes to go from a pending state to being fully operational. You can view the current state of your cluster by entering
kubectl get all, where your output looks similar to the following:
$ kubectl get all
NAME                          READY     STATUS    RESTARTS   AGE
po/my-nginx-431080787-0m4m8   1/1       Running   0          3m
po/my-nginx-431080787-hqqcr   1/1       Running   0          3m
po/my-nginx-431080787-n8125   1/1       Running   0          3m

NAME               CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes     203.0.113.1   <none>        443/TCP        3d
svc/my-nginx-svc   203.0.113.7   192.0.2.22    80:30269/TCP   3m

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/my-nginx   3         3         3            3           3m

NAME                    DESIRED   CURRENT   READY   AGE
rs/my-nginx-431080787   3         3         3       3m
The output shows that the my-nginx deployment is running on three pods (the po/my-nginx entries), that the load balancer is running (svc/my-nginx-svc), and that the load balancer has an external IP address (192.0.2.22) that clients can use to connect to the app deployed on the pods.
Creating Load Balancers with SSL Support to Distribute HTTPS Traffic
You can create a load balancer with SSL termination, allowing HTTPS traffic to an app to be distributed among the nodes in a cluster. This example walks through the configuration and creation of a load balancer with SSL support.
Consider the following configuration file, nginx-demo-svc-ssl.yaml, which defines an Nginx deployment and exposes it via a load balancer that serves HTTP on port 80 and HTTPS on port 443. This sample creates an Oracle Cloud Infrastructure load balancer by defining a service with a type of LoadBalancer (type: LoadBalancer):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 80
The service's annotations are of particular importance. The ports on which to support HTTPS traffic are defined by the value of oci-load-balancer-ssl-ports. You can declare multiple SSL ports by using a comma-separated list as the annotation's value. For example, you could set the annotation's value to "443,3000" to support SSL on ports 443 and 3000.
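A service declaring SSL termination on two listener ports might carry annotations like the following sketch (port 3000 is illustrative only, not part of the walkthrough above):

```yaml
metadata:
  name: nginx-service
  annotations:
    # Comma-separated list of listener ports that terminate SSL
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443,3000"
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret
```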
The required TLS secret, ssl-certificate-secret, needs to be created in Kubernetes. This example creates and uses a self-signed certificate. However, in a production environment, the most common scenario is to use a public certificate that's been signed by a certificate authority.
The following command creates a self-signed certificate, tls.crt, with its corresponding key, tls.key:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
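Before storing the certificate in Kubernetes, you can optionally inspect it to confirm its subject and validity window. This check is not part of the original walkthrough; it simply reruns the generation command and then reads the certificate back with openssl:

```shell
# Generate the self-signed certificate and key (same command as above)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

# Print the certificate's subject and validity dates
openssl x509 -in tls.crt -noout -subject -dates
```

The subject line in the output should show the CN and O values passed via -subj.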
Now that you have created the certificate, you need to store both it and its key as a secret in Kubernetes. The name of the secret must match the name in the oci-load-balancer-tls-secret annotation of the load balancer's definition. Use the following command to create a TLS secret in Kubernetes, whose key and certificate values are set by tls.key and tls.crt:
$ kubectl create secret tls ssl-certificate-secret --key tls.key --cert tls.crt
You must create the Kubernetes secret before you can create the service, since the service references the secret in its definition. Create the service using the following command:
$ kubectl create -f nginx-demo-svc-ssl.yaml
Watch the service and wait for a public IP address (EXTERNAL-IP) to be assigned to the Nginx service (nginx-service). This is the load balancer IP to use to connect to the service.
$ kubectl get svc --watch
NAME            CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
nginx-service   192.0.2.1    198.51.100.1   80:30274/TCP   5m
The load balancer is now running, which means the service can be accessed using either HTTP or HTTPS, as demonstrated by the following commands:
$ curl http://198.51.100.1
$ curl --insecure https://198.51.100.1
The --insecure flag is required when accessing the service over HTTPS because this example uses a self-signed certificate. Do not use this flag in a production environment, where the public certificate is signed by a certificate authority.
Note: When a cluster is deleted, a load balancer that was dynamically created for a service is not removed automatically. Before deleting a cluster, delete the service, which in turn causes the cloud provider to remove the load balancer. The syntax for this command is:
$ kubectl delete svc SERVICE_NAME
For example, to delete the service from the previous example, enter:
$ kubectl delete svc nginx-service
Specifying Alternative Load Balancer Shapes
The shape of an Oracle Cloud Infrastructure load balancer specifies its maximum total bandwidth (that is, ingress plus egress). By default, load balancers are created with a shape of 100Mbps. Other shapes are available, including 400Mbps and 8000Mbps. To specify an alternative shape for a load balancer, add an annotation in the metadata section of the manifest file.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
Note: Sufficient load balancer quota must be available in the region for the shape you specify. Enter the following kubectl command to confirm that load balancer creation did not fail due to lack of quota:
$ kubectl describe service <service-name>