Example: Setting Up an Ingress Controller on a Cluster

You can set up different open source ingress controllers on clusters you have created with Container Engine for Kubernetes.

This topic explains how to set up an example ingress controller, along with corresponding access control, on an existing cluster. It then describes how to use the ingress controller with an example hello-world backend, and how to verify that the ingress controller is working as expected.

Example Components

The example includes an ingress controller and a hello-world backend.

Ingress Controller Components

The ingress controller comprises:

  • An ingress controller deployment called nginx-ingress-controller. The deployment deploys an image that contains the binary for the ingress controller and Nginx. The binary manipulates and reloads the /etc/nginx/nginx.conf configuration file when an ingress is created in Kubernetes. Nginx upstreams point to services that match specified selectors.
  • An ingress controller service called ingress-nginx. The service exposes the ingress controller deployment as a LoadBalancer type service. Because Container Engine for Kubernetes uses an Oracle Cloud Infrastructure integration/cloud-provider, a load balancer will be dynamically created with the correct nodes configured as a backend set.

Backend Components

The hello-world backend comprises:

  • A backend deployment called docker-hello-world. The deployment handles default routes for health checks and 404 responses. This is done by using a stock hello-world image that serves the minimum required routes for a default backend.
  • A backend service called docker-hello-world-svc. The service exposes the backend deployment for consumption by the ingress controller deployment.

Setting Up the Example Ingress Controller

In this section, you create the access rules for ingress. You then create the example ingress controller components, and confirm they are running.

Creating the Access Rules for the Ingress Controller

  1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up Cluster Access.
  2. If your Oracle Cloud Infrastructure user is a tenancy administrator, skip the next step and go straight to Creating the Service Account and the Ingress Controller.
  3. If your Oracle Cloud Infrastructure user is not a tenancy administrator, in a terminal window, grant the user the Kubernetes RBAC cluster-admin clusterrole on the cluster by entering:

    $ kubectl create clusterrolebinding <my-cluster-admin-binding> --clusterrole=cluster-admin --user=<user-OCID>


    • <my-cluster-admin-binding> is a string of your choice to be used as the name for the binding between the user and the Kubernetes RBAC cluster-admin clusterrole. For example, jdoe_clst_adm
    • <user-OCID> is the user's OCID (obtained from the Console). For example, ocid1.user.oc1..aaaaa...zutq (abbreviated for readability).

    For example:

    $ kubectl create clusterrolebinding jdoe_clst_adm --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaa...zutq

Creating the Service Account and the Ingress Controller

  1. Run the following command to create the nginx-ingress-controller ingress controller deployment, along with the Kubernetes RBAC roles and bindings:

    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
  2. Create and save the file cloud-generic.yaml containing the following code to define the ingress-nginx ingress controller service as a load balancer service:

    kind: Service
    apiVersion: v1
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: http
        - name: https
          port: 443
          targetPort: https
  3. Using the file you just saved, create the ingress-nginx ingress controller service by running the following command:

    $ kubectl apply -f cloud-generic.yaml
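The cloud-generic.yaml service accepts OCI-specific annotations that influence the dynamically created load balancer. For example, to request a particular load balancer shape rather than the default (a sketch; the annotation name comes from the OCI cloud provider documentation, and 100Mbps is one of the available shapes):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Ask the OCI cloud provider for a 100Mbps load balancer shape
    service.beta.kubernetes.io/oci-load-balancer-shape: "100Mbps"
```

Set annotations like this before the service is created; depending on the cloud provider version, changing them afterwards may not take effect until the service is recreated.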

Verifying the ingress-nginx Ingress Controller Service is Running as a Load Balancer Service

  1. View the list of running services:

    $ kubectl get svc -n ingress-nginx
    NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
    ingress-nginx   LoadBalancer   <cluster-ip>   <pending>     80:30756/TCP,443:30118/TCP   1h

    The EXTERNAL-IP for the ingress-nginx ingress controller service is shown as <pending> until the load balancer has been fully created in Oracle Cloud Infrastructure.

  2. Repeat the kubectl get svc command until an EXTERNAL-IP is shown for the ingress-nginx ingress controller service:

    $ kubectl get svc -n ingress-nginx
    NAME            TYPE           CLUSTER-IP     EXTERNAL-IP             PORT(S)                      AGE
    ingress-nginx   LoadBalancer   <cluster-ip>   <external-ip-address>   80:30756/TCP,443:30118/TCP   1h

Creating the TLS Secret

A TLS secret is used for SSL termination on the ingress controller. This example generates the secret using a self-signed certificate. While this is fine for testing, in production, use a certificate signed by a Certificate Authority.

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt

On Windows, you may need to replace "/CN=nginxsvc/O=nginxsvc" with "//CN=nginxsvc\O=nginxsvc" (for example, if you run the openssl command from a Git Bash shell).
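Before loading the certificate into the cluster, you can sanity-check what openssl produced. This sketch regenerates the same self-signed pair in the current directory and prints the certificate's subject and expiry date:

```shell
# Generate the self-signed key and certificate (same command as above)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

# Inspect the certificate: the subject should show CN=nginxsvc, and the
# -checkend test exits non-zero if the certificate has already expired
openssl x509 -in tls.crt -noout -subject -enddate
openssl x509 -in tls.crt -noout -checkend 0
```

If the subject or validity period is not what you expect, regenerate the pair before running the kubectl create secret command.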

Setting Up the Example Backend

In this section, you define a hello-world backend service and deployment.

Creating the docker-hello-world Service Definition

  1. Create the file hello-world-ingress.yaml containing the following code. This code uses a publicly available hello-world image from Docker Hub. You can substitute another image of your choice that can be run in a similar manner.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: docker-hello-world
      labels:
        app: docker-hello-world
    spec:
      selector:
        matchLabels:
          app: docker-hello-world
      replicas: 3
      template:
        metadata:
          labels:
            app: docker-hello-world
        spec:
          containers:
          - name: docker-hello-world
            image: scottsbaldwin/docker-hello-world:latest
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: docker-hello-world-svc
    spec:
      selector:
        app: docker-hello-world
      ports:
        - port: 8088
          targetPort: 80
      type: ClusterIP

    Note the docker-hello-world service's type is ClusterIP, rather than LoadBalancer, because this service is proxied by the ingress-nginx ingress controller service. The docker-hello-world service does not need direct public access. Instead, public traffic is routed from the load balancer to the ingress controller, and from the ingress controller to the upstream service.

  2. Create the new hello-world deployment and service on nodes in the cluster by running the following command:

    $ kubectl create -f hello-world-ingress.yaml

Using the Example Ingress Controller to Access the Example Backend

In this section you create an ingress to access the backend using the ingress controller.

Creating the Ingress Resource

  1. Create the file ingress.yaml and populate it with this code:

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: hello-world-ing
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      tls:
      - secretName: tls-secret
      rules:
      - http:
          paths:
          - backend:
              serviceName: docker-hello-world-svc
              servicePort: 8088
  2. Create the resource:

    $ kubectl create -f ingress.yaml
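The ingress above matches all hosts. If you later want to route by hostname, the same v1beta1 schema accepts a host field in each rule (illustrative fragment; hello.example.com is a hypothetical DNS name you would point at the load balancer's external IP address):

```yaml
spec:
  tls:
  - secretName: tls-secret
  rules:
  - host: hello.example.com   # hypothetical hostname; substitute your own
    http:
      paths:
      - backend:
          serviceName: docker-hello-world-svc
          servicePort: 8088
```

Requests whose Host header does not match any rule fall through to the ingress controller's default backend.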

Verifying that the Example Components are Working as Expected

In this section, you confirm that all of the example components have been successfully created and are operating as expected. The docker-hello-world-svc service should be running as a ClusterIP service, and the ingress-nginx service should be running as a LoadBalancer service. Requests sent to the ingress controller should be routed to nodes in the cluster.

Obtaining the External IP Address of the Load Balancer

To confirm the ingress-nginx service is running as a LoadBalancer service, obtain its external IP address:

$ kubectl get svc --all-namespaces
NAMESPACE       NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP             PORT(S)                      AGE
default         docker-hello-world-svc   ClusterIP      <cluster-ip>   <none>                  8088/TCP                     16s
default         kubernetes               ClusterIP      <cluster-ip>   <none>                  443/TCP                      1h
ingress-nginx   ingress-nginx            LoadBalancer   <cluster-ip>   <external-ip-address>   80:30756/TCP,443:30118/TCP   5m
kube-system     kube-dns                 ClusterIP      <cluster-ip>   <none>                  53/UDP,53/TCP                1h
kube-system     kubernetes-dashboard     ClusterIP      <cluster-ip>   <none>                  443/TCP                      1h
kube-system     tiller-deploy            ClusterIP      <cluster-ip>   <none>                  44134/TCP                    1h

Sending cURL Requests to the Load Balancer

  1. Use the external IP address of the ingress-nginx service to curl an http request:

    $ curl -I http://<external-ip-address>
    HTTP/1.1 301 Moved Permanently
    Via: 1.1 (McAfee Web Gateway)
    Date: Thu, 07 Sep 2017 15:20:16 GMT
    Server: nginx/1.13.2
    Content-Type: text/html
    Content-Length: 185
    Location: https://<external-ip-address>/
    Proxy-Connection: Keep-Alive
    Strict-Transport-Security: max-age=15724800; includeSubDomains;

    The output shows a 301 redirect with a Location header, indicating that http traffic is being redirected to https.

  2. Either cURL against the https url, or add the -L option to automatically follow the Location header. The -k option instructs cURL not to verify the SSL certificate:

    $ curl -ikL http://<external-ip-address>
    HTTP/1.1 301 Moved Permanently
    Via: 1.1 (McAfee Web Gateway)
    Date: Thu, 07 Sep 2017 15:22:29 GMT
    Server: nginx/1.13.2
    Content-Type: text/html
    Content-Length: 185
    Proxy-Connection: Keep-Alive
    Strict-Transport-Security: max-age=15724800; includeSubDomains;
    HTTP/1.0 200 Connection established
    HTTP/1.1 200 OK
    Server: nginx/1.13.2
    Date: Thu, 07 Sep 2017 15:22:30 GMT
    Content-Type: text/html
    Content-Length: 71
    Connection: keep-alive
    Last-Modified: Thu, 07 Sep 2017 15:17:24 GMT
    ETag: "59b16304-47"
    Accept-Ranges: bytes
    Strict-Transport-Security: max-age=15724800; includeSubDomains;
    <h1>Hello webhook world from: docker-hello-world-1732906117-0ztkm</h1>

    The last line of the output shows the HTML that is returned from the pod whose hostname is docker-hello-world-1732906117-0ztkm.

  3. Issue the cURL request several times to see the hostname in the HTML output change, demonstrating that load balancing is occurring:

    $ curl -k https://<external-ip-address>
    <h1>Hello webhook world from: docker-hello-world-1732906117-6115l</h1>
    $ curl -k https://<external-ip-address>
    <h1>Hello webhook world from: docker-hello-world-1732906117-7r89v</h1>
    $ curl -k https://<external-ip-address>
    <h1>Hello webhook world from: docker-hello-world-1732906117-0ztkm</h1>

Inspecting nginx.conf

The nginx-ingress-controller ingress controller deployment manipulates the nginx.conf file in the pod within which it is running.

  1. Find the name of the pod running the nginx-ingress-controller ingress controller deployment and use it with a kubectl exec command to show the contents of nginx.conf.

    $ kubectl get po -n ingress-nginx
    NAME                                       READY     STATUS    RESTARTS   AGE
    nginx-ingress-controller-110676328-h86xg   1/1       Running   0          1h
    $ kubectl exec -n ingress-nginx -it nginx-ingress-controller-110676328-h86xg -- cat /etc/nginx/nginx.conf
  2. Look for proxy_pass in the output. There will be one for the default backend and another that looks similar to:

    proxy_pass http://upstream_balancer;

    This shows that Nginx is proxying requests to an upstream called upstream_balancer.

  3. Locate the upstream definition in the output. It will look similar to:

    upstream upstream_balancer {
                    server 0.0.0.1; # placeholder
                    balancer_by_lua_block {

    The upstream is proxying via Lua: instead of a static server list, the balancer_by_lua_block directive selects the upstream endpoint for each request at runtime.
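For context, the full upstream block generated by this version of the ingress controller looks roughly like the following (a sketch based on nginx-ingress 0.30.0 defaults; exact content varies by version):

```
upstream upstream_balancer {
        server 0.0.0.1; # placeholder address; never contacted directly
        balancer_by_lua_block {
                balancer.balance()
        }
        keepalive 32;
}
```

The balancer.balance() call invokes the controller's Lua balancer module, which picks a pod endpoint from the service's current endpoints list on each request. This is why nginx.conf does not need to be rewritten every time pods are added or removed.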