Oracle Cloud Infrastructure Documentation

Creating a Kubernetes Cluster

You can use Container Engine for Kubernetes to create new Kubernetes clusters. To create a cluster, you must either belong to the tenancy's Administrators group, or belong to a group to which a policy grants the CLUSTER_MANAGE permission.
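
For example, a policy statement like the following grants a group the permissions needed to create and manage clusters in a compartment (a minimal sketch; the group name acme-devops and compartment name acme-dev are placeholders):

  Allow group acme-devops to manage clusters in compartment acme-dev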

You first specify basic details for the new cluster (the cluster name and the Kubernetes version to install on master nodes). You can then create the cluster in one of two ways:

  • Using default settings to create a 'quick cluster' with new network resources as required. This is the fastest way to create a new cluster: if you accept all the default values, you can create one in just a few clicks. New network resources for the cluster are created automatically, along with a node pool and worker nodes. Because worker nodes in a 'quick cluster' are created in private subnets, a NAT gateway is also created (in addition to an internet gateway). To create a 'quick cluster', you must belong to a group to which a policy grants the necessary permissions to create the new network resources (see Create One or More Policies for Groups (Optional)).
  • Using custom settings to create a 'custom cluster'. This approach gives you the most control over the new cluster: you explicitly define the new cluster's properties and specify which existing network resources to use, including the existing public or private subnets in which to create worker nodes. Although you will usually define node pools immediately when defining a new 'custom cluster', you don't have to. You can create a 'custom cluster' with no node pools, and add node pools later.

Note

Container Engine for Kubernetes does not yet support regional subnets, so you cannot currently select a regional subnet when specifying a load balancer subnet or a worker node subnet.

Regardless of how you create a cluster, Container Engine for Kubernetes gives names to worker nodes in the following format:

oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>

where:

  • oke is the standard prefix for all worker nodes created by Container Engine for Kubernetes
  • c<part-of-cluster-OCID> is a portion of the cluster's OCID, prefixed with the letter c
  • n<part-of-node-pool-OCID> is a portion of the node pool's OCID, prefixed with the letter n
  • s<part-of-subnet-OCID> is a portion of the subnet's OCID, prefixed with the letter s
  • <slot> is an ordinal number of the node in the subnet (for example, 0, 1)

For example, if you specified two worker nodes per subnet for a node pool, the two nodes in a given subnet might be named:

  • oke-cywiqripuyg-nsgagklgnst-st2qczvnmba-0
  • oke-cywiqripuyg-nsgagklgnst-st2qczvnmba-1

To ensure high availability, Container Engine for Kubernetes:

  • creates the Kubernetes Control Plane on multiple Oracle-managed master nodes (distributing the master nodes across different availability domains in a region, where supported)
  • creates worker nodes in each of the fault domains in an availability domain (distributing the worker nodes as evenly as possible across the fault domains, subject to any other infrastructure restrictions); the command below shows one way to verify the distribution
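
To see how the worker nodes in a running cluster have been distributed, you can list them along with their zone labels. For example (a sketch, assuming kubectl is already configured for the cluster, and assuming nodes carry the standard failure-domain zone label, which this page does not itself document):

  kubectl get nodes -L failure-domain.beta.kubernetes.io/zone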

Using the Console to create a 'Quick Cluster' with Default Settings

To create a 'quick cluster' with default settings and new network resources using Container Engine for Kubernetes:

  1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click Container Clusters.
  2. Choose a Compartment you have permission to work in, and in which you want to create both the new cluster and the associated network resources.

  3. On the Cluster List page, click Create Cluster.
  4. Either just accept the default configuration details for the new cluster, or specify alternatives as follows:

    • Name: The name of the new cluster. Either accept the default name or enter a name of your choice. Avoid entering confidential information.
    • Kubernetes Version: The version of Kubernetes to run on the master nodes and worker nodes of the cluster. Either accept the default version or select a version of your choice. Amongst other things, the Kubernetes version you select determines the default set of admission controllers that are turned on in the created cluster (the set follows the recommendation given in the Kubernetes documentation for that version).
  5. Select Quick Create to create a new cluster with default settings, along with new network resources for the new cluster.

    The Create Virtual Cloud Network panel shows the network resources that will be created for you by default.

    The Create Node Pool panel shows the fixed properties of the first node pool in the cluster that will be created for you:

    • the name of the node pool (always pool1)
    • the compartment in which the node pool will be created (always the same as the one in which the new network resources will reside)
    • the version of Kubernetes that will run on each worker node in the node pool (always the same as the version specified for the master nodes)
    • the image to use on each node in the node pool

    The Create Node Pool panel also contains some node pool properties that you can change, but which have been given sensible defaults.

  6. Either just accept all the default configuration details and skip ahead to the next step to create the cluster immediately, or specify alternatives as follows:

    1. Either accept the default configuration details for the node pool, or specify alternatives in the Create Node Pool panel as follows:
      • Shape: The shape to use for each node in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node. The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes.
      • Quantity per Subnet: The number of worker nodes to create for the node pool in each private subnet.
      • Public SSH Key: (Optional) The public key portion of the key pair you want to use for SSH access to each node in the node pool. The public key is installed on all worker nodes in the cluster. If you don't specify a public SSH key, Container Engine for Kubernetes provides one; however, because you won't have the corresponding private key, you will not have SSH access to the worker nodes. Also note that because worker nodes in a 'quick cluster' are in private subnets, you cannot use SSH to access them directly (see Connecting to Worker Nodes in Private Subnets Using SSH). A sample key-generation command follows these steps.
      • Kubernetes Labels: One or more labels (in addition to a default label) to add to worker nodes in the node pool to enable the targeting of workloads at specific node pools.
    2. Either accept the defaults for the remaining cluster details, or specify alternatives in the Additional Add Ons panel as follows:

      • Kubernetes Dashboard Enabled: Select if you want to use the Kubernetes Dashboard to deploy and troubleshoot containerized applications, and to manage Kubernetes resources. See Starting the Kubernetes Dashboard.
      • Tiller (Helm) Enabled: Select if you want Tiller (the server portion of Helm) to run in the Kubernetes cluster. With Tiller running in the cluster, you can use Helm to manage Kubernetes resources.
    3. (Optional) Select View Detail Page After This Cluster Is Requested to return to the Cluster Details tab (rather than the Cluster List page) in the Console at the end of the cluster creation process.
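
    If you do want SSH access to the worker nodes, generate a key pair before creating the cluster and paste the contents of the .pub file into the Public SSH Key field. For example (a minimal sketch; the key file name oke_nodes is a placeholder):

      ssh-keygen -t rsa -b 2048 -f ~/.ssh/oke_nodes
      cat ~/.ssh/oke_nodes.pub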
  7. Click Create to create the new network resources and the new cluster.

    Container Engine for Kubernetes starts creating:

    • the network resources (such as the VCN, internet gateway, NAT gateway, route tables, security lists, private subnets), named oke-<resource-type>-quick-<cluster-name>-<creation-date>
    • the cluster, with the name you specified
    • the node pool, named pool1
    • worker nodes, with names in the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>

    Note that if the cluster is not created successfully (for example, because you have insufficient permissions or have exceeded the cluster limit for the tenancy), any network resources created during the cluster creation process are not deleted automatically. You must delete any such unused network resources manually.
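
    For example, you can find and remove a leftover VCN with the OCI CLI (a sketch, assuming the CLI is installed and configured; the OCIDs are placeholders, and a VCN's subnets, gateways, and other dependent resources must be deleted before the VCN itself):

      oci network vcn list --compartment-id <compartment-ocid>
      oci network vcn delete --vcn-id <vcn-ocid>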

  8. Click Close to return to the Console.
  9. If you selected View Detail Page After This Cluster Is Requested, you return to the Cluster Details tab in the console. If you didn't select View Detail Page After This Cluster Is Requested, you return to the Cluster List page.

    Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a status of Active.

    Container Engine for Kubernetes also creates a Kubernetes kubeconfig configuration file that you use to access the cluster using kubectl and the Kubernetes Dashboard.
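
    For example, you can download the kubeconfig file and verify access to the new cluster with the OCI CLI and kubectl (a sketch, assuming both tools are installed and configured; the cluster OCID is a placeholder):

      oci ce cluster create-kubeconfig --cluster-id <cluster-ocid> --file ~/.kube/config
      kubectl get nodes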

Using the Console to create a 'Custom Cluster' with Explicitly Defined Settings

To create a 'custom cluster' with explicitly defined settings and existing network resources using Container Engine for Kubernetes:

  1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click Container Clusters.
  2. Choose a Compartment you have permission to work in, and in which you want to create the new cluster.
  3. On the Cluster List page, click Create Cluster.
  4. Specify configuration details for the new cluster:

    • Name: The name of the new cluster. Either accept the default name or enter a name of your choice. Avoid entering confidential information.
    • Kubernetes Version: The version of Kubernetes to run on the master nodes of the cluster. Either accept the default version or select a version of your choice. Amongst other things, the Kubernetes version you select determines the default set of admission controllers that are turned on in the created cluster (the set follows the recommendation given in the Kubernetes documentation for that version).
  5. Select Custom Create to create a new cluster by explicitly defining the new cluster's properties and which existing network resources to use.

  6. Specify the existing network resources to use for the new cluster in the Network Selection panel:

    • Network Compartment: The compartment in which the existing network resources reside.
    • VCN: The existing virtual cloud network that has been configured for cluster creation and deployment. See VCN Configuration.
    • Kubernetes Service LB Subnets: Optionally, the existing subnets that have been configured to host load balancers. Load balancer subnets must be different from worker node subnets, and can be public or private. You don't have to specify any load balancer subnets. However, if you do specify load balancer subnets, the number of load balancer subnets to specify depends on the region in which you are creating the cluster:

      • If you are creating a cluster in a region with three availability domains, you can specify zero or two load balancer subnets. If you specify two load balancer subnets, the two load balancer subnets must be in different availability domains.
      • If you are creating a cluster in a region with a single availability domain, you can specify zero or one load balancer subnet.

      Note that Container Engine for Kubernetes does not yet support regional subnets, so you cannot currently select a regional subnet when specifying a load balancer subnet.

      See Subnet Configuration.

    • Kubernetes Service CIDR Block: The available group of network addresses that can be exposed as Kubernetes services (ClusterIPs), expressed as a single, contiguous IPv4 CIDR block. For example, 10.96.0.0/16. The CIDR block you specify must not overlap with the CIDR block for the VCN. See CIDR Blocks and Container Engine for Kubernetes.
    • Pods CIDR Block: The available group of network addresses that can be allocated to pods running in the cluster, expressed as a single, contiguous IPv4 CIDR block. For example, 10.244.0.0/16. The CIDR block you specify must not overlap with the CIDR blocks for subnets in the VCN, and can be outside the VCN CIDR block. See CIDR Blocks and Container Engine for Kubernetes. A quick way to check the blocks for overlap is shown after this list.
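
    For example, you can sanity-check the CIDR blocks before creating the cluster. A minimal sketch in Python using the standard ipaddress module (the CIDR values are placeholders matching the examples above):

      import ipaddress

      # Placeholder values: substitute your own VCN, services, and pods CIDR blocks.
      vcn = ipaddress.ip_network('10.0.0.0/16')
      services = ipaddress.ip_network('10.96.0.0/16')
      pods = ipaddress.ip_network('10.244.0.0/16')

      # The services CIDR must not overlap the VCN CIDR. The pods CIDR must not
      # overlap any subnet's CIDR; checking against the whole VCN is stricter
      # than required, but always safe.
      print(vcn.overlaps(services))  # False means no overlap
      print(vcn.overlaps(pods))      # False means no overlap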
  7. Specify remaining details for the cluster in the Additional Add Ons panel:

    • Kubernetes Dashboard Enabled: Select if you want to use the Kubernetes Dashboard to deploy and troubleshoot containerized applications, and to manage Kubernetes resources. See Starting the Kubernetes Dashboard.
    • Tiller (Helm) Enabled: Select if you want Tiller (the server portion of Helm) to run in the Kubernetes cluster. With Tiller running in the cluster, you can use Helm to manage Kubernetes resources.
  8. Click Continue.
  9. (Optional) Specify configuration details for the first node pool in the cluster in the Node Pool panel:

    • Name: A name of your choice for the new node pool. Avoid entering confidential information.
    • Version: The version of Kubernetes to run on each worker node in the node pool. By default, the version of Kubernetes specified for the master nodes is selected. The Kubernetes version on worker nodes must be either the same version as that on the master nodes, or an earlier version that is still compatible. See Kubernetes Versions and Container Engine for Kubernetes.
    • Image: The image to use on each node in the node pool. An image is a template of a virtual hard drive that determines the operating system and other software for the node.
    • Shape: The shape to use for each node in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node. The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes.
    • Subnets: One or more subnets configured to host worker nodes. If you specified load balancer subnets, the worker node subnets must be different. The subnets you specify can be public or private. Note that Container Engine for Kubernetes does not yet support regional subnets, so you cannot currently select a regional subnet when specifying a worker node subnet. See Subnet Configuration.
    • Quantity per Subnet: The number of worker nodes to create for the node pool in each subnet.
    • Public SSH Key: (Optional) The public key portion of the key pair you want to use for SSH access to each node in the node pool. The public key is installed on all worker nodes in the cluster. If you don't specify a public SSH key, Container Engine for Kubernetes provides one; however, because you won't have the corresponding private key, you will not have SSH access to the worker nodes. Note that you cannot use SSH to directly access worker nodes in private subnets (see Connecting to Worker Nodes in Private Subnets Using SSH, and the sample key-generation command in the previous section).
    • Kubernetes Labels: One or more labels (in addition to a default label) to add to worker nodes in the node pool to enable the targeting of workloads at specific node pools (see the example after these details).
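
    For example, if you add a label such as pool-type: high-memory to a node pool, a pod can target the pool's nodes with a nodeSelector (a sketch; the label key and value, pod name, and image are placeholders):

      apiVersion: v1
      kind: Pod
      metadata:
        name: example-pod
      spec:
        nodeSelector:
          pool-type: high-memory
        containers:
        - name: app
          image: <your-image>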
  10. (Optional) Click Add node pool and specify configuration details for additional node pools in the cluster.

    If you define multiple node pools in a cluster, you can host all of them on a single subnet. However, as a best practice, host different node pools for a cluster on different subnets, one in each availability domain in the region.

  11. Click Review to confirm the resources that will be used and created.
  12. (Optional) Select View Detail Page After This Cluster Is Requested to return to the Cluster Details tab (rather than the Cluster List page) in the Console at the end of the cluster creation process.
  13. Click Create to create the new cluster.

    Container Engine for Kubernetes starts creating the cluster with the name you specified.

    If you specified details for one or more node pools, Container Engine for Kubernetes creates:

    • node pools with the names you specified
    • worker nodes with names in the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>

    If you selected View Detail Page After This Cluster Is Requested, you return to the Cluster Details tab in the Console. Otherwise, you return to the Cluster List page.

  14. Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a status of Active.

    Container Engine for Kubernetes also creates a Kubernetes kubeconfig configuration file that you use to access the cluster using kubectl and the Kubernetes Dashboard.

Using the API

For information about using the API and signing requests, see REST APIs and Security Credentials. For information about SDKs, see Software Development Kits and Command Line Interface.

Use the CreateCluster operation to create a cluster.
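
A minimal sketch using the OCI Python SDK (the OCIDs, cluster name, CIDR blocks, and Kubernetes version below are placeholders; the client and model names assume the SDK's container_engine module):

  import oci

  # Load credentials from the default OCI config file (~/.oci/config).
  config = oci.config.from_file()
  ce_client = oci.container_engine.ContainerEngineClient(config)

  # Placeholder OCIDs: substitute your own compartment and VCN.
  details = oci.container_engine.models.CreateClusterDetails(
      name='my-cluster',
      compartment_id='<compartment-ocid>',
      vcn_id='<vcn-ocid>',
      kubernetes_version='<kubernetes-version>',
      options=oci.container_engine.models.ClusterCreateOptions(
          kubernetes_network_config=oci.container_engine.models.KubernetesNetworkConfig(
              services_cidr='10.96.0.0/16',
              pods_cidr='10.244.0.0/16',
          ),
      ),
  )

  # Cluster creation is asynchronous: the response carries a work request ID
  # that you can poll to track progress.
  response = ce_client.create_cluster(details)
  print(response.headers['opc-work-request-id'])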