Oracle Cloud Infrastructure Documentation

Creating a Kubernetes Cluster

You can use Container Engine for Kubernetes to create new Kubernetes clusters. To create a cluster, you must either belong to the tenancy's Administrators group, or belong to a group to which a policy grants the CLUSTER_MANAGE permission. In addition, a policy in the root compartment must grant Container Engine for Kubernetes access to all resources in the tenancy. See Policy Configuration for Cluster Creation and Deployment.
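
For example, the policy in the root compartment that grants Container Engine for Kubernetes access to resources in the tenancy typically takes the following form (see Policy Configuration for Cluster Creation and Deployment for the exact statements required in your tenancy):

Allow service OKE to manage all-resources in tenancy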

You first specify basic details for the new cluster (the cluster name and the Kubernetes version to install on master nodes). You can then create the cluster in one of two ways:

  • Using default settings to create a 'quick cluster' with new network resources as required. This approach is the fastest way to create a new cluster. If you accept all the default values, you can create a new cluster in just a few clicks. New network resources for the 'quick cluster' are created automatically, including one regional subnet for worker nodes, and another regional subnet for load balancers. The regional subnet for load balancers will be public, but you can specify whether the regional subnet for worker nodes will be public or private. Note that if you specify a private regional subnet for worker nodes in the 'quick cluster', a NAT gateway is also created (in addition to an internet gateway). To create a 'quick cluster', you must belong to a group to which a policy grants the necessary permissions to create the new network resources (see Create One or More Additional Policies for Groups).
  • Using custom settings to create a 'custom cluster'. This approach gives you the most control over the new cluster. You can explicitly define the new cluster's properties, and explicitly specify which existing network resources to use, including the existing public or private subnets in which to create worker nodes and load balancers. The subnets can be regional subnets (recommended) or AD-specific subnets. Note that although you will usually define node pools immediately when defining a new 'custom cluster', you don't have to: you can create a 'custom cluster' with no node pools, and add node pools later.

Regardless of how you create a cluster, Container Engine for Kubernetes gives names to worker nodes in the following format:

oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>

where:

  • oke is the standard prefix for all worker nodes created by Container Engine for Kubernetes
  • c<part-of-cluster-OCID> is a portion of the cluster's OCID, prefixed with the letter c
  • n<part-of-node-pool-OCID> is a portion of the node pool's OCID, prefixed with the letter n
  • s<part-of-subnet-OCID> is a portion of the subnet's OCID, prefixed with the letter s
  • <slot> is an ordinal number of the node in the subnet (for example, 0, 1)

For example, if you specify that a cluster is to have two nodes in a node pool, the two nodes might be named:

  • oke-cywiqripuyg-nsgagklgnst-st2qczvnmba-0
  • oke-cywiqripuyg-nsgagklgnst-st2qczvnmba-1

Do not change the auto-generated names that Container Engine for Kubernetes gives to worker nodes.

To ensure high availability, Container Engine for Kubernetes:

  • creates the Kubernetes Control Plane on multiple Oracle-managed master nodes (distributing the master nodes across different availability domains in a region, where supported)
  • creates worker nodes in each of the fault domains in an availability domain (distributing the worker nodes as evenly as possible across the fault domains, subject to any other infrastructure restrictions)

Using the Console to create a 'Quick Cluster' with Default Settings

To create a 'quick cluster' with default settings and new network resources using Container Engine for Kubernetes:

  1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click Container Clusters.
  2. Choose a Compartment you have permission to work in.

  3. On the Cluster List page, click Create Cluster.
  4. In the Create Cluster Solution dialog, select Quick Create and click Launch Workflow.
  5. On the Create Cluster page, either accept the default configuration details for the new cluster, or specify alternatives as follows:

    • Name: The name of the new cluster. Either accept the default name or enter a name of your choice. Avoid entering confidential information.
    • Compartment: The compartment in which to create the new cluster and the associated network resources.
    • Kubernetes Version: The version of Kubernetes to run on the master nodes and worker nodes of the cluster. Either accept the default version or select a version of your choice. Amongst other things, the Kubernetes version you select determines the default set of admission controllers that are turned on in the created cluster (the set follows the recommendation given in the Kubernetes documentation for that version).
    • Visibility Type: Whether to create a private or a public regional subnet to host worker nodes (note that a public regional subnet is always created to host load balancers in a 'quick cluster', regardless of your selection here):

      • Private: Select to create a private regional subnet to host worker nodes (along with the public regional subnet to host load balancers).
      • Public: Select to create a public regional subnet to host worker nodes (along with the public regional subnet to host load balancers).
    • Shape: The shape to use for each node in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node. The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes.
    • Number of Nodes: The number of worker nodes to create in the node pool, placed in the regional subnet created for the 'quick cluster'. The nodes are distributed as evenly as possible across the availability domains in a region (or in the case of a region with a single availability domain, across the fault domains in that availability domain).
  6. Either accept the defaults for advanced cluster options, or click Show Advanced Options and specify alternatives as follows:

    • Public SSH Key: (Optional) The public key portion of the key pair you want to use for SSH access to each node in the node pool. The public key is installed on all worker nodes in the cluster. If you don't specify a public SSH key, Container Engine for Kubernetes provides one; however, because you won't have the corresponding private key, you will not have SSH access to the worker nodes. Also note that if you specify that the worker nodes in the 'quick cluster' are to be hosted in a private regional subnet, you cannot use SSH to access them directly (see Connecting to Worker Nodes in Private Subnets Using SSH). For one way to generate a key pair, see the first sketch following this procedure.
    • Kubernetes Labels: (Optional) One or more labels (in addition to a default label) to add to worker nodes in the node pool to enable the targeting of workloads at specific node pools. For an example of targeting a node pool by label, see the last sketch following this procedure.
  7. Either accept the defaults for add-ons, or specify alternatives in the Add Ons section:

    • Kubernetes Dashboard Enabled: Select if you want to use the Kubernetes Dashboard to deploy and troubleshoot containerized applications, and to manage Kubernetes resources. See Starting the Kubernetes Dashboard.
    • Tiller (Helm) Enabled: Select if you want Tiller (the server portion of Helm) to run in the Kubernetes cluster. With Tiller running in the cluster, you can use Helm to manage Kubernetes resources.

  8. Click Next to review the details you entered for the new cluster.
  9. Click Submit to create the new network resources and the new cluster.

    Container Engine for Kubernetes starts creating resources (as shown in the Creating cluster and associated network resources dialog):

    • the network resources (such as the VCN, internet gateway, NAT gateway, route tables, security lists, a regional subnet for worker nodes and another regional subnet for load balancers), with auto-generated names in the format oke-<resource-type>-quick-<cluster-name>-<creation-date>
    • the cluster, with the name you specified
    • the node pool, named pool1
    • worker nodes, with auto-generated names in the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>

    Do not change the resource names that Container Engine for Kubernetes has auto-generated. Note that if the cluster is not created successfully for some reason (for example, if you have insufficient permissions or have exceeded the cluster limit for the tenancy), any network resources created during the cluster creation process are not deleted automatically. You must delete any such unused network resources manually.

  10. Click Close to return to the Console.
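
If you don't already have an SSH key pair to paste into the Public SSH Key field in step 6, the following sketch shows one way to generate one. It is illustrative only and assumes the third-party Python cryptography package; the ssh-keygen command-line tool achieves the same result.

    # Illustrative sketch: generate an RSA key pair for SSH access to worker nodes.
    # Assumes 'pip install cryptography'; ssh-keygen is the more common alternative.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

    # Private key: keep it safe; it is never uploaded to Oracle Cloud Infrastructure.
    with open("oke_id_rsa", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))

    # Public key in OpenSSH format: paste this into the Public SSH Key field.
    with open("oke_id_rsa.pub", "wb") as f:
        f.write(key.public_key().public_bytes(
            serialization.Encoding.OpenSSH,
            serialization.PublicFormat.OpenSSH,
        ) + b"\n")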

Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a status of Active.

Container Engine for Kubernetes also creates a kubeconfig configuration file that you use to access the cluster using kubectl and the Kubernetes Dashboard.
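
If you prefer to script the download of the kubeconfig file, the following sketch shows one way to do it with the Oracle Cloud Infrastructure Python SDK. The cluster OCID is a placeholder, and the response handling is a sketch; check the SDK reference for the version you have installed.

    # Download the cluster's kubeconfig file using the oci Python SDK.
    # The cluster OCID below is a placeholder.
    import oci

    config = oci.config.from_file()  # reads ~/.oci/config by default
    ce_client = oci.container_engine.ContainerEngineClient(config)

    cluster_id = "ocid1.cluster.oc1..exampleuniqueID"  # replace with your cluster's OCID

    response = ce_client.create_kubeconfig(cluster_id)
    with open("kubeconfig", "wb") as f:
        f.write(response.data.content)  # response.data streams the kubeconfig content

With a kubeconfig file in hand, the Kubernetes labels you added in step 6 can be used to pin workloads to that node pool. The following sketch assumes the Kubernetes Python client and a hypothetical label (pool: gpu-nodes) added to the node pool.

    # Schedule a pod onto nodes carrying a specific node pool label.
    # Assumes 'pip install kubernetes'; the label shown is hypothetical.
    from kubernetes import client, config

    config.load_kube_config(config_file="kubeconfig")

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="nginx-on-gpu-nodes"),
        spec=client.V1PodSpec(
            node_selector={"pool": "gpu-nodes"},  # must match a node pool label
            containers=[client.V1Container(name="nginx", image="nginx")],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)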

Using the Console to create a 'Custom Cluster' with Explicitly Defined Settings

To create a 'custom cluster' with explicitly defined settings and existing network resources using Container Engine for Kubernetes:

  1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click Container Clusters.
  2. Choose a Compartment you have permission to work in.
  3. On the Cluster List page, click Create Cluster.
  4. In the Create Cluster Solution dialog, select Custom Create and click Launch Workflow.
  5. On the Create Cluster page, either accept the default configuration details for the new cluster, or specify alternatives as follows:

    • Name: The name of the new cluster. Either accept the default name or enter a name of your choice. Avoid entering confidential information.
    • Compartment: The compartment in which to create the new cluster.
    • Kubernetes Version: The version of Kubernetes to run on the master nodes and worker nodes of the cluster. Either accept the default version or select a version of your choice. Amongst other things, the Kubernetes version you select determines the default set of admission controllers that are turned on in the created cluster (the set follows the recommendation given in the Kubernetes documentation for that version).
  6. Either accept the defaults for advanced cluster options, or click Show Advanced Options and specify whether to encrypt Kubernetes secrets at rest in the etcd key-value store for the cluster using the Key Management service:
    • No Encryption: Kubernetes secrets at rest in the etcd key-value store are not encrypted.
    • Encrypt Using Customer-Managed Keys: Encrypt Kubernetes secrets in the etcd key-value store and specify:

      • Choose a Vault in <compartment-name>: The vault that contains the master encryption key, from the list of vaults in the specified compartment. By default, <compartment-name> is the compartment in which you are creating the cluster, but you can select a different compartment by clicking Change Compartment.
      • Choose a Key in <compartment-name>: The name of the master encryption key, from the list of keys in the specified compartment. By default, <compartment-name> is the compartment in which you are creating the cluster, but you can select a different compartment by clicking Change Compartment. Note that you cannot change the master encryption key after the cluster has been created.
    Note that if you want to use encryption, a suitable master encryption key, dynamic group, and policy must already exist before you can create the cluster. For more information, see Encrypting Kubernetes Secrets at Rest in Etcd.

  7. Either accept the defaults for add-ons, or specify alternatives:

    • Kubernetes Dashboard Enabled: Select if you want to use the Kubernetes Dashboard to deploy and troubleshoot containerized applications, and to manage Kubernetes resources. See Starting the Kubernetes Dashboard.
    • Tiller (Helm) Enabled: Select if you want Tiller (the server portion of Helm) to run in the Kubernetes cluster. With Tiller running in the cluster, you can use Helm to manage Kubernetes resources.

  8. Click Next and specify the existing network resources to use for the new cluster on the Network Setup page:

    • VCN in <compartment-name>: The existing virtual cloud network that has been configured for cluster creation and deployment. By default, <compartment-name> is the compartment in which you are creating the cluster, but you can select a different compartment by clicking Change Compartment. See VCN Configuration.
    • Kubernetes Service LB Subnets: Optionally, the existing subnets that have been configured to host load balancers. Load balancer subnets must be different from worker node subnets, can be public or private, and can be regional (recommended) or AD-specific. You don't have to specify any load balancer subnets. However, if you do specify load balancer subnets, the number of load balancer subnets to specify depends on the region in which you are creating the cluster and whether the subnets are regional or AD-specific.

      If you are creating a cluster in a region with three availability domains, you can specify:

      • Zero or one load balancer regional subnet (recommended).
      • Zero or two load balancer AD-specific subnets. If you specify two AD-specific subnets, the two subnets must be in different availability domains.

      If you are creating a cluster in a region with a single availability domain, you can specify:

      • Zero or one load balancer regional subnet (recommended).
      • Zero or one load balancer AD-specific subnet.

      See Subnet Configuration.

  9. Either accept the defaults for advanced cluster options, or click Show Advanced Options and specify alternatives as follows:

    • Kubernetes Service CIDR Block: The available group of network addresses that can be exposed as Kubernetes services (ClusterIPs), expressed as a single, contiguous IPv4 CIDR block. For example, 10.96.0.0/16. The CIDR block you specify must not overlap with the CIDR block for the VCN. See CIDR Blocks and Container Engine for Kubernetes.
    • Pods CIDR Block: The available group of network addresses that can be allocated to pods running in the cluster, expressed as a single, contiguous IPv4 CIDR block. For example, 10.244.0.0/16. The CIDR block you specify must not overlap with the CIDR blocks for subnets in the VCN, and can be outside the VCN CIDR block. See CIDR Blocks and Container Engine for Kubernetes. (For a quick way to check for overlaps, see the sketch following this procedure.)
  10. Click Next and optionally specify configuration details for the first node pool in the cluster on the Node Pools page:

    • Name: A name of your choice for the new node pool. Avoid entering confidential information.
    • Version: The version of Kubernetes to run on each worker node in the node pool. By default, the version of Kubernetes specified for the master nodes is selected. The Kubernetes version on worker nodes must be either the same version as that on the master nodes, or an earlier version that is still compatible. See Kubernetes Versions and Container Engine for Kubernetes.
    • Image: The image to use on each node in the node pool. An image is a template of a virtual hard drive that determines the operating system and other software for the node.
    • Shape: The shape to use for each node in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node. The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes.
    • Number of Nodes: The number of worker nodes to create in the node pool, placed in the availability domains you select, and in the regional subnet (recommended) or AD-specific subnet you specify for each availability domain.
    • Availability Domain 1:
      • Availability Domain: An availability domain in which to place worker nodes.
      • Subnet: A regional subnet (recommended) or AD-specific subnet configured to host worker nodes. If you specified load balancer subnets, the worker node subnets must be different. The subnets you specify can be public or private, and can be regional (recommended) or AD-specific. See Subnet Configuration.

      Optionally click Add Availability Domain to select additional domains and subnets in which to place worker nodes.

      When they are created, the worker nodes are distributed as evenly as possible across the availability domains you select (or in the case of a single availability domain, across the fault domains in that availability domain).

    • Public SSH Key: (Optional) The public key portion of the key pair you want to use for SSH access to each node in the node pool. The public key is installed on all worker nodes in the cluster. If you don't specify a public SSH key, Container Engine for Kubernetes provides one; however, because you won't have the corresponding private key, you will not have SSH access to the worker nodes. Note that you cannot use SSH to directly access worker nodes in private subnets (see Connecting to Worker Nodes in Private Subnets Using SSH).
    • Kubernetes Labels: One or more labels (in addition to a default label) to add to worker nodes in the node pool to enable the targeting of workloads at specific node pools.
  11. (Optional) Click Another node pool and specify configuration details for a second node pool and any subsequent node pools in the cluster.

    If you define multiple node pools in a cluster, you can host all of them on a single AD-specific subnet. However, it's best practice to host different node pools for a cluster on a regional subnet (recommended) or on different AD-specific subnets (one in each availability domain in the region).

  12. Click Next to review the details you entered for the new cluster.
  13. Click Create Cluster to create the new cluster.

    Container Engine for Kubernetes starts creating the cluster with the name you specified.

    If you specified details for one or more node pools, Container Engine for Kubernetes creates:

    • node pools with the names you specified
    • worker nodes with auto-generated names in the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>

    Do not change the auto-generated names of worker nodes.

  14. Click Close to return to the Console.

Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a status of Active.

Container Engine for Kubernetes also creates a kubeconfig configuration file that you use to access the cluster using kubectl and the Kubernetes Dashboard.
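
As a sanity check before step 9 of the procedure above, you can verify that the CIDR blocks you plan to enter do not overlap. A minimal sketch using Python's standard ipaddress module follows; the CIDR values shown are illustrative.

    # Check the Kubernetes service and pods CIDR blocks against the VCN CIDR.
    # The values shown are illustrative; substitute your own.
    import ipaddress

    vcn_cidr = ipaddress.ip_network("10.0.0.0/16")
    services_cidr = ipaddress.ip_network("10.96.0.0/16")  # Kubernetes Service CIDR Block
    pods_cidr = ipaddress.ip_network("10.244.0.0/16")     # Pods CIDR Block

    # The services CIDR must not overlap the VCN CIDR. The pods CIDR must not
    # overlap any subnet in the VCN; checking it against the whole VCN CIDR,
    # as here, is a stricter test than the rule requires.
    for name, cidr in [("services", services_cidr), ("pods", pods_cidr)]:
        if cidr.overlaps(vcn_cidr):
            print(f"warning: {name} CIDR {cidr} overlaps the VCN CIDR {vcn_cidr}")
        else:
            print(f"ok: {name} CIDR {cidr} does not overlap {vcn_cidr}")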

Using the API

For information about using the API and signing requests, see REST APIs and Security Credentials. For information about SDKs, see Software Development Kits and Command Line Interface.

Use the CreateCluster operation to create a cluster.
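
For example, the following sketch creates a 'custom cluster' using the Python SDK. All OCIDs and the Kubernetes version string are placeholders, and the optional kms_key_id line corresponds to the customer-managed key option described earlier; check the SDK reference for the model fields available in the version you have installed.

    # Create a cluster with explicitly defined settings using the oci Python SDK.
    # All OCIDs and the Kubernetes version below are placeholders.
    import oci

    config = oci.config.from_file()
    ce_client = oci.container_engine.ContainerEngineClient(config)

    details = oci.container_engine.models.CreateClusterDetails(
        name="my-custom-cluster",
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",
        vcn_id="ocid1.vcn.oc1..exampleuniqueID",
        kubernetes_version="v1.15.7",  # placeholder; choose a supported version
        # kms_key_id="ocid1.key.oc1..exampleuniqueID",  # optional: encrypt etcd secrets
        options=oci.container_engine.models.ClusterCreateOptions(
            service_lb_subnet_ids=["ocid1.subnet.oc1..exampleuniqueID"],  # LB subnet(s)
            kubernetes_network_config=oci.container_engine.models.KubernetesNetworkConfig(
                services_cidr="10.96.0.0/16",
                pods_cidr="10.244.0.0/16",
            ),
            add_ons=oci.container_engine.models.AddOnOptions(
                is_kubernetes_dashboard_enabled=True,
                is_tiller_enabled=False,
            ),
        ),
    )

    # Cluster creation is asynchronous: the response carries a work request ID
    # that you can poll to find out when the cluster becomes Active.
    response = ce_client.create_cluster(details)
    print(response.headers.get("opc-work-request-id"))

To add node pools to the new cluster, use the CreateNodePool operation in the same way.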