Setting Up Cluster Access

Find out how to set up access to the clusters you create using Kubernetes Engine (OKE). When you have completed the setup steps, you can use kubectl to manage the cluster.

To access a cluster using kubectl, you have to set up a Kubernetes configuration file (commonly known as a 'kubeconfig' file) for the cluster. The kubeconfig file (by default named config and stored in the $HOME/.kube directory) provides the necessary details to access the cluster. Having set up the kubeconfig file, you can start using kubectl to manage the cluster.
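
For example, if you save the kubeconfig file under a name or location other than the default, you can point kubectl at it by setting the KUBECONFIG environment variable. The path below is only illustrative:

# Point kubectl at a kubeconfig file saved in a non-default location
# (replace the path with the actual location of your file).
export KUBECONFIG=$HOME/oke/mycluster-kubeconfig

# Confirm that kubectl can reach the cluster described in the file.
kubectl cluster-info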

The steps to follow when setting up the kubeconfig file depend on how you want to access the cluster:

  • To access the cluster using kubectl in Cloud Shell, run an Oracle Cloud Infrastructure CLI command in the Cloud Shell window to set up the kubeconfig file.

    See Setting Up Cloud Shell Access to Clusters.

  • To access the cluster using a local installation of kubectl:

    • Generate an API signing key pair (if you don't already have one).
    • Upload the public key of the API signing key pair.
    • Install and configure the Oracle Cloud Infrastructure CLI.
    • Set up the kubeconfig file.

    See Setting Up Local Access to Clusters.

Setting Up Cloud Shell Access to Clusters

When a cluster's Kubernetes API endpoint has a public IP address, you can access the cluster in Cloud Shell by setting up a kubeconfig file.

Note

To access a cluster with a private Kubernetes API endpoint in Cloud Shell, you can configure a bastion using the Oracle Cloud Infrastructure Bastion service. For more information, see Setting Up a Bastion for Cluster Access.

To set up the kubeconfig file:
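
The exact command for your cluster is shown in the Access Your Cluster dialog box in the Console. As a rough sketch, a typical invocation in Cloud Shell looks like the following; the cluster OCID, region identifier, and kube-endpoint value are placeholders that depend on your cluster:

# Create (or update) the kubeconfig file for the cluster in Cloud Shell.
# Copy the exact command, including the cluster OCID and region, from the
# Access Your Cluster dialog box in the Console.
oci ce cluster create-kubeconfig \
  --cluster-id <cluster-ocid> \
  --file $HOME/.kube/config \
  --region <region-identifier> \
  --token-version 2.0.0 \
  --kube-endpoint PUBLIC_ENDPOINT

# Confirm access to the cluster.
kubectl get nodes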

Setting Up Local Access to Clusters

When a cluster's Kubernetes API endpoint does not have a public IP address, you can access the cluster from a local terminal if your network is peered with the cluster's VCN.

Note

To access a cluster with a private Kubernetes API endpoint from a local terminal, you can also configure a bastion using the Oracle Cloud Infrastructure Bastion service. For more information, see Setting Up a Bastion for Cluster Access.

To set up the kubeconfig file:
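
Assuming you have already generated and uploaded an API signing key pair and installed and configured the Oracle Cloud Infrastructure CLI (the steps listed in the introduction above), the final step takes roughly the following form in a local terminal. The cluster OCID, region identifier, and kube-endpoint value are placeholders; PRIVATE_ENDPOINT reflects the peered-VCN scenario described above:

# Create (or update) the local kubeconfig file for the cluster.
# Copy the exact command, including the cluster OCID and region, from the
# Access Your Cluster dialog box in the Console.
oci ce cluster create-kubeconfig \
  --cluster-id <cluster-ocid> \
  --file $HOME/.kube/config \
  --region <region-identifier> \
  --token-version 2.0.0 \
  --kube-endpoint PRIVATE_ENDPOINT

# If you saved the file under a different name or location, point kubectl at it.
export KUBECONFIG=$HOME/.kube/config

# Confirm access to the cluster over the peered network.
kubectl get nodes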

Notes about Kubeconfig Files

Note the following about kubeconfig files:

  • A single kubeconfig file can include the details for multiple clusters, as multiple contexts. The cluster on which kubectl operations are performed is determined by the current-context: element in the kubeconfig file (see the example after this list).
  • A kubeconfig file includes an Oracle Cloud Infrastructure CLI command that dynamically generates an authentication token and inserts it when you run a kubectl command. The Oracle Cloud Infrastructure CLI must be available on your shell's executable path (for example, $PATH on Linux).
  • The authentication tokens generated by the Oracle Cloud Infrastructure CLI command in the kubeconfig file are short-lived, cluster-scoped, and specific to individual users. As a result, you cannot share kubeconfig files between users to access Kubernetes clusters.
  • The Oracle Cloud Infrastructure CLI command in the kubeconfig file uses your current CLI profile when generating an authentication token. If you have defined multiple profiles in different tenancies in the CLI configuration file (for example, in ~/.oci/config), specify which profile to use when generating the authentication token in one of the following ways. In both cases, <profile-name> is the name of the profile defined in the CLI configuration file:

    • Add --profile to the args: section of the kubeconfig file as follows:

      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          args:
          - ce
          - cluster
          - generate-token
          - --cluster-id
          - <cluster ocid>
          - --profile
          - <profile-name>
          command: oci
          env: []
    • Set the OCI_CLI_PROFILE environment variable to the name of the profile defined in the CLI configuration file before running kubectl commands. For example:

      
      export OCI_CLI_PROFILE=<profile-name>
      
      kubectl get nodes
      
  • The authentication tokens generated by the Oracle Cloud Infrastructure CLI command in the kubeconfig file are appropriate to authenticate individual users accessing the cluster using kubectl. However, the generated authentication tokens are unsuitable if you want other processes and tools to access the cluster, such as continuous integration and continuous delivery (CI/CD) tools. In this case, consider creating a Kubernetes service account and adding its associated authentication token to the kubeconfig file. For more information, see Adding a Service Account Authentication Token to a Kubeconfig File.
  • An IAM policy might have been defined to restrict cluster access to only users that have been verified with multi-factor authentication (MFA). If such a policy exists, you have to add --profile and --auth arguments to the kubeconfig file to enable an MFA-verified user to access the cluster using kubectl, as follows. In both cases, <profile-name> is the name of the MFA-verified user's profile defined in the Oracle Cloud Infrastructure CLI configuration file:

    • Add the following arguments to the args: section of the kubeconfig file:

      
          - --profile
          - <profile-name>
          - --auth
          - security_token

      For example:

      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          args:
          - ce
          - cluster
          - generate-token
          - --cluster-id
          - <cluster ocid>
          - --profile
          - <profile-name>
          - --auth
          - security_token
          command: oci
          env: []
    • Set the OCI_CLI_PROFILE environment variable to the name of the MFA-verified user's profile defined in the CLI configuration file before running kubectl commands. For example:

      
      export OCI_CLI_PROFILE=<profile-name>
      
      kubectl get nodes
      

    After you update the kubeconfig file, the user accessing the cluster must be MFA-verified. If you attempt to access the cluster as a user that has not been MFA-verified, the message error: You must be logged in to the server (Unauthorized) is displayed.

    For more information about MFA-verified users, see Managing Multifactor Authentication.
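
For example, you can use standard kubectl commands to list the contexts defined in a kubeconfig file and to change the current context; the context name shown is a placeholder:

# List all contexts defined in the kubeconfig file; the current context
# is marked with an asterisk.
kubectl config get-contexts

# Perform subsequent kubectl operations against a different cluster by
# changing the current context (the context name is a placeholder).
kubectl config use-context <context-name>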

Upgrading Kubeconfig Files from Version 1.0.0 to Version 2.0.0

Kubernetes Engine currently supports kubeconfig version 2.0.0 files, and no longer supports kubeconfig version 1.0.0 files.

Enhancements in kubeconfig version 2.0.0 files provide security improvements for your Kubernetes environment, including short-lived cluster-scoped tokens with automated refreshing, and support for instance principals to access Kubernetes clusters. Additionally, authentication tokens are generated on-demand for each cluster, so kubeconfig version 2.0.0 files cannot be shared between users to access Kubernetes clusters (unlike kubeconfig version 1.0.0 files).

Note that kubeconfig version 2.0.0 files are not compatible with kubectl versions prior to version 1.11.9. If you are currently running kubectl version 1.10.x or older, upgrade kubectl to version 1.11.9 or later. For more information about compatibility between different versions of Kubernetes and kubectl, see the Kubernetes documentation.
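
For example, you can check which kubectl client version you are currently running as follows, and upgrade kubectl if the version shown is older than 1.11.9:

# Display the version of the kubectl client.
kubectl version --client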

Follow the instructions below to determine the version of a cluster's kubeconfig file, and to upgrade any remaining kubeconfig version 1.0.0 files to version 2.0.0.

Determine the kubeconfig file version

To determine the version of a cluster's kubeconfig file:

1. In a terminal window (the Cloud Shell window or a local terminal window as appropriate), enter the following command to see the format of the kubeconfig file currently pointed at by the KUBECONFIG environment variable:

kubectl config view

2. If the kubeconfig file is version 1.0.0, you see a response in the following format:

users:
- name: <username>
  user:
    token: <token-value>

If you see a response in the above format, you have to upgrade the kubeconfig file. See Upgrade a kubeconfig version 1.0.0 file to version 2.0.0, below.

3. If the kubeconfig file is version 2.0.0, you see a response in the following format:

user:
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    args:
    - ce
    - cluster
    - generate-token
    - --cluster-id
    - <cluster ocid>
    command: oci
    env: []

If you see a response in the above format, no further action is required.
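
If you want to confirm that the Oracle Cloud Infrastructure CLI can generate tokens for the cluster outside of kubectl, you can manually run the same command that appears in the exec: section of the kubeconfig file; the cluster OCID is a placeholder:

# Manually generate a short-lived authentication token for the cluster,
# using the same command that kubectl runs via the kubeconfig file.
# A JSON credential response indicates the CLI is configured correctly.
oci ce cluster generate-token --cluster-id <cluster-ocid>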

Upgrade a kubeconfig version 1.0.0 file to version 2.0.0

To upgrade a kubeconfig version 1.0.0 file:

  1. In the case of a local installation of kubectl, confirm that Oracle Cloud Infrastructure CLI version 2.6.4 (or later) is installed by entering:

    oci --version

    If the Oracle Cloud Infrastructure CLI version is earlier than version 2.6.4, upgrade the CLI to a later version. See Upgrading the CLI.

  2. Follow the appropriate instructions to set up the kubeconfig file for use in Cloud Shell or locally (see Setting Up Cloud Shell Access to Clusters or Setting Up Local Access to Clusters). Running the oci ce cluster create-kubeconfig command shown in the Access Your Cluster dialog box upgrades the existing kubeconfig version 1.0.0 file. If you change the name or location of the kubeconfig file, set the KUBECONFIG environment variable to point to the new name and location of the file.

  3. Confirm the kubeconfig file is now version 2.0.0:
    1. In a terminal window (the Cloud Shell window or a local terminal window as appropriate), enter:

      kubectl config view
    2. Confirm that the response is in the following format:

      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          args:
          - ce
          - cluster
          - generate-token
          - --cluster-id
          - <cluster ocid>
          command: oci
          env: []
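
Having confirmed that the kubeconfig file is version 2.0.0, you can check that cluster access still works as expected, for example:

# Verify that kubectl can authenticate to the cluster using the upgraded
# kubeconfig file.
kubectl get nodes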