Setting Up Cluster Access

To access a cluster using kubectl, you have to set up a Kubernetes configuration file (commonly known as a 'kubeconfig' file) for the cluster. The kubeconfig file (by default named config and stored in the $HOME/.kube directory) provides the necessary details to access the cluster. Having set up the kubeconfig file, you can start using kubectl to manage the cluster.

The steps to follow when setting up the kubeconfig file depend on how you want to access the cluster:

  • To access the cluster using kubectl in Cloud Shell, run an Oracle Cloud Infrastructure CLI command in the Cloud Shell window to set up the kubeconfig file.
  • To access the cluster using a local installation of kubectl:

    • Generate an API signing key pair (if you don't already have one).
    • Upload the public key of the API signing key pair.
    • Install and configure the Oracle Cloud Infrastructure CLI.
    • Set up the kubeconfig file.

    See Setting Up Local Access to Clusters.

Setting Up Cloud Shell Access to Clusters

To set up a kubeconfig file to enable access to a cluster using kubectl in Cloud Shell:

Step 1: Set up the kubeconfig file
  1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click Kubernetes Clusters.
  2. Choose a Compartment you have permission to work in.
  3. On the Cluster List page, click the name of the cluster you want to access using kubectl. The Cluster page shows details of the cluster.
  4. Click the Access Cluster button to display the Access Your Cluster dialog box.
  5. Click Cloud Shell Access.
  6. Click Launch Cloud Shell to display the Cloud Shell window.
  7. Run the Oracle Cloud Infrastructure CLI command to set up the kubeconfig file and save it in a location accessible to kubectl.

    For example, enter the following command (or copy and paste it from the Access Your Cluster dialog box) in the Cloud Shell window:

    $ oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.phx.aaaaaaaaae... --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0

    where ocid1.cluster.oc1.phx.aaaaaaaaae... is the OCID of the current cluster. For convenience, the command in the Access Your Cluster dialog box already includes the cluster's OCID.

    Note that if a kubeconfig file already exists in the location you specify, details about the cluster will be added as a new context to the existing kubeconfig file. The current-context: element in the kubeconfig file will be set to point to the newly added context.
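    The multi-context behavior can be seen locally. The following sketch writes a minimal two-context kubeconfig (the file contents and context names are illustrative only) and shows that the current-context: element determines which cluster kubectl operates on:

    ```shell
    # Write a minimal, illustrative kubeconfig with two contexts
    # (the context names are hypothetical):
    cat > /tmp/sample-kubeconfig <<'EOF'
    apiVersion: v1
    kind: Config
    current-context: context-cluster-b
    contexts:
    - name: context-cluster-a
    - name: context-cluster-b
    EOF

    # kubectl operates on whichever cluster current-context: points to:
    grep '^current-context:' /tmp/sample-kubeconfig
    ```

    To switch between clusters recorded in the same kubeconfig file, run kubectl config get-contexts to list the available contexts, and kubectl config use-context <context-name> to change the current context.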

    Tip

    For clipboard operations in the Cloud Shell window, Windows users can use Ctrl-C or Ctrl-Insert to copy, and Shift-Insert to paste. Mac OS users can use Cmd-C to copy and Cmd-V to paste.
  8. If you don't save the kubeconfig file in the default location ($HOME/.kube) or with the default name (config), set the value of the KUBECONFIG environment variable to point to the name and location of the kubeconfig file. For example, enter the following command in the Cloud Shell window:

    $ export KUBECONFIG=$HOME/.kube/config
Step 2: Verify that kubectl can access the cluster

Verify that kubectl can connect to the cluster by entering the following command in the Cloud Shell window:

$ kubectl get nodes

Information about the nodes in the cluster is shown.

You can now use kubectl to perform operations on the cluster.

Setting Up Local Access to Clusters

To set up a kubeconfig file to enable access to a cluster using a local installation of kubectl:

Step 1: Generate an API signing key pair

If you already have an API signing key pair, go straight to the next step. If not:

  1. Use OpenSSL commands to generate the key pair in the required PEM format. If you're using Windows, you'll need to install Git Bash for Windows and run the commands with that tool. See How to Generate an API Signing Key.
  2. Copy the contents of the public key to the clipboard (you'll need to paste the value into the Console later).
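    As a sketch of the OpenSSL commands referred to above (the key size, file names, and the ~/.oci directory follow the conventions in the Oracle Cloud Infrastructure documentation; adjust them to suit your environment):

    ```shell
    # Create the directory that conventionally holds OCI API keys:
    mkdir -p "$HOME/.oci"

    # Generate a 2048-bit RSA private key in PEM format:
    openssl genrsa -out "$HOME/.oci/oci_api_key.pem" 2048

    # Restrict the private key so only you can read it:
    chmod 600 "$HOME/.oci/oci_api_key.pem"

    # Derive the public key (this is the value you upload in the Console):
    openssl rsa -pubout -in "$HOME/.oci/oci_api_key.pem" \
        -out "$HOME/.oci/oci_api_key_public.pem"

    # Print the public key so you can copy it to the clipboard:
    cat "$HOME/.oci/oci_api_key_public.pem"
    ```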

Step 2: Upload the public key of the API signing key pair
  1. In the top-right corner of the Console, open the Profile menu (User menu icon) and then click User Settings to view the details.

  2. Click Add Public Key.

  3. Paste the public key's value into the window and click Add.

    The key is uploaded and its fingerprint is displayed (for example, d1:b2:32:53:d3:5f:cf:68:2d:6f:8b:5f:77:8f:07:13).

Step 3: Install and configure the Oracle Cloud Infrastructure CLI
  1. Install the Oracle Cloud Infrastructure CLI version 2.6.4 (or later). See Quickstart.

  2. Configure the Oracle Cloud Infrastructure CLI. See Configuring the CLI.
Step 4: Set up the kubeconfig file
  1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click Kubernetes Clusters.
  2. Choose a Compartment you have permission to work in.
  3. On the Cluster List page, click the name of the cluster you want to access using kubectl. The Cluster page shows details of the cluster.
  4. Click the Access Cluster button to display the Access Your Cluster dialog box.

  5. Click Local Access.
  6. Create a directory to contain the kubeconfig file. By default, the expected directory name is $HOME/.kube.

    For example, on Linux, enter the following command (or copy and paste it from the Access Your Cluster dialog box) in a local terminal window:

    $ mkdir -p $HOME/.kube
  7. Run the Oracle Cloud Infrastructure CLI command to set up the kubeconfig file and save it in a location accessible to kubectl.

    For example, on Linux, enter the following command (or copy and paste it from the Access Your Cluster dialog box) in a local terminal window:

    $ oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.phx.aaaaaaaaae... --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0

    where ocid1.cluster.oc1.phx.aaaaaaaaae... is the OCID of the current cluster. For convenience, the command in the Access Your Cluster dialog box already includes the cluster's OCID.

    Note that if a kubeconfig file already exists in the location you specify, details about the cluster will be added as a new context to the existing kubeconfig file. The current-context: element in the kubeconfig file will be set to point to the newly added context.

  8. If you don't save the kubeconfig file in the default location ($HOME/.kube) or with the default name (config), set the value of the KUBECONFIG environment variable to point to the name and location of the kubeconfig file. For example, on Linux, enter the following command (or copy and paste it from the Access Your Cluster dialog box) in a local terminal window:

    $ export KUBECONFIG=$HOME/.kube/config
Step 5: Verify that kubectl can access the cluster
  1. Verify that kubectl is available by entering the following command in a local terminal window:

    $ kubectl version

    The response shows:

    • the version of kubectl installed and running locally
    • the version of Kubernetes (strictly speaking, the version of the kube-apiserver) running on the cluster's master node

    Note that the kubectl version must be within one minor version (older or newer) of the Kubernetes version running on the master node. If kubectl is more than one minor version older or newer, install an appropriate version of kubectl. See Kubernetes version and version skew support policy in the Kubernetes documentation.

    If the command returns an error indicating that kubectl is not available, install kubectl (see the kubectl documentation), and repeat this step.

  2. Verify that kubectl can connect to the cluster by entering the following command in a local terminal window:

    $ kubectl get nodes

    Information about the nodes in the cluster is shown.

    You can now use kubectl to perform operations on the cluster.

Notes about Kubeconfig Files

Note the following about kubeconfig files:

  • A single kubeconfig file can include the details for multiple clusters, as multiple contexts. The cluster on which operations will be performed is specified by the current-context: element in the kubeconfig file.
  • A kubeconfig file includes an Oracle Cloud Infrastructure CLI command that dynamically generates an authentication token and inserts it when you run a kubectl command. The Oracle Cloud Infrastructure CLI must be available on your shell's executable path (for example, $PATH on Linux).
  • The authentication tokens generated by the Oracle Cloud Infrastructure CLI command in the kubeconfig file are short-lived, cluster-scoped, and specific to individual users. As a result, you cannot share kubeconfig files between users to access Kubernetes clusters.
  • The Oracle Cloud Infrastructure CLI command in the kubeconfig file uses your current CLI profile when generating an authentication token. If you have defined multiple profiles in different tenancies in the CLI configuration file (for example, in ~/.oci/config), specify which profile to use when generating the authentication token as follows. In both cases, <profile-name> is the name of the profile defined in the CLI configuration file:

    • Add --profile to the args: section of the kubeconfig file as follows:

      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          args:
          - ce
          - cluster
          - generate-token
          - --cluster-id
          - <cluster ocid>
          - --profile
          - <profile-name>
          command: oci
          env: []
    • Set the OCI_CLI_PROFILE environment variable to the name of the profile defined in the CLI configuration file before running kubectl commands. For example:

      $ export OCI_CLI_PROFILE=<profile-name>
      $ kubectl get nodes
  • The authentication tokens generated by the Oracle Cloud Infrastructure CLI command in the kubeconfig file are appropriate to authenticate individual users accessing the cluster using kubectl. However, the generated authentication tokens are unsuitable if you want other processes and tools to access the cluster, such as continuous integration and continuous delivery (CI/CD) tools. In this case, consider creating a Kubernetes service account and adding its associated authentication token to the kubeconfig file. For more information, see Adding a Service Account Authentication Token to a Kubeconfig File.

Upgrading Kubeconfig Files from Version 1.0.0 to Version 2.0.0

Container Engine for Kubernetes currently supports kubeconfig version 2.0.0 files, and no longer supports kubeconfig version 1.0.0 files.

Enhancements in kubeconfig version 2.0.0 files provide security improvements for your Kubernetes environment, including short-lived cluster-scoped tokens with automated refreshing, and support for instance principals to access Kubernetes clusters. Additionally, authentication tokens are generated on-demand for each cluster, so kubeconfig version 2.0.0 files cannot be shared between users to access Kubernetes clusters (unlike kubeconfig version 1.0.0 files).

Note that kubeconfig version 2.0.0 files are not compatible with kubectl versions prior to version 1.11.9. If you are currently running kubectl version 1.10.x or older, upgrade kubectl to version 1.11.9 or later. For more information about compatibility between different versions of Kubernetes and kubectl, see the Kubernetes documentation.

Follow the instructions below to determine the current version of a cluster's kubeconfig file, and to upgrade any remaining kubeconfig version 1.0.0 files to version 2.0.0.

Determine the kubeconfig file version

To determine the version of a cluster's kubeconfig file:

1. In a terminal window (the Cloud Shell window or a local terminal window as appropriate), enter the following command to see the format of the kubeconfig file currently pointed at by the KUBECONFIG environment variable:

$ kubectl config view

2. If the kubeconfig file is version 1.0.0, you see a response in the following format:

users:
- name: <username>
  user:
    token: <token-value>

If you see a response in the above format, you have to upgrade the kubeconfig file. See Upgrade a kubeconfig version 1.0.0 file to version 2.0.0 below.

3. If the kubeconfig file is version 2.0.0, you see a response in the following format:

user:
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    args:
    - ce
    - cluster
    - generate-token
    - --cluster-id
    - <cluster ocid>
    command: oci
    env: []

If you see a response in the above format, no further action is required.
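If you prefer a quick command-line check, the distinguishing feature of a version 2.0.0 file is the exec: block that calls oci ce cluster generate-token, whereas a version 1.0.0 file stores a static token: value. The following sketch uses an illustrative sample file to demonstrate the test:

```shell
# Write an illustrative version 2.0.0-style users: section to a sample file:
cat > /tmp/kubeconfig-sample <<'EOF'
users:
- name: user-example
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: oci
      args: [ce, cluster, generate-token]
EOF

# A version 2.0.0 file contains generate-token; a version 1.0.0 file
# contains a static token: value instead:
if grep -q 'generate-token' /tmp/kubeconfig-sample; then
  echo "kubeconfig appears to be version 2.0.0"
else
  echo "kubeconfig appears to be version 1.0.0 - upgrade it"
fi
```

Against a real file, run the same grep on the kubeconfig file pointed at by the KUBECONFIG environment variable (or on $HOME/.kube/config).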

Upgrade a kubeconfig version 1.0.0 file to version 2.0.0

To upgrade a kubeconfig version 1.0.0 file:

  1. In the case of a local installation of kubectl, confirm that Oracle Cloud Infrastructure CLI version 2.6.4 (or later) is installed by entering:

    $ oci --version

    If the Oracle Cloud Infrastructure CLI version is earlier than version 2.6.4, upgrade the CLI to a later version. See Upgrading the CLI.

  2. Follow the appropriate instructions to set up the kubeconfig file for use in Cloud Shell or locally (see Setting Up Cloud Shell Access to Clusters or Setting Up Local Access to Clusters). Running the oci ce cluster create-kubeconfig command shown in the Access Your Cluster dialog box upgrades the existing kubeconfig version 1.0.0 file. If you change the name or location of the kubeconfig file, set the KUBECONFIG environment variable to point to the new name and location of the file.

  3. Confirm the kubeconfig file is now version 2.0.0:
    1. In a terminal window (the Cloud Shell window or a local terminal window as appropriate), enter:

      $ kubectl config view
    2. Confirm that the response is in the following format:

      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          args:
          - ce
          - cluster
          - generate-token
          - --cluster-id
          - <cluster ocid>
          command: oci
          env: []