To perform operations on a Kubernetes cluster, you must have appropriate permissions to access the cluster.
For most operations on Kubernetes clusters created and managed by Container Engine for Kubernetes, Oracle Cloud Infrastructure Identity and Access Management (IAM) provides access control. A user's permissions to access clusters come from the groups to which they belong. The permissions for a group are defined by policies, which specify what actions members of the group can perform, and in which compartments. Users can then access clusters and perform operations according to the policies set for the groups they belong to.
IAM provides control over:
- whether a user can create or delete clusters
- whether a user can add, remove, or modify node pools
- which Kubernetes object create/delete/view operations a user can perform on all clusters within a compartment or tenancy
In addition to IAM, the Kubernetes RBAC Authorizer can enforce additional fine-grained access control for users on specific clusters via Kubernetes RBAC roles and clusterroles. A Kubernetes RBAC role is a collection of permissions. For example, a role might include get and list permissions on pods. A Kubernetes RBAC clusterrole is just like a role, except that it applies across the entire cluster rather than within a single namespace. A Kubernetes RBAC rolebinding maps a role to a user or set of users, granting that role's permissions to those users for resources in the role's namespace. Similarly, a Kubernetes RBAC clusterrolebinding maps a clusterrole to a user or set of users, granting that clusterrole's permissions to those users across the entire cluster.
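As an illustration (a sketch only; the names node-reader, read-nodes, and the user jdoe are hypothetical placeholders), a clusterrole and clusterrolebinding that let a user list cluster-scoped resources such as nodes might look like:

```yaml
# Hypothetical sketch: node-reader, read-nodes, and jdoe are placeholder names.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader           # clusterroles are not namespaced
rules:
- apiGroups: [""]             # "" indicates the core API group
  resources: ["nodes"]        # nodes are cluster-scoped, so a namespaced role cannot grant this
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes
subjects:
- kind: User
  name: jdoe                  # the user being granted the clusterrole's permissions
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader           # the clusterrole whose permissions are granted
  apiGroup: rbac.authorization.k8s.io
```

A namespaced role and rolebinding follow the same shape, with a metadata.namespace field scoping the grant to a single namespace.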
IAM and the Kubernetes RBAC Authorizer work together to enable users who have been successfully authorized by at least one of them to complete the requested Kubernetes operation.
When a user attempts to perform any operation on a cluster (except for create role and create clusterrole operations), IAM first determines whether the group to which the user belongs has the appropriate and sufficient permissions. If so, the operation succeeds. If the attempted operation also requires additional permissions granted via a Kubernetes RBAC role or clusterrole, the Kubernetes RBAC Authorizer then determines whether the user has been granted the appropriate Kubernetes role or clusterrole.
Typically, you’ll want to define your own Kubernetes RBAC roles and clusterroles when deploying a Kubernetes cluster to provide additional fine-grained control. When you attempt to perform a create role or create clusterrole operation, the Kubernetes RBAC Authorizer first determines whether you have sufficient Kubernetes privileges. To create a role or clusterrole, you must have been assigned an existing Kubernetes RBAC role (or clusterrole) that has at least the same or higher privileges as the new role (or clusterrole) you’re attempting to create.
By default, users are not assigned any Kubernetes RBAC roles (or clusterroles). So before attempting to create a new role (or clusterrole), you must be assigned an appropriately privileged role (or clusterrole). A number of such roles and clusterroles are always created by default, including the cluster-admin clusterrole (for a full list, see Default Roles and Role Bindings in the Kubernetes documentation). The cluster-admin clusterrole essentially confers super-user privileges: a user granted the cluster-admin clusterrole can perform any operation across all namespaces in a given cluster.
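For reference, the rules of the built-in cluster-admin clusterrole amount to a wildcard grant. The following is a sketch of its shape; you can inspect the live definition in your own cluster with kubectl get clusterrole cluster-admin -o yaml:

```yaml
# Sketch of the built-in cluster-admin clusterrole's rules (verify against your cluster).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-admin
rules:
- apiGroups: ["*"]        # every API group
  resources: ["*"]        # every resource type
  verbs: ["*"]            # every action
- nonResourceURLs: ["*"]  # non-resource endpoints such as /healthz
  verbs: ["*"]
```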
Note that Oracle Cloud Infrastructure tenancy administrators already have sufficient privileges, and do not require the cluster-admin clusterrole.
The following instructions assume:
- You have the required access to create Kubernetes RBAC roles and clusterroles, either because you're in the tenancy's Administrators group, or because you have the Kubernetes RBAC cluster-admin clusterrole.
- The user to which you want to grant the RBAC cluster-admin clusterrole is not an OCI tenancy administrator. If they are an OCI tenancy administrator, they do not require the Kubernetes RBAC cluster-admin clusterrole.
Follow these steps to grant a user who is not a tenancy administrator the Kubernetes RBAC cluster-admin clusterrole on a cluster deployed on Oracle Cloud Infrastructure:
If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up Cluster Access.
In a terminal window, grant the Kubernetes RBAC cluster-admin clusterrole to the user by entering:
$ kubectl create clusterrolebinding <my-cluster-admin-binding> --clusterrole=cluster-admin --user=<user_OCID>
where:

- <my-cluster-admin-binding> is a string of your choice to be used as the name for the binding between the user and the Kubernetes RBAC cluster-admin clusterrole. For example, jdoe_clst_adm.
- <user_OCID> is the user's OCID (obtained from the Console). For example, ocid1.user.oc1..aaaaa...zutq (abbreviated for readability).

For example:

$ kubectl create clusterrolebinding jdoe_clst_adm --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaa...zutq
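Equivalently, the same binding can be expressed declaratively as a YAML manifest and created with kubectl apply -f. This is a sketch using the binding name and abbreviated user OCID from the example above:

```yaml
# Declarative equivalent of the kubectl create clusterrolebinding command above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jdoe_clst_adm
subjects:
- kind: User
  name: ocid1.user.oc1..aaaaa...zutq   # the user's OCID (abbreviated for readability)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin                  # the built-in super-user clusterrole
  apiGroup: rbac.authorization.k8s.io
```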
The following instructions assume you're in the tenancy's Administrators group, and therefore have:
- the required permissions to create clusters, and to manage users and groups
- the required access to create Kubernetes RBAC roles and clusterroles
Follow these steps to give a developer the necessary Oracle Cloud Infrastructure and Kubernetes RBAC permissions to use kubectl to view pods running on a cluster deployed on Oracle Cloud Infrastructure:
- Create a new Oracle Cloud Infrastructure user for the developer to use (for example, called firstname.lastname@example.org), and make a note of the new user's OCID (for example, ocid1.user.oc1..aaaaa...tx5a, abbreviated for readability). See To create a user.
- Create a new Oracle Cloud Infrastructure group (for example, called acme-dev-pod-vwr) and add the new user to the group. See To create a group.
Create a new Oracle Cloud Infrastructure policy that grants the new group the CLUSTER_USE permission on clusters, with a policy statement like:
Allow group acme-dev-pod-vwr to use clusters in <location>
In the above policy statement, replace <location> with either tenancy (if you are creating the policy in the tenancy's root compartment) or compartment <compartment-name> (if you are creating the policy in an individual compartment).
See To create a policy.
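For example, the two placements of the policy statement look like the following (the compartment name remains a placeholder for you to fill in):

```
Allow group acme-dev-pod-vwr to use clusters in tenancy
```

or, for an individual compartment:

```
Allow group acme-dev-pod-vwr to use clusters in compartment <compartment-name>
```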
- Create a new cluster in the Console. See Creating a Kubernetes Cluster.
- Follow the steps to set up the cluster's kubeconfig configuration file and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up Cluster Access.
In a text editor, create a file (for example, called role-pod-reader.yaml) with the following content. This file defines a Kubernetes RBAC role that enables users to read pod details.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
In a terminal window, create the new role in the cluster using kubectl. For example, if you named the YAML file that defines the new role role-pod-reader.yaml, enter the following:
$ kubectl create -f role-pod-reader.yaml
In a terminal window, bind the Kubernetes RBAC role you just created to the Oracle Cloud Infrastructure user account you created earlier by entering the following to create a new rolebinding (in this case, called pod-reader-binding):
$ kubectl create rolebinding pod-reader-binding --role=pod-reader --user=ocid1.user.oc1..aaaaa...tx5a
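The same rolebinding can also be expressed declaratively as a YAML manifest and created with kubectl apply -f. This is a sketch using the binding name, role, and abbreviated user OCID from the steps above:

```yaml
# Declarative equivalent of the kubectl create rolebinding command above.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default          # must match the namespace of the pod-reader role
  name: pod-reader-binding
subjects:
- kind: User
  name: ocid1.user.oc1..aaaaa...tx5a   # the user's OCID (abbreviated for readability)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader            # the role created in the previous step
  apiGroup: rbac.authorization.k8s.io
```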
Give the developer the credentials of the new Oracle Cloud Infrastructure user you created earlier, and tell the developer they can now see details of pods running on the cluster deployed on Oracle Cloud Infrastructure by:
- Signing in to the Console using the new user's credentials.
- Following the instructions in Setting Up Cluster Access to set up their own copy of the cluster's kubeconfig file. If the file does not have the expected default name and location of
$HOME/.kube/config, the developer will also have to set the KUBECONFIG environment variable to point to the file. Note that the developer must set up their own kubeconfig file. They cannot access a cluster using a kubeconfig file that you (or a different user) set up.
Using kubectl to see details of the pods by entering:
$ kubectl get pods
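The developer can also confirm the scope of their access with kubectl auth can-i, which asks the API server whether the current user may perform a given action. A sketch (assuming the pod-reader role and binding above are the developer's only RBAC grants):

```shell
# Should answer "yes": the pod-reader role grants get/watch/list on pods in default.
kubectl auth can-i list pods --namespace default

# Should answer "no": the role does not grant the delete verb.
kubectl auth can-i delete pods --namespace default
```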