Upgrading Clusters to Newer Kubernetes Versions
When Container Engine for Kubernetes adds support for a newly released version of Kubernetes, you can upgrade the Kubernetes version running on the master nodes and worker nodes in a cluster.
The master nodes and worker nodes that comprise the cluster can run different versions of Kubernetes, provided you follow the Kubernetes version skew support policy described in the Kubernetes documentation.
You upgrade master nodes and worker nodes differently:
You upgrade master nodes by upgrading the cluster itself, specifying a more recent Kubernetes version for the cluster. Master nodes running older versions of Kubernetes are then upgraded. Because Container Engine for Kubernetes distributes the Kubernetes Control Plane across multiple Oracle-managed master nodes to ensure high availability (spread across different availability domains in regions where supported), you can upgrade the Kubernetes version running on master nodes with zero downtime.
Having upgraded master nodes to a new version of Kubernetes, you can subsequently create new node pools with worker nodes running the newer version. Alternatively, you can continue to create new node pools with worker nodes running older versions of Kubernetes (provided those older versions are compatible with the Kubernetes version running on the master nodes).
For more information about master node upgrade, see Upgrading the Kubernetes Version on Master Nodes in a Cluster.
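As a sketch only (the cluster OCID and target version shown are placeholders, and the exact steps are described in the linked topic), a master node upgrade might be initiated from the command line with the OCI CLI, assuming it is installed and configured:

```shell
# Upgrade the cluster's master nodes by specifying a more recent
# Kubernetes version for the cluster. OCID and version are placeholders.
oci ce cluster update \
  --cluster-id ocid1.cluster.oc1..exampleuniqueID \
  --kubernetes-version v1.26.2
```

The same operation can also be performed in the Console.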
You upgrade worker nodes in one of two ways:
- By performing an 'in-place' upgrade of a node pool in the cluster, specifying a more recent Kubernetes version for the existing node pool.
- By performing an 'out-of-place' upgrade of a node pool in the cluster, replacing the original node pool with a new node pool for which you've specified a more recent Kubernetes version.
For more information about worker node upgrade, see Upgrading the Kubernetes Version on Worker Nodes in a Cluster.
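For an 'out-of-place' upgrade, after creating the replacement node pool you typically cordon and drain the worker nodes in the original node pool so that workloads are rescheduled onto the new worker nodes. A minimal sketch, assuming kubectl is configured for the cluster (the node name is a placeholder):

```shell
# Prevent new pods from being scheduled on the old worker node.
kubectl cordon <old-node-name>

# Evict the node's pods so they reschedule onto the new node pool.
# (On older kubectl releases, --delete-emptydir-data was named
# --delete-local-data.)
kubectl drain <old-node-name> --ignore-daemonsets --delete-emptydir-data
```

Once the original node pool is drained, it can be deleted.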
To find out more about the Kubernetes versions currently and previously supported by Container Engine for Kubernetes, see Supported Versions of Kubernetes.
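One way to see which Kubernetes versions the service currently offers is to query it directly; a sketch, assuming the OCI CLI is installed and configured:

```shell
# List the options (including Kubernetes versions) available
# when creating or upgrading clusters.
oci ce cluster-options get --cluster-option-id all
```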
Notes about Upgrading Clusters
Note the following when upgrading clusters:
- Container Engine for Kubernetes only upgrades the Kubernetes version running on master nodes when you explicitly initiate the upgrade operation.
- After upgrading master nodes to a newer version of Kubernetes, you cannot downgrade the master nodes to an earlier Kubernetes version.
- Before you upgrade the version of Kubernetes running on the master nodes, it is your responsibility to test that applications deployed on the cluster are compatible with the new Kubernetes version. For example, before upgrading the existing cluster, you might create a new separate cluster with the new Kubernetes version to test your applications.
- The versions of Kubernetes running on the master nodes and the worker nodes must be compatible (that is, the Kubernetes version on the master nodes must be no more than two minor versions ahead of the Kubernetes version on the worker nodes). See the Kubernetes version skew support policy described in the Kubernetes documentation.
- If the version of Kubernetes currently running on the master nodes is more than one minor version behind the most recent supported version, you are given a choice of versions to upgrade to. To move to a Kubernetes version that is more than one minor version ahead of the version currently running on the master nodes, you must upgrade through each intermediate minor version in sequence, without skipping versions (as described in the Kubernetes documentation).
- To successfully upgrade master nodes in a cluster, the Kubernetes Dashboard service must be of type ClusterIP. If the Kubernetes Dashboard service is not of type ClusterIP (for example, if the service is of type NodePort), the upgrade fails. In this case, change the type of the Kubernetes Dashboard service back to ClusterIP (for example, by entering kubectl -n kube-system edit service kubernetes-dashboard and changing the type).
- Prior to Kubernetes version 1.14, Container Engine for Kubernetes created clusters with kube-dns as the DNS server. However, from Kubernetes version 1.14 onwards, Container Engine for Kubernetes creates clusters with CoreDNS as the DNS server. When you upgrade a cluster created by Container Engine for Kubernetes from an earlier version to Kubernetes 1.14 or later, the cluster's kube-dns server is automatically replaced with the CoreDNS server. Note that if you customized kube-dns behavior using the original kube-dns ConfigMap, those customizations are not carried forward to the CoreDNS ConfigMap. You will have to create and apply a new ConfigMap containing the customizations to override settings in the CoreDNS Corefile. For more information about upgrading to CoreDNS, see Configuring DNS Servers for Kubernetes Clusters.
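The Kubernetes Dashboard service type mentioned in the notes above can also be changed without opening an interactive editor; a sketch using kubectl patch:

```shell
# Change the kubernetes-dashboard service type back to ClusterIP
# non-interactively, instead of using kubectl edit.
kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec":{"type":"ClusterIP"}}'
```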
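To illustrate the general shape of a CoreDNS customization, the following sketch applies a ConfigMap carrying a Corefile fragment. The ConfigMap name, zone, and resolver address shown are placeholders, not the values the service expects; see Configuring DNS Servers for Kubernetes Clusters for the supported mechanism and settings.

```shell
# Illustrative only: apply a ConfigMap carrying CoreDNS customizations,
# here forwarding an internal zone to a custom upstream resolver.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    example.internal:53 {
        errors
        cache 30
        forward . 10.0.0.10
    }
EOF
```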