Creating Worker Nodes with Updated Properties

Find out about the different ways to update worker node properties using Container Engine for Kubernetes (OKE).

Note

This section applies to managed nodes only.

You use Container Engine for Kubernetes to set the properties of worker nodes in a cluster. When your worker node property requirements change, you can add a new node pool with the required worker node properties (see Adding and Removing Node Pools to Scale Clusters Up and Down). Alternatively, you can modify an existing node pool so that new worker nodes starting in the node pool are created with the modified properties (see Modifying Node Pool and Worker Node Properties).

For example, you might want all the managed nodes in a managed node pool to run a new Oracle Linux image. You can add a new managed node pool with its Image property set to the new Oracle Linux image, or you can modify an existing managed node pool and set its Image property to the new Oracle Linux image.
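
For example, a minimal sketch of modifying an existing node pool's image using the OCI Python SDK might look like the following. The OCIDs are placeholders, and the model and method names should be confirmed against the SDK version you are using:

```python
import oci

# Load the default OCI configuration (~/.oci/config) and create a client
# for the Container Engine for Kubernetes service.
config = oci.config.from_file()
ce_client = oci.container_engine.ContainerEngineClient(config)

# Placeholder OCIDs - replace with the node pool and image you want to use.
node_pool_id = "ocid1.nodepool.oc1..exampleuniqueID"
new_image_id = "ocid1.image.oc1..exampleuniqueID"

# Update the node pool so that new worker nodes use the new Oracle Linux image.
# Existing worker nodes keep running the previous image until they are replaced.
update_details = oci.container_engine.models.UpdateNodePoolDetails(
    node_source_details=oci.container_engine.models.NodeSourceViaImageDetails(
        image_id=new_image_id
    )
)
response = ce_client.update_node_pool(node_pool_id, update_details)
print("Work request:", response.headers.get("opc-work-request-id"))
```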

Note that if you simply change the existing managed node pool's Image property to the new Oracle Linux image, only new managed nodes that start in the node pool run the new image. Existing managed nodes continue to run the previous Oracle Linux image. However, you can replace existing worker nodes with nodes that have the updated properties (such as Image) in the following ways:

  • Perform an 'in-place' update, by updating node pool properties and then cycling the nodes to automatically replace all existing worker nodes. First, you modify the existing node pool's worker node properties (for example, by changing the Image property of the existing managed node pool to a more recent Oracle Linux image). Then, you cycle the nodes in the node pool, specifying both the maximum number of additional nodes that can be created during the operation and the maximum number of nodes that can be unavailable. Container Engine for Kubernetes automatically cordons, drains, and terminates existing worker nodes, and creates new worker nodes. New worker nodes started in the existing node pool have the updated properties you specified. See Performing an In-Place Worker Node Update by Cycling Nodes in an Existing Node Pool. The first sketch after this list illustrates this approach.
  • Perform an 'in-place' update, by updating node pool properties and then manually replacing each existing worker node with a new worker node. First, you modify the existing node pool's worker node properties (for example, by changing the Image property of the existing managed node pool to a more recent Oracle Linux image). Then, you delete each worker node in turn, selecting appropriate cordon and drain options to prevent new pods from starting and to delete existing pods, and start a new worker node to take the place of each worker node you delete. New worker nodes started in the existing node pool have the updated properties you specified. See Performing an In-Place Worker Node Update by Manually Replacing Nodes in an Existing Node Pool. The second sketch after this list illustrates this approach.
  • Perform an 'out-of-place' update, by replacing the original node pool with a new node pool. First, you create a new node pool and set worker node properties as required (for example, by setting the Image property of the new managed node pool to the required Oracle Linux image). Then, you drain existing worker nodes in the original node pool to prevent new pods from starting and to delete existing pods. Finally, you delete the original node pool. New worker nodes started in the new node pool have the properties you specified. See Performing an Out-of-Place Worker Node Update by Replacing an Existing Node Pool with a New Node Pool. The third sketch after this list illustrates this approach.
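
The following is a minimal sketch of the first approach (in-place update with node cycling), again using the OCI Python SDK. The node_pool_cycling_details field and the NodePoolCyclingDetails model (with maximum_surge and maximum_unavailable) are assumptions based on recent SDK versions, and the OCIDs are placeholders:

```python
import oci

config = oci.config.from_file()
ce_client = oci.container_engine.ContainerEngineClient(config)

node_pool_id = "ocid1.nodepool.oc1..exampleuniqueID"  # placeholder
new_image_id = "ocid1.image.oc1..exampleuniqueID"     # placeholder

# Update the worker node properties and enable node cycling in the same call.
# Container Engine for Kubernetes then cordons, drains, and terminates existing
# worker nodes and creates replacements with the updated properties, staying
# within the surge and unavailability limits specified below.
update_details = oci.container_engine.models.UpdateNodePoolDetails(
    node_source_details=oci.container_engine.models.NodeSourceViaImageDetails(
        image_id=new_image_id
    ),
    node_pool_cycling_details=oci.container_engine.models.NodePoolCyclingDetails(
        is_node_cycling_enabled=True,
        maximum_surge="1",        # at most one extra node at a time (assumed field)
        maximum_unavailable="0",  # no node removed before its replacement is ready (assumed field)
    ),
)
response = ce_client.update_node_pool(node_pool_id, update_details)
print("Work request:", response.headers.get("opc-work-request-id"))
```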
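
A sketch of the second approach (in-place update with manual replacement) might use the delete node operation, which can cordon and drain a worker node before terminating it. The is_decrement_size and override_eviction_grace_duration parameters shown here are assumptions to be confirmed against your SDK version; because the node pool size is not decremented, the node pool starts a replacement node with the updated properties:

```python
import oci

config = oci.config.from_file()
ce_client = oci.container_engine.ContainerEngineClient(config)

node_pool_id = "ocid1.nodepool.oc1..exampleuniqueID"  # placeholder
node_id = "ocid1.instance.oc1..exampleuniqueID"       # placeholder worker node

# Delete one worker node. The node is cordoned and drained before it is
# terminated; override_eviction_grace_duration caps how long draining may take.
# Because is_decrement_size=False, the node pool keeps its size and starts a
# replacement node that picks up the updated node pool properties.
response = ce_client.delete_node(
    node_pool_id,
    node_id,
    is_decrement_size=False,
    override_eviction_grace_duration="PT30M",  # ISO 8601 duration (assumed format)
)
print("Work request:", response.headers.get("opc-work-request-id"))
```

Repeat the call for each worker node in turn, waiting for workloads to reschedule before deleting the next node.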
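
A sketch of the third approach (out-of-place update) creates a replacement node pool, leaves the draining of the original worker nodes to kubectl, and then deletes the original node pool. All OCIDs, the shape, the Kubernetes version, and the availability domain are placeholders:

```python
import oci

config = oci.config.from_file()
ce_client = oci.container_engine.ContainerEngineClient(config)

compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder
cluster_id = "ocid1.cluster.oc1..exampleuniqueID"          # placeholder
subnet_id = "ocid1.subnet.oc1..exampleuniqueID"            # placeholder
new_image_id = "ocid1.image.oc1..exampleuniqueID"          # placeholder
old_node_pool_id = "ocid1.nodepool.oc1..exampleuniqueID"   # placeholder

# 1. Create the replacement node pool with the required worker node properties.
create_details = oci.container_engine.models.CreateNodePoolDetails(
    compartment_id=compartment_id,
    cluster_id=cluster_id,
    name="pool-with-new-image",
    kubernetes_version="v1.29.1",   # placeholder version
    node_shape="VM.Standard2.1",    # placeholder shape
    node_source_details=oci.container_engine.models.NodeSourceViaImageDetails(
        image_id=new_image_id
    ),
    node_config_details=oci.container_engine.models.CreateNodePoolNodeConfigDetails(
        size=3,
        placement_configs=[
            oci.container_engine.models.NodePoolPlacementConfigDetails(
                availability_domain="Uocm:PHX-AD-1",  # placeholder availability domain
                subnet_id=subnet_id,
            )
        ],
    ),
)
ce_client.create_node_pool(create_details)

# 2. Drain the worker nodes in the original node pool, for example:
#      kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
#    so that no new pods start on them and existing pods are evicted.

# 3. Delete the original node pool once its workloads have moved.
ce_client.delete_node_pool(old_node_pool_id)
```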

Note that in all cases:

  • Special considerations apply when updating the Kubernetes version running on worker nodes in a node pool. Instead of following the instructions in this section, follow the instructions in Upgrading Clusters to Newer Kubernetes Versions.
  • Existing worker nodes in the original node pool must be drained before they are terminated. If the worker nodes are not drained, workloads running on the cluster are subject to disruption.