Upgrade the Nodes
Upgrade Kubernetes nodes from Release 1 to Release 2.
For each node in the cluster, perform the following steps to upgrade from Release 1 to Release 2.
Important:
Control plane nodes must be upgraded first, then worker nodes.
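Control plane nodes in kubeadm-based clusters carry the node-role.kubernetes.io/control-plane label, so you can list them to plan the upgrade order. A minimal sketch, assuming the standard kubeadm label:
kubectl get nodes -l node-role.kubernetes.io/control-plane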
Preparing Nodes
Remove the Kubernetes nodes from the cluster, and add them back using the Release 2 Ignition configuration.
- Set the location of the kubeconfig file.
Set the location of the Release 1 kubeconfig file as the KUBECONFIG environment variable. For example:
export KUBECONFIG=~/.kube/kubeconfig.mycluster
Important:
The Kubernetes configuration file must be saved with the name of the Release 1 cluster (the Kubernetes Module name).
- Find the node name.
Find the name of the node to upgrade in the cluster.
kubectl get nodes
- Set the node name.
Set the node name as an environment variable.
export TARGET_NODE=nodename
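For example, using a hypothetical node name taken from the kubectl get nodes output:
export TARGET_NODE=ocne-worker-1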
- Drain the node.
Use the kubectl drain command to drain the node.
For control plane nodes, use:
kubectl drain $TARGET_NODE --ignore-daemonsets
For worker nodes, use:
kubectl drain $TARGET_NODE --ignore-daemonsets --delete-emptydir-data
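To confirm the drain completed and only DaemonSet-managed pods remain on the node, you can list the pods still scheduled there. A sketch using a standard kubectl field selector:
kubectl get pods --all-namespaces --field-selector spec.nodeName=$TARGET_NODE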
- Reset the node.
Use the ocne cluster console command to reset the node. The syntax to use is:
ocne cluster console [{-d|--direct}] {-N|--node} nodename [{-t|--toolbox}] [-- command]
For more information on the syntax options, see Oracle Cloud Native Environment: CLI.
For example:
ocne cluster console --node $TARGET_NODE --direct -- kubeadm reset -f
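Per the syntax above, the trailing command is optional; omitting it opens an interactive console on the node, which can be useful for inspecting the node before or after the reset. For example:
ocne cluster console --node $TARGET_NODE --direct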
- Add the node back into the cluster.
- If the node is a control plane node:
When adding a control plane node, two things must be created: an encrypted certificate bundle and a join token.
Run the following command to connect to a control plane node's OS console and create the certificate bundle:
ocne cluster console --node control_plane_name --direct -- kubeadm init phase upload-certs --certificate-key certificate-key --upload-certs
Replace control_plane_name with a control plane node that's running in the Release 1 cluster.
Important:
This isn't the target node, but a separate control plane node that's used to run the kubeadm init command.
Replace certificate-key with the output displayed when the Ignition information for control plane nodes was generated using the ocne cluster join command.
Run the following command to create a join token:
ocne cluster console --node control_plane_name --direct -- kubeadm token create token
Replace control_plane_name with a control plane node that's running in the Release 1 cluster.
Replace token with the output displayed when the Ignition information for control plane nodes was generated using the ocne cluster join command.
Important:
If the token was generated more than 24 hours prior, it has likely expired, and you must regenerate the Ignition files, which also generates a fresh token.
Use the kubectl get nodes command to confirm the control plane node is added to the cluster. This might take a few moments.
kubectl get nodes
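As a recap, a hedged end-to-end example with hypothetical values: ocne-control-plane-1 stands in for an existing Release 1 control plane node, and the certificate key and token shown are placeholders in the formats kubeadm expects; substitute the real values from the ocne cluster join output.
ocne cluster console --node ocne-control-plane-1 --direct -- kubeadm init phase upload-certs --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 --upload-certs
ocne cluster console --node ocne-control-plane-1 --direct -- kubeadm token create abcdef.0123456789abcdef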
- If the node is a worker node:
When adding a worker node, a join token must be created. Run the following command to connect to a control plane node's OS console to perform this step:
ocne cluster console --node control_plane_name --direct -- kubeadm token create token
Replace control_plane_name with a control plane node that's running in the Release 1 cluster.
Replace token with the output displayed when the Ignition information for worker nodes was generated using the ocne cluster join command.
Important:
If the token was generated more than 24 hours prior, it has likely expired, and you must regenerate the Ignition files, which also generates a fresh token.
Use the kubectl get nodes command to confirm the worker node is added to the cluster. This might take a few moments.
kubectl get nodes
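To check whether an existing join token has expired before regenerating the Ignition files, you can list the tokens the cluster knows about. A sketch, assuming ocne-control-plane-1 is a running Release 1 control plane node (kubeadm token list is a standard kubeadm subcommand; the EXPIRES column shows each token's expiry):
ocne cluster console --node ocne-control-plane-1 --direct -- kubeadm token list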
Replacing the Boot Volume
Replace the boot volume on the instance with the custom image created when the OCK image for Release 2 was loaded. This completes the upgrade of the Kubernetes node.
For more information about replacing a boot volume, see the OCI documentation.
- Navigate to the Replace Boot Volume page.
Sign in to the OCI console, and navigate to the Replace Boot Volume page for the instance.
- Replace the boot volume.
Select the Image option in the Replace by field.
Select the Select from a list option in the Apply boot volume by field.
Select the custom image from the Select image drop-down list.
Set the Boot volume size (GB) to at least 50 GB.
Select the Show advanced options section, then select Metadata.
Enter user_data in the Name field, and paste the base64 encoded Ignition information into the Value field. This is the content of the base64 Ignition file for the appropriate node type, either a control plane or a worker node, generated in Creating Ignition Files.
Click Replace.
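If the Ignition content generated in Creating Ignition Files isn't already base64 encoded, you can encode it yourself before pasting it into the Value field. A minimal sketch, assuming GNU coreutils and a hypothetical file name worker.ign (-w0 disables line wrapping; on macOS, use base64 -i worker.ign instead):
base64 -w0 worker.ign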
- Reboot the instance.
If the instance is running, it shuts down and reboots using the new boot volume. If the instance is stopped, start the instance to boot it using the new boot volume.
The instance boots using the new boot volume. The instance is joined into the cluster using Ignition information contained in the boot volume.
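To observe the instance rejoining the cluster as it boots from the new volume, you can watch the node list update:
kubectl get nodes --watch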
Uncordon Nodes
Uncordon the Kubernetes nodes to enable cluster workloads to run.
- Find the node name.
Find the name of the node to uncordon.
kubectl get nodes
- Uncordon the node.
Use the kubectl uncordon command to uncordon the node.
kubectl uncordon node_name
For example:
kubectl uncordon ocne-control-plane-1
- Verify the node is available.
Use the kubectl get nodes command to confirm the STATUS column is set to Ready.
kubectl get nodes
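To wait for the node to report Ready without polling manually, a sketch using the standard kubectl wait subcommand, with a hypothetical node name and an adjustable timeout:
kubectl wait --for=condition=Ready node/ocne-control-plane-1 --timeout=300s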
Validating the Node Upgrade
Validate a node is running the Release 2 OS.
- List the nodes in the cluster.
List the nodes in the Kubernetes cluster to ensure all expected nodes are listed.
kubectl get nodes
- Show information about the node.
Use the ocne cluster info command to display information about the node. The syntax is:
ocne cluster info [{-N|--nodes}] nodename, ... [{-s|--skip-nodes}]
For more information on the syntax options, see Oracle Cloud Native Environment: CLI.
For example:
ocne cluster info --nodes ocne-control-plane-1
- Validate the node information.
The node is running the Release 2 image if the output of the ocne cluster info command looks similar to:
Node: ocne-control-plane-1
  Registry and tag for ostree patch images:
    registry: container-registry.oracle.com/olcne/ock-ostree
    tag: 1.29
    transport: ostree-unverified-registry
  Ostree deployments:
      ock 5d6e86d05fa0b9390c748a0a19288ca32bwer1eac42fef1c048050ce03ffb5ff9.1 (staged)
    * ock 5d6e86d05fa0b9390c748a0a19288ca32bwer1eac42fef1c048050ce03ffb5ff9.0
The OSTree based image information is displayed in the output.
The node isn't running the Release 2 image if the output is missing this information, and looks similar to:
Node: ocne-control-plane-2
  Registry and tag for ostree patch images:
  Ostree deployments:
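To validate every node in one pass, a short shell sketch that loops over the node names reported by kubectl, assuming the same KUBECONFIG and ocne binary used in the earlier steps:
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  echo "== ${node} =="
  ocne cluster info --nodes "${node}"
done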