Upgrading an Oracle Linux Virtualization Manager Cluster to a Kubernetes Minor Release
Upgrade an Oracle Linux Virtualization Manager cluster to the next Kubernetes minor release.
For clusters running on Oracle Linux Virtualization Manager, and created using the olvm provider, upgrade a Kubernetes cluster to the next minor Kubernetes version when an OCK image becomes available for that version. This method uses the Kubernetes Cluster API to scale nodes in and out. To perform an in-place upgrade, where nodes aren't reprovisioned, use the steps in Upgrading to a Kubernetes Minor Release.
- Create an OCK image.
Create and upload a new OCK image that contains the updated Kubernetes version. For information on creating an OCK image, see Creating an OCK Image for the Oracle Linux Virtualization Manager Provider.
- Create a VM template.
Create a VM template that uses the new OCK image. This is used to create new VMs with the updated version of Kubernetes. For information on creating a VM template, see Creating an Oracle Linux Virtualization Manager VM Template.
- Edit the cluster configuration file.
Edit the cluster configuration file used to create the Kubernetes cluster. Change the vmTemplateName entries to match the new VM template name. Change this entry in both the controlPlaneMachine and workerMachine sections. For example:
providers:
  olvm:
    ...
    controlPlaneMachine:
      vmTemplateName: ock-1.32
    ...
    workerMachine:
      vmTemplateName: ock-1.32
    ...
- Set the location of the management cluster.
The management cluster contains the Kubernetes Cluster API controllers. The management cluster might be the same as the workload cluster (a self-managed cluster). The upgrade is performed using the management cluster.
Set the kubeconfig file location to the management cluster using an environment variable:
export KUBECONFIG=$(ocne cluster show --cluster-name cluster_name)
Replace cluster_name with the name of the management cluster.
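Before continuing, it can help to confirm that kubectl is talking to the intended cluster. One standard way is kubectl config current-context, which reads the KUBECONFIG exported above:

```shell
# Show which cluster context the exported KUBECONFIG selects.
kubectl config current-context
```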
- Set the target Kubernetes version.
Use the ocne cluster stage command to stage the target Kubernetes version. Use the configuration file used to create the workload cluster with this command. The syntax is:
ocne cluster stage [{-c|--config} path] [{-r|--os-registry} registry] [{-t|--transport} transport] {-v|--version} version
For more information on the syntax options, see Oracle Cloud Native Environment: CLI.
For example:
ocne cluster stage --version 1.32 --config mycluster.yaml
The output of this command prints important information. For example, the output might look similar to:
To update KubeadmControlPlane ocne-control-plane in olvm-cluster, run:
kubectl patch -n olvm-cluster kubeadmcontrolplane ocne-control-plane --type=json -p='[{"op":"replace","path":"/spec/version","value":"1.32.0"},{"op":"replace","path":"/spec/machineTemplate/infrastructureRef/name","value":"ocne-control-plane-1"}]'
To update MachineDeployment ocne-md-0 in olvm-cluster, run:
kubectl patch -n olvm-cluster machinedeployment ocne-md-0 --type=json -p='[{"op":"replace","path":"/spec/template/spec/version","value":"1.32.0"},{"op":"replace","path":"/spec/template/spec/infrastructureRef/name","value":"ocne-md-1"}]'
- Patch the KubeadmControlPlane for control plane nodes.
Use the kubectl patch command to update the KubeadmControlPlane. Use the command printed in the output of the ocne cluster stage command. For example:
kubectl patch -n olvm-cluster kubeadmcontrolplane ocne-control-plane --type=json -p='[{"op":"replace","path":"/spec/version","value":"1.32.0"},{"op":"replace","path":"/spec/machineTemplate/infrastructureRef/name","value":"ocne-control-plane-1"}]'
The control plane nodes are reprovisioned using the new OCK image that includes the new version of Kubernetes. This might take some time.
Tip:
Monitor new nodes being provisioned and old nodes being removed using:
kubectl --namespace namespace get machine
You can also monitor the VMs being created and destroyed using the Oracle Linux Virtualization Manager console.
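The rollout can also be followed live with kubectl's standard --watch flag. In this sketch, the olvm-cluster namespace matches the earlier examples and is an assumption; substitute the cluster's namespace:

```shell
# Stream Machine changes as old nodes are replaced by new ones.
kubectl --namespace olvm-cluster get machine --watch

# The KubeadmControlPlane resource also summarizes rollout progress.
kubectl --namespace olvm-cluster get kubeadmcontrolplane
```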
- Confirm control plane nodes are upgraded.
Confirm the control plane nodes are upgraded in the workload cluster. In a separate terminal, set the kubeconfig file location to the workload cluster using an environment variable:
export KUBECONFIG=$(ocne cluster show --cluster-name cluster_name)
Replace cluster_name with the name of the workload cluster.
List the nodes in the cluster.
kubectl get nodes
Confirm the VERSION column lists the new Kubernetes version number.
- Update the MachineDeployment for worker nodes.
In the terminal where the kubeconfig is set to the management cluster, use the kubectl patch command to update the MachineDeployment for worker nodes. Use the command printed in the output of the ocne cluster stage command. For example:
kubectl patch -n olvm-cluster machinedeployment ocne-md-0 --type=json -p='[{"op":"replace","path":"/spec/template/spec/version","value":"1.32.0"},{"op":"replace","path":"/spec/template/spec/infrastructureRef/name","value":"ocne-md-1"}]'
The worker nodes are reprovisioned using the new OCK image, with the new version of Kubernetes. This might take some time.
- Confirm worker nodes are upgraded.
Confirm all worker nodes are upgraded in the workload cluster. In the separate terminal, where the kubeconfig is set to the workload cluster, list the nodes in the cluster:
kubectl get nodes
Confirm the VERSION column lists the new Kubernetes version number.
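To check only the version of each node, kubectl's custom-columns output format (a standard kubectl feature) reduces the listing to the relevant fields:

```shell
# Print each node's name and kubelet version.
kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion
```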