Migrate Cluster Nodes
Describes migrating a node between Kubernetes clusters using the byo provider.
Nodes in a Kubernetes cluster can be migrated from one cluster to another.
Migrating a node from one cluster to another is most useful when it's not feasible to coordinate adding the key material necessary to join a node to an existing cluster at the same time the host is provisioned. In these cases, the easiest path forward is to create a single node cluster and move the node to the target cluster. In this way, it's possible to stitch together several small clusters into a single, larger cluster.
Note:
A host running the OCK image always includes a Kubernetes cluster, even if it's a single node cluster.
Migrating nodes between clusters serves two use cases. Nodes can be reallocated to adjust cluster capacity based on short-term requirements, without the need to fully reprovision the node. Or, nodes can be provisioned and added to a cluster at a later time.
Migrating nodes between clusters might be useful where infrastructure management schemes offer no service level agreement between the time a request to provision a system, or set of systems, is created and the time that request is fulfilled. For example, a cluster administrator requires 8 nodes and the IT department fulfills the request on an unknown timetable, handing over the access information after the nodes are available. The administrator isn't necessarily told when the resources are provisioned, or even provided with a timetable. This makes it difficult to guarantee that any key material required for a node to join a cluster (certificate keys and join tokens) is still valid when the new systems first boot. Even if coordination were possible, the IT department would also need to understand Kubernetes cluster management well enough to perform any manual cluster provisioning. This isn't a feasible workflow for the typical Bring Your Own (BYO) cluster.
To solve the use case described, the intended path is to ask the IT department to provision some number of nodes. Each of those systems boots automatically as a single node cluster. The administrator then gathers these nodes and builds the cluster topology they require by migrating each node into the larger cluster.
Two Kubernetes clusters are required. While they can be installed using any method, the intended use case is to either boot a host with OCK installed, but unconfigured, or to merge two BYO clusters.
You can migrate nodes with similar configurations, and Kubernetes versions, between clusters. The Kubernetes versions on the source and target clusters don't need to be identical, but they must be within one minor release of each other: the same minor Kubernetes release, or the previous minor release. For example, both clusters might run Kubernetes Release 1.31.x, or one might run one minor release earlier, such as Release 1.30.x.
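For example, a quick way to confirm the versions are close enough is to compare the VERSION column that kubectl reports for the nodes in each cluster. The kubeconfig paths shown here are placeholders:

kubectl get nodes --kubeconfig $HOME/.kube/kubeconfig.source
kubectl get nodes --kubeconfig $HOME/.kube/kubeconfig.target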
The ocne cluster join command is used to migrate a node from one cluster to another. You provide the source and target cluster configuration information (the kubeconfig file for each cluster), and the name of the node to migrate. The node name must match the name displayed by the kubectl get nodes command, and it isn't changed when the node is migrated.
Nodes are migrated as worker nodes, unless you specify the --role-control-plane option of the ocne cluster join command.
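As a sketch, a migration that joins a node to the target cluster as a control plane node might look like the following. The kubeconfig paths and node name are placeholders, and the --destination and --node option names are assumptions not confirmed by this text; verify them with ocne cluster join --help:

# Option names --destination and --node are assumed; verify with --help
ocne cluster join --kubeconfig $HOME/.kube/kubeconfig.source --destination $HOME/.kube/kubeconfig.target --node mynode --role-control-plane

Omit --role-control-plane to migrate the node as a worker node.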
You can also use the ocne cluster join command to generate the Ignition information needed to join a node to a BYO cluster. For example:
ocne cluster join --kubeconfig $HOME/.kube/kubeconfig.mycluster --config byo.yaml > worker.ign
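The generated Ignition file (worker.ign in this example) is then supplied to the host when it's provisioned, using whatever mechanism the platform provides for passing Ignition configuration to a booting system.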
Important:
Migrating the last control plane node in a cluster destroys that cluster. Ensure you migrate, or remove, all worker nodes first.
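If you're unsure which nodes are control plane nodes, the ROLES column of the kubectl get nodes output identifies them. The kubeconfig path shown is a placeholder:

kubectl get nodes --kubeconfig $HOME/.kube/kubeconfig.source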
Migrating a Cluster Node
Migrate a Kubernetes cluster node, created with the byo provider, to another cluster.
Migrating a node from one cluster to another requires the kubeconfig file for each cluster, and the name of the node to migrate. The node name must match the name displayed by the kubectl get nodes command.
Set the location of the source cluster using the --kubeconfig command option. This option is required for this command.
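Putting the steps together, a minimal sketch of migrating a worker node might look like the following. The kubeconfig paths and node name are placeholders, and the --destination and --node option names are assumptions not confirmed by this text; verify them with ocne cluster join --help:

# List the nodes in the source cluster to find the node name
kubectl get nodes --kubeconfig $HOME/.kube/kubeconfig.source

# Migrate the node to the target cluster as a worker node
# (--destination and --node are assumed option names)
ocne cluster join --kubeconfig $HOME/.kube/kubeconfig.source --destination $HOME/.kube/kubeconfig.target --node mynode

After the command completes, confirm the node appears in the target cluster:

kubectl get nodes --kubeconfig $HOME/.kube/kubeconfig.target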