Creating a Bring Your Own Cluster

Create a Kubernetes cluster using the byo provider.

These steps provide a high-level overview of the process to create a Kubernetes cluster using the byo provider. Many options are available to perform each step, so detailed instructions aren't provided. You decide which methods and options to use for many of these steps.

Important:

The steps and commands shown here are provided as examples only and must be adapted to suit a specific deployment.

  1. (Optional) Set the location of the kubeconfig file for an existing cluster.

    A Kubernetes cluster is required to perform some steps. You can use an existing cluster for this purpose by setting the location of the kubeconfig file.

    You can set this using the KUBECONFIG environment variable, or using the --kubeconfig option with ocne commands. You could also set this in a configuration file.

    If you don't set the location of the kubeconfig file, an ephemeral cluster is created using the libvirt provider when required.
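
    For example, a minimal sketch that points the ocne command line at an existing cluster, assuming a kubeconfig file at a hypothetical path:

    export KUBECONFIG=$HOME/.kube/kubeconfig.mycluster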

  2. (Optional) Set up the libvirt provider.

    If you don't set the location of an existing Kubernetes cluster, set up the localhost to provision ephemeral clusters using the libvirt provider. For information on setting up the libvirt provider, see Setting Up the libvirt Provider.

  3. Prepare the automated Oracle Linux installation.

    Decide on the method to perform an automated installation of Oracle Linux on the hosts using a Kickstart file. For example, you might use a network drive, a web server, or a USB drive. You only need to provision the kernel and the initrd (initial RAM disk) that matches the boot kernel.

    We recommend using an Oracle Linux UEK boot ISO file as the boot media because it contains the required kernel and initrd in a smaller file than the full installation ISO. Download Oracle Linux ISO files from the Oracle Linux yum server.

    Prepare the Oracle Linux boot media using the method you select.

    For more information about the automated installation options for Oracle Linux, see Oracle Linux 9: Installing Oracle Linux or Oracle Linux 8: Installing Oracle Linux.
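
    For example, if you choose a USB drive as the boot media, you might write a downloaded boot ISO to it using dd. The ISO file name and target device shown here are placeholders; confirm the device name carefully before writing to it:

    sudo dd if=OracleLinux-boot-uek.iso of=/dev/sdX bs=4M status=progress conv=fsync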

  4. Create an OSTree archive container image.

    Use the ocne image create command to generate an OSTree archive container image, then use the ocne image upload command to upload it to a location where it can be accessed during the installation. For information on creating an OSTree image and uploading it to a container registry, see Creating an OSTree Image for the Bring Your Own Provider.

    If you don't have a container registry, you might prefer to load the OSTree archive image into a local container runtime. For example, to load an OSTree archive file for an arm64 image into Podman on the localhost, you might use:

    podman load < $HOME/.ocne/images/ock-1.31-arm64-ostree.tar
  5. Create a container to serve the OSTree image.

    You can use any container runtime, including one running on a Kubernetes cluster. For example, to use an image loaded into Podman on the localhost, you might use:

    podman run -d --name ock-content-server -p 8080:80 localhost/ock-ostree:latest
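
    You can check that the content server responds before relying on it during the installation. This sketch assumes the OSTree repository is exposed under the /ostree path, matching the Kickstart example later in this procedure:

    curl -sf http://localhost:8080/ostree/config
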
  6. Set up the location of Ignition files.

    Decide how you want to make the Kubernetes cluster Ignition files available.

    An Ignition file must be available to all hosts during their first boot. Ignition files can be served using any of the platforms listed in the upstream Ignition documentation, for example, a Network File System (NFS) share or a web server. You could also embed the Ignition configuration directly onto the root file system of the host if the installation is done reasonably close to when the Ignition configuration was generated.
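
    For example, one way to serve Ignition files over HTTP is to run a web server container with the files mounted read-only. This is a sketch only; the directory, port, and container name are illustrative:

    podman run -d --name ignition-server -p 8000:80 -v /srv/ignition:/usr/share/nginx/html:ro,Z docker.io/library/nginx:latest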

  7. Create a Kickstart file.

    A Kickstart file defines an automated installation of Oracle Linux. Include in the Kickstart file the information needed to use the OSTree image during the installation. For information on creating a Kickstart file, see Oracle Linux 9: Installing Oracle Linux or Oracle Linux 8: Installing Oracle Linux.

    The Kickstart file must be made available during the installation. It might be useful to place the Kickstart file in the same location as the Ignition file, for example, on an NFS share or a web server.

    The Kickstart file must include the OSTree image information and, if you aren't embedding the Ignition configuration on the root file system, the location of the Ignition file. The Ignition file is created later in this procedure. For example, a bare metal installation might use something similar to:

    ...
    services --enabled=ostree-remount
    bootloader --append "rw ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.config.url=http://myhost.example.com/ignition.ign ignition.firstboot=1"
    ostreesetup --nogpg --osname ock --url http://myregistry.example.com/ostree --ref ock
    %post
    %end

    For information about OSTree, see the upstream OSTree documentation.

  8. Create a Kubernetes cluster configuration file.

    Generate a cluster configuration file that defines the Kubernetes cluster to create. Ensure the provider is set to byo. In this example, a virtual IP of 192.168.122.230 is used. The virtual IP must be an unused IP address on the network and is used for the Kubernetes cluster API server.

    For example:

    provider: byo
    name: byocluster
    virtualIp: 192.168.122.230
    providers:
      byo:
        networkInterface: enp1s0

    For information on what can be included in the cluster configuration file, see Cluster Configuration Files.

  9. Generate and expose the Kubernetes cluster Ignition information.

    The byo provider doesn't provision any infrastructure resources. Unlike other providers, which create Kubernetes cluster nodes automatically, the byo provider generates Ignition configuration appropriate to the node type and cluster configuration.

    Use the ocne cluster start command to generate the Ignition information that starts the first control plane node. The syntax is:

    ocne cluster start 
    [{-u|--auto-start-ui} {true|false}]
    [{-o|--boot-volume-container-image} URI]
    [{-C|--cluster-name} name]
    [{-c|--config} path] 
    [{-n|--control-plane-nodes} integer] 
    [{-i|--key} path]
    [--load-balancer address]
    [{-P|--provider} provider]
    [{-s|--session} URI]
    [{-v|--version} version]
    [--virtual-ip IP]
    [{-w|--worker-nodes} integer]

    For more information on the syntax options, see Oracle Cloud Native Environment: CLI.

    Use the cluster configuration file with this command, and save the output to an Ignition file. For example:

    ocne cluster start --config myconfig.yaml > ignition.ign

    Tip:

    The Ignition file can be inspected using the jq utility, for example:

    jq . < ignition_file.ign
    {
      "ignition": {
        "config": {
          "replace": {
            "verification": {}
          }
        },
        "proxy": {},
        "security": {
          "tls": {}
    ...

    Expose the Ignition file so it's available during the installation.
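
    For example, if the Ignition file is served over HTTP at the URL used in the Kickstart example earlier in this procedure, you might confirm it's reachable and well formed:

    curl -sf http://myhost.example.com/ignition.ign | jq .ignition.version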

  10. Boot the first control plane node.

    Install an Oracle Linux host using the Kickstart file. The installation uses the Oracle Linux boot ISO, the OSTree image, and the Kubernetes cluster Ignition file to set up the host. This host is used as the first control plane node to start the Kubernetes cluster.
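
    For example, when booting the host from the Oracle Linux boot media, you can point the installer at the Kickstart file by appending the inst.ks option to the kernel command line at the boot menu. The URL and file name are placeholders for wherever you serve the Kickstart file:

    inst.ks=http://myhost.example.com/ks.cfg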

  11. Start the Kubernetes cluster with the control plane node.

    Use the ocne cluster start command to start the Kubernetes cluster and install any configured software into the cluster. The syntax is:

    ocne cluster start 
    [{-u|--auto-start-ui} {true|false}]
    [{-o|--boot-volume-container-image} URI]
    [{-C|--cluster-name} name]
    [{-c|--config} path] 
    [{-n|--control-plane-nodes} integer] 
    [{-i|--key} path]
    [--load-balancer address]
    [{-P|--provider} provider]
    [{-s|--session} URI]
    [{-v|--version} version]
    [--virtual-ip IP]
    [{-w|--worker-nodes} integer]

    For more information on the syntax options, see Oracle Cloud Native Environment: CLI.

    Use the cluster configuration file to start the cluster. For example:

    ocne cluster start --config myconfig.yaml

    The control plane node is used to start the cluster and to install the UI, the application catalog, and any applications. The control plane node is now a single-node Kubernetes cluster, configured as specified in the cluster configuration file.

  12. Confirm the control plane node is added to the cluster.

    Use the kubectl get nodes command to confirm the control plane node is added to the cluster. This might take a few moments.

    kubectl get nodes

    For information on installing kubectl and setting up the kubeconfig file, see Connecting to a Cluster.
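
    For example, assuming the cluster's kubeconfig file was written to the path used elsewhere in this procedure:

    export KUBECONFIG=$HOME/.kube/kubeconfig.byocluster
    kubectl get nodes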

  13. Generate and expose an Ignition file for a worker node.

    Use the ocne cluster join command to generate the Ignition information that joins a worker node to the cluster. The syntax is:

    ocne cluster join 
    [{-c|--config} path] 
    [{-d|--destination} path]
    [{-N|--node} name]
    [{-P|--provider} provider]
    [{-r|--role-control-plane}]

    For more information on the syntax options, see Oracle Cloud Native Environment: CLI.

    Use the cluster configuration file and save the output to a file. For example:

    ocne cluster join --kubeconfig $HOME/.kube/kubeconfig.byocluster --config myconfig.yaml > mycluster-join-w.ign

    Important:

    Set the location of the kubeconfig file for the BYO cluster using the --kubeconfig option. This option is required for this command.

    This command generates a token to join the cluster and displays the command used to create that token. The token is included in the Ignition file. You create the token on the cluster in the next step so the worker node can join.

    Expose the Ignition file using the same method you used for the first control plane node. You can either overwrite the Ignition file for the first control plane node, or edit the Kickstart file to set the location of the worker node Ignition file.

  14. Create a Kubernetes bootstrap token for a worker node.

    Use the token printed from the ocne cluster join command in the previous step to join the worker node to the cluster.

    echo "chroot /hostroot kubeadm token create token" | ocne cluster console --node node_name

    This command connects to the control plane node's console using the ocne cluster console command, and creates the token in the single node cluster that's running on that node. The token is displayed in the output.

    Tip:

    You can reuse this bootstrap token to add more nodes within the expiration time that Kubernetes allocates to the token, or you can create a new token for each node.
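
    For example, to check which bootstrap tokens exist and when they expire, you might run kubeadm token list through the same console mechanism, where node_name is the name of the control plane node:

    echo "chroot /hostroot kubeadm token list" | ocne cluster console --node node_name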

  15. Boot the worker node.

    Install an Oracle Linux host using the Kickstart file. This host is to be used as the first worker node in the Kubernetes cluster.

  16. Confirm the worker node is added to the cluster.

    Use the kubectl get nodes command to confirm the worker node is added to the cluster. This might take a few moments.

    kubectl get nodes
  17. Generate and expose an Ignition file for a second control plane node.

    Use the ocne cluster join command to generate the Ignition information that joins a control plane node to the cluster. For example:

    ocne cluster join --kubeconfig $HOME/.kube/kubeconfig.byocluster --role-control-plane --config myconfig.yaml > mycluster-join-cp.ign

    An encrypted certificate bundle and token to join the cluster are generated and displayed by this command. The token is included in the Ignition file. You use this token and certificate bundle to join the control plane node to the cluster in the next step.

    Expose the Ignition file.

  18. Create a Kubernetes bootstrap token for the second control plane node.

    When adding a control plane node, two items must be created in the cluster: a join token and an encrypted certificate bundle. The values for both were generated by the ocne cluster join command in the previous step.

    Create the certificate bundle:

    echo "chroot /hostroot kubeadm init phase upload-certs --certificate-key certificate-key --upload-certs" | ocne cluster console --node node_name

    This command connects to the control plane node's console using the ocne cluster console command, and uploads the encrypted certificate bundle to the cluster.

    Create the join token:

    echo "chroot /hostroot kubeadm token create token" | ocne cluster console --node node_name
  19. Boot the second control plane node.

    Install an Oracle Linux host using the Kickstart file. This host is to be used as the second control plane node in the Kubernetes cluster.

  20. Confirm the control plane node is added to the cluster.

    Use the kubectl get nodes command to confirm the control plane node is added to the cluster. This might take a few moments.

    kubectl get nodes

    While control plane nodes are joining the cluster, there might be periodic errors reported by kubectl as control plane components adapt to the new node. These errors stop after a few seconds if the node is correctly added to the cluster.

  21. Repeat the process to add worker or control plane nodes as needed.