Create a Cluster on OCI

Learn how to create a Kubernetes cluster on OCI.

When you create a cluster using the oci provider, the following occurs:

  1. The CLI detects if a bootstrap cluster is available. If not, it starts an ephemeral cluster to act as a bootstrap cluster.

  2. Any resources required to start the ephemeral cluster are fetched and installed.

  3. When the bootstrap cluster is available, the configured OCI compartment is checked for compatible Oracle Container Host for Kubernetes (OCK) compute images. If no OCK images are available, they're generated, uploaded to an OCI Object Storage bucket, and imported into the bootstrap cluster. The OCK image is converted from Qcow2 format to a bootable compute image and saved as a custom compute image.

  4. When this process is complete, all Kubernetes Cluster API providers are installed into the bootstrap cluster.

  5. When the Kubernetes Cluster API providers are started, the Kubernetes Cluster API resources are installed into the bootstrap cluster.

  6. When the bootstrap cluster is ready, the OCI cluster is created and set up, including the compute instances, networking, and a network load balancer.

  7. If the OCI cluster is set to be self-managed, the bootstrap cluster is deleted.
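
While this process runs, you can watch for the ephemeral bootstrap cluster from another terminal session. This is a minimal sketch, assuming the ocne cluster list subcommand is available in the CLI version you're using:

ocne cluster list

The ephemeral bootstrap cluster appears in the list while the OCI cluster is being provisioned.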

Creating an OCI Cluster

Create a Kubernetes cluster on OCI using the oci provider.

Creating a Kubernetes cluster on OCI using the oci provider requires that you first set up the localhost to create a bootstrap cluster using the libvirt provider, or have a cluster available to use as the bootstrap cluster. For information on the bootstrap cluster, see OCI Provider.

You must also install and configure the OCI CLI on the localhost to enable access to the OCI compartment where the cluster is created.
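
For example, this is a minimal sketch of configuring the OCI CLI and confirming it can read the target compartment. It assumes you have the compartment OCID and an API signing key available; replace the placeholder OCID with your own:

oci setup config
oci iam compartment get --compartment-id ocid1.compartment.oc1..uniqueID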

You can optionally use a cluster configuration file to specify cluster information such as the number of control plane and worker nodes, the cluster definition file that contains the Cluster API CRs, the Object Storage bucket name (if the default of ocne-images isn't used), or any number of other configuration options.

If the settings you need aren't available as cluster configuration file options, but are available in the Cluster API provider, create a cluster template file that contains the CRs for the cluster. You can then manually edit these CRs to set the Cluster API options, and include the template in the cluster configuration file.

Important:

The number of control plane nodes must be an odd number (1, 3, 5, and so on) to avoid split-brain scenarios with High Availability. The default is 1 control plane node and 0 worker nodes.

  1. Set up the localhost to provision clusters using the oci provider.
  2. (Optional) Set up a cluster configuration file.

    A cluster configuration file contains the cluster-specific information to use when creating the cluster. This file overrides any default configuration set in the $HOME/.ocne/defaults.yaml file (a sketch of setting a default is included at the end of this step). A cluster configuration file might include:

    provider: oci
    name: mycluster
    providers:
      oci:
        compartment: ocid1.compartment.oc1..uniqueID
        vcn: ocid1.vcn.oc1.uniqueID
        loadBalancer:
          subnet1: ocid1.subnet.oc1.uniqueID
          subnet2: ocid1.subnet.oc1.uniqueID

    Or a more complex cluster configuration file might include:

    provider: oci
    proxy:
      httpsProxy: http://myproxy.example.com:2138
      httpProxy: http://myproxy.example.com:2138
      noProxy: .example.com,127.0.0.1,localhost,169.254.169.254,10.96.0.0/12,10.244.0.0/16
    headless: true
    name: mycluster
    workerNodes: 3 
    controlPlaneNodes: 3
    providers:
      oci:
        profile: MYTENANCY
        selfManaged: false
        imageBucket: my-ocne-images
        compartment: ocid1.compartment.oc1..uniqueID
        vcn: ocid1.vcn.oc1.uniqueID
        loadBalancer:
          subnet1: ocid1.subnet.oc1.uniqueID
          subnet2: ocid1.subnet.oc1.uniqueID
        workerShape: 
          shape: VM.Standard.E4.Flex
          ocpus: 2
        controlPlaneShape: 
          shape: VM.Standard.E4.Flex
          ocpus: 2

    For information on cluster configuration files, see Cluster Configuration Files.
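
    If you routinely use the same provider, the following sketch shows how a default might be set in the $HOME/.ocne/defaults.yaml file mentioned earlier in this step. This assumes defaults.yaml accepts the same keys as a cluster configuration file, with the cluster configuration file taking precedence:

    provider: oci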

  3. (Optional) Set up a cluster template file.

    A cluster template contains the Kubernetes Cluster API CRs used to create a cluster. You can generate a template using the configuration in a cluster configuration file, and edit the resulting CRs to include options that aren't available in the cluster configuration file (see the example at the end of this step). For information on creating cluster templates, see Cluster API Templates.

    Tip:

    To configure the cluster to use an existing VCN in a compartment, see Using an Existing VCN.

    Include the path to the cluster template file in the cluster configuration file. For example, also include the option:

    clusterDefinition: /path/template.yaml
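
    For example, this sketch generates a template from a cluster configuration file, assuming the ocne cluster template subcommand described in Cluster API Templates accepts the same --config option as ocne cluster start:

    ocne cluster template --config myconfig.yaml > /path/template.yaml

    Edit the CRs in the resulting file before referencing it with the clusterDefinition option shown above.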
  4. Create the cluster.

    Use the ocne cluster start command to create a cluster. The syntax is:

    ocne cluster start 
    [{-u|--auto-start-ui} {true|false}]
    [{-o|--boot-volume-container-image} URI]
    [{-C|--cluster-name} name]
    [{-c|--config} path] 
    [{-n|--control-plane-nodes} integer] 
    [{-i|--key} path]
    [--load-balancer address]
    [{-P|--provider} provider]
    [{-s|--session} URI]
    [{-v|--version} version]
    [--virtual-ip IP]
    [{-w|--worker-nodes} integer]

    For more information on the syntax options, see Oracle Cloud Native Environment: CLI.

    For example:

    ocne cluster start --provider oci
    ocne cluster start --config myconfig.yaml
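
    When the command completes, you can optionally confirm that the nodes are ready. This is a minimal sketch that assumes the kubeconfig file is written to the default location shown in the next section, with mycluster as the cluster name:

    export KUBECONFIG=$HOME/.kube/kubeconfig.mycluster
    kubectl get nodes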

Monitoring a Cluster Installation

View the logs for the Kubernetes Cluster API pods to monitor the creation of a Kubernetes cluster on OCI.

You can monitor the deployment of a Kubernetes cluster on OCI by reviewing the logs of several Kubernetes Cluster API pods that are created in the ephemeral (bootstrap) cluster.

Tip:

If you set the cluster to be self-managed, the ephemeral cluster is deleted after the deployment succeeds. To view the logs after the ephemeral cluster is deleted, point the KUBECONFIG environment variable to the kubeconfig file of the cluster running on OCI, for example:

export KUBECONFIG=$HOME/.kube/kubeconfig.clustername
  1. Set the location of the kubeconfig file for the ephemeral cluster.

    In a separate terminal session, set the location of the ephemeral cluster's kubeconfig file. If you're using the defaults for an ephemeral cluster, this is:

    export KUBECONFIG=$HOME/.kube/kubeconfig.ocne-ephemeral.local
  2. View the events.

    Use the kubectl get events command to get information about the events in the namespace in which the cluster is created. The default namespace is ocne. For example:

    kubectl get events --namespace ocne
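
    To follow new events as they're generated during the deployment, you can optionally add the standard kubectl --watch flag:

    kubectl get events --namespace ocne --watch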
  3. View the capoci-controller-manager pod logs.

    Use the kubectl logs command to view the logs for the pod.

    Copy the command listed and press the Tab key to complete the full pod name.

    kubectl logs --namespace cluster-api-provider-oci-system capoci-controller-manager
  4. View the control-plane-capi-controller-manager pod logs.

    Use the kubectl logs command to view the logs for the pod.

    Copy the command listed and press the Tab key to complete the full pod name.

    kubectl logs --namespace capi-kubeadm-control-plane-system control-plane-capi-controller-manager
  5. View the bootstrap-capi-controller-manager pod logs.

    Use the kubectl logs command to view the logs for the pod.

    Copy the command listed and press the Tab key to complete the full pod name.

    kubectl logs --namespace capi-kubeadm-bootstrap-system bootstrap-capi-controller-manager
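
Tip:

Instead of pressing the Tab key to complete a pod name, you can pass the controller Deployment name to kubectl logs, which selects one of the Deployment's pods. This sketch assumes the Deployment names match the pod name prefixes shown in the previous steps:

kubectl logs --namespace cluster-api-provider-oci-system deployment/capoci-controller-manager
kubectl logs --namespace capi-kubeadm-control-plane-system deployment/control-plane-capi-controller-manager
kubectl logs --namespace capi-kubeadm-bootstrap-system deployment/bootstrap-capi-controller-manager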