Installing a Cluster with Assisted Installer

Install an OpenShift Container Platform cluster on OCI using Red Hat's Assisted Installer.

Before you begin, ensure that you review the Prerequisites section of this documentation. The Assisted Installer workflow involves three main steps:
  1. Generating the discovery ISO image (Red Hat Console).
  2. Provisioning the cluster infrastructure (OCI Console).
  3. Completing the installation (Red Hat Console).

Part 2: Provisioning Infrastructure (OCI Console)

Upload the discovery ISO and set up the infrastructure using the Red Hat OpenShift plugin and Resource Manager in the OCI Console.

In the tasks in this section, you work in the OCI Console to upload the discovery ISO image and provision the cluster infrastructure resources as discussed in Configuration Files. See Terraform Defined Resources for OpenShift for a list of the resources created by the Terraform script used for this installation method. Note that you specify an existing compartment and Object Storage bucket for the cluster. See Creating a Compartment and Creating an Object Storage Bucket if you need instructions for creating these resources.

Uploading Red Hat ISO Image to Object Storage

  1. In the OCI Console, create an Object Storage bucket and upload the discovery ISO to the bucket.
  2. Find the uploaded discovery ISO and complete the following steps:
    1. Create a pre-authenticated request for the ISO in Object Storage.
    2. Copy the generated URL to use as the OpenShift Image Source URI in the next step. For more information, see Pre-Authenticated Requests for Object Storage.

Creating OpenShift Container Platform Infrastructure

Follow the steps to set up the necessary infrastructure for the OpenShift cluster.
Note

  • Run the create-resource-attribution-tags stack before running the create-cluster stack to avoid installation failure.
  • The create-resource-attribution-tags stack only needs to be run once. If the tag namespace and defined-tags already exist, you can skip this step for future installations.
  1. Open the navigation menu and select Developer Services, then select Red Hat OpenShift.
  2. On the Stack information page, enter an optional Name and Description.
    • The latest version of the Terraform stack is automatically uploaded for you.
    • The Create in compartment and Terraform version fields are prepopulated.
  3. (Optional) In the Tags section, add tags to organize the resources.
    1. From the Tag namespace dropdown list, select a relevant tag namespace.
    2. In the Tag key field, specify a tag key.
    3. In the Tag value field, specify a value for the tag.
    4. Select Add tag.
  4. Select Next.
  5. On the Configure variables page, review and configure the required variables for the infrastructure resources that the stack creates when you run the Apply job for this execution plan.
  6. In the OpenShift Cluster Configuration section, review and configure the following fields:

    OpenShift Cluster Configuration

    Field Description
    Tenancy OCID

    The OCID of the current tenancy.

    Default value: Current tenancy

    Compartment

    The OCID of the compartment where you want to create the OpenShift cluster.

    Default value: Current compartment

    Cluster Name

    The OpenShift cluster name.

    Note: Use the same value for cluster name that you specified in the Red Hat Hybrid Cloud Console and ensure that it's DNS compatible.

    Installation Method

    The installation method you want to use for installing the cluster.

    Default value: Assisted Installer

    Create OpenShift Image and Instances

    Enables the creation of OpenShift image and instances.

    Default value: True

    OpenShift Image Source URI

    The pre-authenticated request URL created in the previous task, Uploading Red Hat ISO Image to Object Storage.

    Note: This field is visible only when the Create OpenShift Image and Instances checkbox is selected.
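
    The Cluster Name field above must be DNS compatible. A minimal local sanity check, assuming standard RFC 1123 label rules (lowercase alphanumerics and hyphens, no leading or trailing hyphen, at most 63 characters):

    ```python
    import re

    # RFC 1123 label: lowercase letters, digits, and hyphens; must start and
    # end with an alphanumeric character; 1-63 characters long.
    DNS_LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

    def is_dns_compatible(cluster_name: str) -> bool:
        """Return True if cluster_name is a valid DNS label."""
        return bool(DNS_LABEL.match(cluster_name))

    print(is_dns_compatible("my-ocp-cluster"))  # True
    print(is_dns_compatible("My_Cluster"))      # False: uppercase and underscore
    ```

    Running this check before generating the discovery ISO helps avoid a mismatch between the name entered in the Red Hat Hybrid Cloud Console and the name used here.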

  7. In the OpenShift Resource Attribution Tags section, review and configure the following field:

    OpenShift Resource Attribution Tags
    Field Description
    Tag Namespace Compartment For OpenShift Resource Attribution Tags

    The tag namespace compartment for OpenShift resource attribution tags.

    Note: Ensure that the specified tag namespace exists before applying the Terraform stack. The compartment where the tag namespace for resource tagging is created defaults to the current compartment. For OpenShift attribution on OCI resources, the tag namespace and defined tag are set to openshift-tags and openshift-resource. If the openshift-tags namespace already exists, ensure it's correctly defined and applied. For example: "defined-tags": {"openshift-tags": {"openshift-resource": "openshift-resource-infra"}}
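
    The defined-tags value in the note above is a nested mapping from tag namespace to tag key to tag value. A minimal local sketch of building that structure (illustration only, not an OCI API call; the namespace and key names follow the example in the note):

    ```python
    import json

    def build_defined_tags(tag_value: str) -> dict:
        """Build the defined-tags mapping for OpenShift resource attribution."""
        # Namespace "openshift-tags" and key "openshift-resource" come from
        # the note above; the value varies per resource role.
        return {"openshift-tags": {"openshift-resource": tag_value}}

    tags = build_defined_tags("openshift-resource-infra")
    print(json.dumps({"defined-tags": tags}))
    ```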

  8. In the Control Plane Node Configuration section, review and configure the following fields:

    Control Plane Node Configuration

    Note: This section is visible only when the Create OpenShift Image and Instances checkbox is selected.

    Field Description
    Control Plane Shape

    The compute instance shape of the control plane nodes. For more information, see Compute Shapes.

    Default value: VM.Standard.E5.Flex

    Control Plane Node Count

    The number of control plane nodes in the cluster.

    Default value: 3

    Control Plane Node OCPU

    The number of OCPUs available for each control plane node shape.

    Default value: 4

    Control Plane Node Memory

    The amount of memory available, in gigabytes (GB), for each control plane node shape.

    Default value: 16

    Control Plane Node Boot Volume

    The boot volume size, in gigabytes (GB), of each control plane node.

    Default value: 1024

    Control Plane Node VPU

    The number of Volume Performance Units (VPUs) applied to the volume, per 1 GB, of each control plane node.

    Default value: 100

    Distribute Control Plane Instances Across ADs

    Enables automatic distribution of control-plane instances across Availability domains (ADs) in a round-robin sequence, starting from the selected AD. If unselected, all nodes are created in the selected starting AD.

    Default value: True

    (Optional) Starting AD

    Availability domain (AD) used for initial node placement. Additional nodes are automatically distributed across ADs in a round-robin sequence, starting from the selected AD.

    Distribute Control Plane Instances Across FDs

    Distributes control plane instances across Fault Domains in a round-robin sequence. If not selected, the OCI Compute service distributes them based on shape availability and other criteria.

    Default value: True
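
    The round-robin placement described above can be sketched locally: with distribution enabled, successive nodes cycle through the ADs starting from the selected AD; with it disabled, every node lands in the starting AD. The AD names below are hypothetical placeholders.

    ```python
    from itertools import cycle, islice

    def place_nodes(node_count: int, ads: list[str], starting_ad: str,
                    distribute: bool = True) -> list[str]:
        """Assign an AD to each node: round-robin from starting_ad,
        or all nodes in starting_ad when distribution is disabled."""
        if not distribute:
            return [starting_ad] * node_count
        start = ads.index(starting_ad)
        rotated = ads[start:] + ads[:start]          # rotate so we begin at starting_ad
        return list(islice(cycle(rotated), node_count))

    ads = ["AD-1", "AD-2", "AD-3"]
    print(place_nodes(3, ads, "AD-2"))          # ['AD-2', 'AD-3', 'AD-1']
    print(place_nodes(3, ads, "AD-2", False))   # ['AD-2', 'AD-2', 'AD-2']
    ```

    Fault domain distribution follows the same round-robin pattern within each AD.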

  9. In the Compute Node Configuration section, review and configure the following fields:

    Compute Node Configuration

    Note: This section is visible only when the Create OpenShift Image and Instances checkbox is selected.

    Field Description
    Compute Shape

    The instance shape of the compute nodes. For more information, see Compute Shapes.

    Default value: VM.Standard.E5.Flex

    Compute Node Count

    The number of compute nodes in the cluster.

    Default value: 3

    Compute Node OCPU

    The number of OCPUs available for each compute node shape.

    Default value: 6

    Compute Node Memory

    The amount of memory available, in gigabytes (GB), for each compute node shape.

    Default value: 16

    Compute Node Boot Volume

    The boot volume size, in gigabytes (GB), of each compute node.

    Default value: 100

    Compute Node VPU

    The number of Volume Performance Units (VPUs) applied to the volume, per 1 GB, of each compute node.

    Default value: 30

    Distribute Compute Instances Across ADs

    Enables automatic distribution of compute instances across Availability domains (ADs) in a round-robin sequence, starting from the selected AD. If unselected, all nodes are created in the selected starting AD.

    Default value: True

    (Optional) Starting AD

    Availability domain (AD) used for initial node placement. Additional nodes are automatically distributed across ADs in a round-robin sequence, starting from the selected AD.

    Distribute Compute Instances Across FDs

    Distributes compute instances across Fault Domains in a round-robin sequence. If not selected, the OCI Compute service distributes them based on shape availability and other criteria.

    Default value: True

  10. In the Network Configuration section, review and configure the following fields:

    Network Configuration
    Field Description
    Create Public DNS

    Create a public DNS zone based on the Base domain specified in the Zone DNS field.

    Note: If you don't want to create a public DNS zone, create a private DNS zone unless you use your own DNS solution. To resolve cluster hostnames without DNS, add entries to /etc/hosts that map cluster hostnames to the IP address of the api_apps Load Balancer, using the etc_hosts_entry output.

    Default value: True

    Enable Public API Load Balancer

    Creates a Load Balancer for the OpenShift API endpoint in the public subnet with a public IP address for external access.

    Note: If unselected, the Load Balancer is created in a private subnet with limited access within the VCN or connected private networks. In on-premises environments (for example, C3/PCA), public IPs might be internal (RFC 1918). Public access is helpful for remote management, automation, and CI/CD. Contact your network administrator if needed.

    Enable Public Apps Load Balancer

    Creates a Load Balancer for OpenShift applications in the public subnet with a public IP address for external access to workloads.

    Note: If unselected, the Load Balancer is created in a private subnet, limiting access to within the VCN or over a VPN/private network. Public access is useful for internet-facing apps, customer services, or multitenant workloads. In on-premises setups (for example, C3/PCA), public IPs might be internal (RFC 1918). Contact your network team to ensure proper exposure.

    Default value: True

    Create Private DNS

    Creates a private DNS zone based on the Base domain, specified in the Zone DNS field, to support hostname resolution within the VCN.

    Note: The private DNS zone contains the same records as the public DNS zone. If you use an unregistered Base domain, we recommend creating a private DNS zone; otherwise, you might need alternative methods to resolve cluster hostnames. For more information, see Private DNS.

    Zone DNS

    The base domain for your cluster's DNS records (for example, devcluster.openshift.com), which is used to create public or private (or both) DNS Zones. This value must match the Base domain value entered in the Red Hat Hybrid Cloud Console during the creation of the ISO image.

    VCN DNS Label

    A DNS label for the VCN, used with the VNIC's hostname and subnet's DNS label to form a Fully Qualified Domain Name (FQDN) for each VNIC within this subnet (for example, bminstance1.subnet123.vcn1.oraclevcn.com).

    Default value: openshiftvcn

    VCN CIDR

    The IPv4 CIDR blocks for the VCN of your OpenShift Cluster.

    Default value: 10.0.0.0/16

    Public Subnet CIDR

    The IPv4 CIDR blocks for the public subnet of your OpenShift Cluster.

    Default value: 10.0.0.0/20

    Private Subnet CIDR for OCP

    The IPv4 CIDR blocks for the private subnet of your OpenShift Cluster.

    Default value: 10.0.16.0/20

    Reserved Private Subnet CIDR for Bare Metal

    The IPv4 CIDR blocks for the private subnet of OpenShift bare metal clusters.

    Default value: 10.0.32.0/20

    Load Balancer Maximum Bandwidth

    Bandwidth (Mbps) that determines the maximum bandwidth (ingress plus egress) that the Load Balancer can achieve. The value must be between minimumBandwidthInMbps and 8000.

    Default value: 500

    Load Balancer Minimum Bandwidth

    Bandwidth (Mbps) that determines the total pre-provisioned bandwidth (ingress plus egress). The value must be between 10 and maximumBandwidthInMbps.

    Default value: 10
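
    The default CIDR values above can be sanity-checked locally with Python's stdlib ipaddress module: each subnet must fall within the VCN CIDR, and the subnets must not overlap one another.

    ```python
    import ipaddress

    # Default values from the Network Configuration table above.
    vcn = ipaddress.ip_network("10.0.0.0/16")
    subnets = {
        "public":      ipaddress.ip_network("10.0.0.0/20"),
        "private_ocp": ipaddress.ip_network("10.0.16.0/20"),
        "private_bm":  ipaddress.ip_network("10.0.32.0/20"),
    }

    # Every subnet must be contained in the VCN CIDR.
    for name, net in subnets.items():
        assert net.subnet_of(vcn), f"{name} is outside the VCN CIDR"

    # Subnets must not overlap each other.
    nets = list(subnets.values())
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            assert not a.overlaps(b), f"{a} overlaps {b}"

    print("CIDR layout OK")
    ```

    The same check applies if you customize the CIDR blocks for your own address plan.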

  11. (Optional) In the Advanced Configurations section, review and configure the following fields:

    Advanced Configurations
    Field Description
    OCI CCM and CSI Driver Version

    The OCI CCM and CSI driver version to deploy. For more information, see the list of driver versions.

    Default value: v1.30.0

    Use Existing Instance Role Tags

    Indicates whether to reuse an existing instance role tag namespace and defined tags when tagging OCI resources. By default, a new set of instance role tagging resources is created and destroyed with the rest of the cluster resources.

    If required, you can create reusable instance role tag resources separately by using the Terraform stack from the create-instance-role-tags page in the oci-openshift GitHub repo. Existing instance role tagging resources aren't destroyed when the cluster is deleted.

    (Optional) Instance Role Tag Namespace Name

    The name of the instance role tag namespace to reuse.

    Note: This field is visible only when the Use Existing Instance Role Tags checkbox is selected.

    Default value: openshift-{cluster_name}

    Instance Role Tag Namespace Compartment OCID

    The compartment containing the existing instance role tag namespace.

    Note: This field is visible only when the Use Existing Instance Role Tags checkbox is selected.

    Default value: Current compartment

  12. Select Next.
  13. On the Review page, review the stack information and variables.
  14. Select Create to create the stack. The Console redirects to the Resource Manager stacks details page for the new stack.
  15. On the stack details page, select Apply to create an apply job and provision the infrastructure for the cluster. After running an apply job, get the job's details to check its status. Succeeded (SUCCEEDED) indicates that the job has completed.
    What's Next? After running the apply job for the stack, stay in the Resource Manager section of the OCI Console and perform the steps in Getting the Custom Manifests for Installation.

Getting the Custom Manifests for Installation

After provisioning the infrastructure in Resource Manager, get the dynamic_custom_manifest output from the Outputs of the stack job. This output contains all the required manifests, concatenated and preformatted with the configuration values for CCM and CSI.

  1. On the Stacks page in Resource Manager, select the name of the stack to see its details. If the list view of jobs isn't displayed, select Jobs under the Resources section to see the list of jobs.
  2. Select the job for the stack creation. The Job details page is displayed in the Console.
  3. Select Outputs under the Resources section to see the list of outputs for the job.
  4. For the dynamic_custom_manifest output, select Show to view the contents of the output.
  5. Select Copy to copy the contents of the output. Note: We recommend that you don't manually select and copy the text, because doing so can cause indentation problems when you paste the output in the following step.
  6. Using a text or code editor, save the copied output to a new manifest.yaml file. Upload this file in the Custom Manifest step of the installation process described in Part 3: Installing Cluster (Red Hat Console).
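
The dynamic_custom_manifest output is a single string of concatenated YAML documents. After saving it to manifest.yaml, a quick stdlib-only sanity check is to split on the YAML document separator and confirm each document is non-empty (this checks the structure only; it doesn't validate the YAML itself):

```python
def count_manifest_docs(text: str) -> int:
    """Count non-empty YAML documents in a '---'-separated manifest string."""
    docs = [d.strip() for d in text.split("\n---\n")]
    return sum(1 for d in docs if d)

# Shortened sample input; the real output contains the full CCM and CSI manifests.
sample = "apiVersion: v1\nkind: ConfigMap\n---\napiVersion: v1\nkind: Secret\n"
print(count_manifest_docs(sample))  # 2
```

If the count is unexpectedly low (for example, 1), the copy likely lost its formatting; re-copy the output using the Copy control in the Console rather than selecting the text manually.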

Part 3: Installing Cluster (Red Hat Console)

Return to the Red Hat Hybrid Cloud Console to complete the cluster creation using the Assisted Installer. Follow the steps in Completing the remaining Assisted Installer steps (Red Hat documentation). This includes:

  • Assigning roles to the control plane and compute instances running in OCI.
  • Reviewing the settings for cluster storage and networking.
  • Uploading the manifest.yaml file you created in Part 2: Provisioning Infrastructure (OCI Console) in the Custom Manifest section.
  • Starting the cluster installation.

Accessing Cluster Console

After installation is complete, select Launch OpenShift Console in the Red Hat Hybrid Cloud Console. This opens the management console for the new cluster. The Web Console URL for the cluster console is displayed on the Installation progress page and can be bookmarked in a browser. In the cluster management console, download the kubeconfig file, which you use to access the cluster with the OpenShift CLI (oc) or the Kubernetes CLI (kubectl).