Installing a Cluster with Agent-based Installer Using Terraform

Learn how to provision the OCI infrastructure for the Red Hat OpenShift Container Platform using a Terraform script with the Agent-based installer.

Use the Red Hat OpenShift plugin to automatically provision infrastructure with Terraform. The latest version of the Terraform stack is available on the Stack information page within the Red Hat OpenShift plugin.

You can access earlier versions of the Terraform stack from the OpenShift on OCI Releases page on GitHub. Download the required version of the create-cluster zip file from the Assets section.

Important

  • Run the create-resource-attribution-tags stack before running the create-cluster stack to avoid installation failure. See the Prerequisites topic for more information.
  • The create-resource-attribution-tags stack only needs to be run once. If the tag namespace and defined-tags already exist, you can skip this step for future installations.
  • OpenShift requires manage permissions to perform operations on instances, volumes, and networking resources. Deploy OpenShift in a dedicated compartment to avoid conflicts with other applications that might be running in the same compartment.

See Terraform Defined Resources for OpenShift for a list of the resources created by the stack.

  1. Open the navigation menu and select Developer Services, then select Red Hat OpenShift.
  2. On the Stack information page, enter an optional Name and Description.
    • The latest version of the Terraform stack is automatically uploaded for you.
    • The Create in compartment and Terraform version fields are prepopulated.
  3. On the Configure variables page, review and configure the required variables for the infrastructure resources that the stack creates when you run the Apply job for the first time.
  4. In the OpenShift Cluster Configuration section:
    • Change the Installation Method to Agent-based.
    • Clear the Create OpenShift Image and Instances checkbox.

    OpenShift Cluster Configuration

    Field Description
    Tenancy OCID

    The OCID of the current tenancy.

    Default value: Current tenancy

    Compartment

    The OCID of the compartment where you want to create the OpenShift cluster.

    Default value: Current compartment

    Cluster Name

    The OpenShift cluster name.

    Note: Use the same cluster name that you specified in your agent-config.yaml and install-config.yaml files, and ensure that it's DNS compatible (see the example after this table).

    Installation Method

    The installation method you want to use for installing the cluster.

    Note: For Agent-based installation, select Agent-based.

    Default value: Assisted Installer

    Create OpenShift Image and Instances

    Enables the creation of the OpenShift image and instances.

    Default value: True

    OpenShift Image Source URI

    The pre-authenticated request URL for the OpenShift image in Object Storage. For the Agent-based installation method, you provide this value later in this procedure, after you upload the discovery ISO and create a pre-authenticated request for it.

    Note: This field is visible only when the Create OpenShift Image and Instances checkbox is selected.
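
    For reference, the following is a minimal, hypothetical sketch of how the Cluster Name value lines up with the configuration files referenced in this procedure. The cluster name ocicluster is a placeholder, and the base domain reuses the devcluster.openshift.com example from the Network Configuration section; substitute your own values.

      # install-config.yaml (excerpt, hypothetical values)
      apiVersion: v1
      baseDomain: devcluster.openshift.com   # must match the Zone DNS variable
      metadata:
        name: ocicluster                     # must match the Cluster Name variable

      # agent-config.yaml (excerpt, hypothetical values)
      apiVersion: v1beta1
      kind: AgentConfig
      metadata:
        name: ocicluster                     # same cluster name as install-config.yaml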

  5. In the OpenShift Resource Attribution Tags section, review and configure the following fields:

    OpenShift Resource Attribution Tags
    Field Description
    Tag Namespace Compartment For OpenShift Resource Attribution Tags

    The tag namespace compartment for OpenShift resource attribution tags.

    Note: Ensure that the specified tag namespace and tags exist before applying the Terraform stack. The compartment where the tag namespace for resource tagging is created defaults to the current compartment. For OpenShift attribution on OCI resources, the tag namespace and defined tag are set to openshift-tags and openshift-resource. If the openshift-tags namespace already exists, ensure that it's correctly defined and applied, for example: "defined-tags": {"openshift-tags": {"openshift-resource": "openshift-resource-infra"}} (see the sketch after this table).
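
    For reference, the tag structure from the note above, expressed as a sketch. This is an illustration of the expected structure only; the tag namespace and defined tags are created by the create-resource-attribution-tags stack, not by a file you write.

      defined-tags:
        openshift-tags:
          openshift-resource: openshift-resource-infra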

  6. In the Network Configuration section, review and configure the following fields:

    Network Configuration
    Field Description
    Create Public DNS

    Creates a public DNS zone based on the Base domain specified in the Zone DNS field.

    Note: If you don't want to create a public DNS zone, create a private DNS zone unless you use your own DNS solution. To resolve cluster hostnames without DNS, add entries to /etc/hosts that map cluster hostnames to the IP address of the api_apps Load Balancer, using the etc_hosts_entry output.

    Default value: True

    Enable Public API Load Balancer

    Creates a Load Balancer for the OpenShift API endpoint in the public subnet with a public IP address for external access.

    Note: If unselected, the Load Balancer is created in a private subnet with limited access within the VCN or connected private networks. In on-premises environments (for example, C3/PCA), public IPs might be internal (RFC 1918). Public access is helpful for remote management, automation, and CI/CD. Contact your network administrator if needed.

    Enable Public Apps Load Balancer

    Creates a Load Balancer for OpenShift applications in the public subnet with a public IP address for external access to workloads.

    Note: If unselected, the Load Balancer is created in a private subnet, limiting access to within the VCN or over a VPN/private network. Public access is useful for internet-facing apps, customer services, or multitenant workloads. In on-premises setups (for example, C3/PCA), public IPs might be internal (RFC 1918). Contact your network team to ensure proper exposure.

    Default value: True

    Create Private DNS

    Creates a private DNS zone, based on the Base domain specified in the Zone DNS field, to support hostname resolution within the VCN.

    Note: The private DNS zone contains the same records as the public DNS zone. If you use an unregistered Base domain, we recommend creating a private DNS zone; otherwise, you might need alternative methods to resolve cluster hostnames. For more information, see Private DNS.

    Zone DNS

    The base domain for your cluster's DNS records (for example, devcluster.openshift.com), which is used to create public or private (or both) DNS zones. This value must match the baseDomain value specified in the install-config.yaml file.
    VCN DNS Label

    A DNS label for the VCN, used with the VNIC's hostname and subnet's DNS label to form a Fully Qualified Domain Name (FQDN) for each VNIC within this subnet (for example, bminstance1.subnet123.vcn1.oraclevcn.com).

    Default value: openshiftvcn

    VCN CIDR

    The IPv4 CIDR blocks for the VCN of your OpenShift Cluster.

    Default value: 10.0.0.0/16

    Public Subnet CIDR

    The IPv4 CIDR blocks for the public subnet of your OpenShift Cluster.

    Default value: 10.0.0.0/20

    Private Subnet CIDR for OCP

    The IPv4 CIDR blocks for the private subnet of your OpenShift Cluster.

    Default value: 10.0.16.0/20

    Reserved Private Subnet CIDR for Bare Metal

    The IPv4 CIDR blocks for the private subnet of OpenShift Bare metal Clusters.

    Default value: 10.0.32.0/20

    Rendezvous IP

    The IP address used to bootstrap the cluster with the Agent-based Installer. It must match the rendezvousIP in agent-config.yaml.

    Note: For Bare metal instances, this IP must be within the private_subnet_ocp subnet CIDR (see the sketch after this table).

    Load Balancer Maximum Bandwidth

    The maximum total bandwidth (ingress plus egress), in Mbps, that the Load Balancer can achieve. The value must be between minimumBandwidthInMbps and 8000.

    Default value: 500

    Load Balancer Minimum Bandwidth

    The total pre-provisioned bandwidth (ingress plus egress), in Mbps. The value must be between 10 and maximumBandwidthInMbps.

    Default value: 10

  7. (Optional) In the Advanced Configuration section, review and configure the following fields:

    Advanced Configurations
    Field Description
    OCI CCM and CSI Driver Version

    The OCI CCM and CSI driver version to deploy. For more information, see the list of driver versions.

    Default value: v1.30.0

    Use Existing Instance Role Tags

    Indicates whether to reuse an existing instance role tag namespace and defined tags when tagging OCI resources. By default, a new set of instance role tagging resources is created and destroyed with the rest of the cluster resources.

    If required, create reusable instance role tag resources separately by using the create-instance-role-tags Terraform stack in the oci-openshift GitHub repository. Existing instance role tagging resources aren't destroyed when the cluster is deleted.

    (Optional) Instance Role Tag Namespace Name

    The name of the instance role tag namespace to reuse.

    Note: This field is visible only when the Use Existing Instance Role Tags checkbox is selected.

    Default value: openshift-{cluster_name}

    Instance Role Tag Namespace Compartment OCID

    The compartment containing the existing instance role tag namespace.

    Note: This field is visible only when the Use Existing Instance Role Tags checkbox is selected.

    Default value: Current compartment

  8. Select Next.
  9. On the Review page, review the stack information and variables.
  10. Select Create to create the stack. The Console redirects to the Resource Manager stack details page for the new stack.
  11. On the stack details page, select Apply to create an apply job and provision the infrastructure for the cluster. After running an apply job, get the job's details to check its status. Succeeded (SUCCEEDED) indicates that the job has completed. When it completes, all the OpenShift resources except the ISO image and the Compute instances are created.
  12. On the Stacks page in Resource Manager, select the name of the stack to see its details. If the list view of jobs isn't displayed, select Jobs under the Resources section to see the list of jobs.
  13. Select the job for the stack creation. The Job details page is displayed in the Console.
  14. Select Outputs under the Resources section to see the list of outputs for the job.
  15. For the dynamic_custom_manifest output, select show to view the contents of the output.
  16. Select copy to copy the contents of the output. Note: We recommend that you don't manually select and copy the text, as this can cause problems with indentation when you paste this output in the following step.
  17. Using a text or code editor, save the copied output to a new manifest.yaml file.
  18. Create configuration files and a bootable ISO image for installing your cluster. See Creating configuration files for installing a cluster on OCI (Red Hat documentation) for instructions.
    Return to this documentation after you create the configuration files and ISO image, then continue the installation.
  19. Upload the discovery ISO image file to a bucket in OCI Object Storage. See Putting Data into Object Storage if you need instructions.
  20. In the OCI Console, create an Object Storage bucket, if you don't already have one, and upload the discovery ISO to the bucket.
  21. Find the uploaded discovery ISO and complete the following steps:
    1. Create a pre-authenticated request for the ISO in Object Storage.
    2. Copy the generated URL to use as the OpenShift Image Source URI in a later step. For more information, see Pre-Authenticated Requests for Object Storage.
  22. Navigate to the Resource Manager service and access the stack details page for the stack you created to install OpenShift.
  23. Under Resources, select Variables.
  24. Select Edit variables.
  25. In the OpenShift Cluster Configuration section:
    • Select the Create OpenShift Image and Instances checkbox.
    • Paste the pre-authenticated request URL into the OpenShift Image Source URI field.
  26. Review the values in the Control Plane Node Configuration and Compute Node Configuration sections.
    Note

    Ensure that the Control Plane Node Count and Compute Node Count values match the values in the install-config.yaml file (see the sketch after these tables).

    Control Plane Node Configuration

    Note: This section is visible only when the Create OpenShift Image and Instances checkbox is selected.

    Field Description
    Control Plane Shape

    The compute instance shape of the control plane nodes. For more information, see Compute Shapes.

    Default value: VM.Standard.E5.Flex

    Control Plane Node Count

    The number of control plane nodes in the cluster.

    Default value: 3

    Control Plane Node OCPU

    The number of OCPUs available for each control plane node shape.

    Default value: 4

    Control Plane Node Memory

    The amount of memory available, in gigabytes (GB), for each control plane node shape.

    Default value: 16

    Control Plane Node Boot Volume

    The boot volume size, in gigabytes (GB), of each control plane node.

    Default value: 1024

    Control Plane Node VPU

    The number of Volume Performance Units (VPUs) applied to the volume, per 1 GB, of each control plane node.

    Default value: 100

    Distribute Control Plane Instances Across ADs

    Enables automatic distribution of control-plane instances across Availability domains (ADs) in a round-robin sequence, starting from the selected AD. If unselected, all nodes are created in the selected starting AD.

    Default value: True

    (Optional) Starting AD

    Availability domain (AD) used for initial node placement. Additional nodes are automatically distributed across ADs in a round-robin sequence, starting from the selected AD.

    Distribute Control Plane Instances Across FDs

    Distributes control plane instances across Fault Domains in a round-robin sequence. If not selected, the OCI Compute service distributes them based on shape availability and other criteria.

    Default value: True

    Compute Node Configuration

    Note: This section is visible only when the Create OpenShift Image and Instances checkbox is selected.

    Field Description
    Compute Shape

    The instance shape of the compute nodes. For more information, see Compute Shapes.

    Default value: VM.Standard.E5.Flex

    Compute Node Count

    The number of compute nodes in the cluster.

    Default value: 3

    Compute Node OCPU

    The number of OCPUs available for each compute node shape.

    Default value: 6

    Compute Node Memory

    The amount of memory available, in gigabytes (GB), for each compute node shape.

    Default value: 16

    Compute Node Boot Volume

    The boot volume size, in gigabytes (GB), of each compute node.

    Default value: 100

    Compute Node VPU

    The number of Volume Performance Units (VPUs) applied to the volume, per 1 GB, of each compute node.

    Default value: 30

    Distribute Compute Instances Across ADs

    Enables automatic distribution of compute instances across Availability domains (ADs) in a round-robin sequence, starting from the selected AD. If unselected, all nodes are created in the selected starting AD.

    Default value: True

    (Optional) Starting AD

    Availability domain (AD) used for initial node placement. Additional nodes are automatically distributed across ADs in a round-robin sequence, starting from the selected AD.

    Distribute Compute Instances Across FDs

    Distributes compute instances across Fault Domains in a round-robin sequence. If not selected, the OCI Compute service distributes them based on shape availability and other criteria.

    Default value: True
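
    For reference, the following is a minimal, hypothetical install-config.yaml excerpt showing how the node counts in these sections correspond to the replicas values; the counts shown are the stack defaults.

      # install-config.yaml (excerpt, hypothetical values)
      controlPlane:
        name: master
        replicas: 3                          # must match Control Plane Node Count
      compute:
      - name: worker
        replicas: 3                          # must match Compute Node Count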

  27. In the Network Configuration section, verify the value in the Rendezvous IP field. This IP address is assigned to one of the control plane instances, which then acts as the bootstrap node for the cluster.
    Note

    Ensure the value you provide in the Rendezvous IP field matches the rendezvousIP in agent-config.yaml. For Bare metal instances, this IP must be within the private_subnet_ocp subnet CIDR.
  28. Select Next to review and save the changes.
  29. On the stack details page, select Apply to run another apply job for the stack. The job creates the custom software image used by the Compute instances in the cluster, and it provisions the Compute instances. After the stack provisions the Compute instances, the cluster installation begins automatically.

    What's Next? Follow the instructions in Verifying that your Agent-based cluster installation runs on OCI (Red Hat documentation) to verify that your cluster is running. This step is performed in the OpenShift Container Platform CLI.