5.1.1 Kubernetes Cluster Requirements

OAA, OARM, and OUA are composed of multiple components that run as microservices in a Kubernetes cluster, managed by Helm charts. Specifically, each component (microservice) runs as a Kubernetes pod, which is deployed to a Kubernetes node in the cluster.

5.1.1.1 Configuring a Kubernetes Cluster

You must install a Kubernetes cluster that meets the following requirements:

  • The Kubernetes cluster must have a minimum of one master (control plane) node and two worker nodes.
  • The nodes must meet the following minimum system requirements:
    System Minimum Requirements
    Memory: 64 GB RAM
    Disk: 150 GB
    CPU: 8 x CPU with virtualization support (for example, Intel VT)
  • An installation of Helm is required on the Kubernetes cluster. Helm is used to create and deploy the necessary resources.
  • A supported container engine must be installed and running on the Kubernetes cluster.
  • The Kubernetes cluster and container engine must meet the minimum version requirements outlined in Supported Virtualization and Partitioning Technologies for Oracle Fusion Middleware.
  • The nodes in the Kubernetes cluster must have access to a shared volume such as a Network File System (NFS) mount. The NFS mounts are used by the Management Container pod during installation, at runtime for the file-based vault (if not using an OCI-based vault), and for other post-installation tasks such as loading geo-location data.
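For example, a worker node might make the shared volume available through an /etc/fstab entry such as the following sketch. The hostname, export path, and mount options shown are illustrative only; use the values appropriate to your NFS server:

```
# Example /etc/fstab entry on each Kubernetes node (illustrative hostname, path, and options):
nfs.example.com:/nfs/mountOAApv  /nfs/mountOAApv  nfs  defaults  0 0
```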

Note:

This documentation does not explain how to configure a Kubernetes cluster, because the products can be deployed on any compliant Kubernetes distribution. If you need to understand how to configure a Kubernetes cluster ready for an OAA, OARM, and OUA deployment, see the Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster.

5.1.1.2 Configuring NFS Volumes

All nodes in the Kubernetes cluster require access to shared volumes on an NFS server. During the installation, the Management Container pod stores configuration information, credentials, and logs in the NFS volumes. Once the installation is complete, the pods require access to a volume that contains the file-based vault (if not using an OCI-based vault) for storing and accessing runtime credentials.

The following NFS volumes must be created prior to the installation. In all cases, the NFS export path must have read/write/execute permissions for all users. Make sure the NFS volumes are accessible to all nodes in the cluster.
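The steps above can be sketched as a short script run on the NFS server. The NFS_MOUNT_PATH default below is an illustrative demo location; on a real server this would be the actual export path, for example /nfs/mountOAApv:

```shell
#!/bin/sh
# Sketch: create the four NFS export directories with read/write/execute for all.
# NFS_MOUNT_PATH is an assumed example location; adjust it to your environment.
NFS_MOUNT_PATH=${NFS_MOUNT_PATH:-/tmp/oaa-nfs-demo}
for dir in OAAConfig OAACreds OAALogs OAAVault; do
  mkdir -p "$NFS_MOUNT_PATH/$dir"
  chmod 777 "$NFS_MOUNT_PATH/$dir"   # rwx for owner, group, and others
done
ls -l "$NFS_MOUNT_PATH"
```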

Volume Description Path
Configuration An NFS volume that stores the OAA configuration, such as installOAA.properties. <NFS_CONFIG_PATH>
Credentials An NFS volume that stores OAA credentials, such as the Kubernetes and Helm configuration, SSH key, PKCS12 files, and the OAA and OUA TAP partner keystores. <NFS_CREDS_PATH>
Logs An NFS volume that stores OAA installation logs and status. <NFS_LOGS_PATH>
File-based vault An NFS volume that stores OAA runtime credentials. <NFS_VAULT_PATH>
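On the NFS server, the export itself might be declared with an /etc/exports entry along these lines. This is a sketch only; the path and export options are illustrative, and the options you need depend on your NFS server and security requirements:

```
# Example /etc/exports entry on the NFS server (illustrative path and options):
/nfs/mountOAApv  *(rw,sync,no_root_squash)
```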

5.1.1.3 Configuration Checkpoint

  1. Before proceeding make sure you have the following information for your Kubernetes cluster:
    Variable Your Value Sample Value Description
    <K8S_WORKER_HOST1>,<K8S_WORKER_HOST2>, <K8S_WORKER_HOST3>   worker1.example.com,worker2.example.com,worker3.example.com The fully qualified hostnames of the worker nodes.
    <NFS_HOST>   nfs.example.com The fully qualified hostname of the NFS server used by the Kubernetes cluster.
    <NFS_MOUNT_PATH>   /nfs/mountOAApv The mount path on the NFS server that the Kubernetes worker nodes can access.
    <NFS_CONFIG_PATH>   /nfs/mountOAApv/OAAConfig The path on the NFS server to the configuration volume.
    <NFS_CREDS_PATH>   /nfs/mountOAApv/OAACreds The path on the NFS server to the credentials volume.
    <NFS_LOGS_PATH>   /nfs/mountOAApv/OAALogs The path on the NFS server to the logs volume.
    <NFS_VAULT_PATH>   /nfs/mountOAApv/OAAVault The path on the NFS server to the file-based vault volume.
  2. Check that Kubernetes is working by running the following command from the bastion node or the master (control plane) node:
    kubectl get nodes 
    Make sure all the nodes return a STATUS of Ready, for example:
    
    NAME           STATUS   ROLES           AGE   VERSION
    worker-node1   Ready    <none>          76d   v1.29.3+3.el8
    master-node    Ready    control-plane   76d   v1.29.3+3.el8
    worker-node2   Ready    <none>          76d   v1.29.3+3.el8
    worker-node3   Ready    <none>          76d   v1.29.3+3.el8
  3. From the bastion node or the master (control plane) node, check the permissions on the <NFS_CONFIG_PATH>, <NFS_CREDS_PATH>, <NFS_LOGS_PATH>, and <NFS_VAULT_PATH> directories, and make sure they have rwx permissions for all users. For example, if the directories are all under <NFS_MOUNT_PATH> (/nfs/mountOAApv):
    ls -l /nfs/mountOAApv
    drwxrwxrwx. 3 opc opc  3 <DATE> OAAConfig
    drwxrwxrwx. 2 opc opc 17 <DATE> OAACreds
    drwxrwxrwx. 2 opc opc 34 <DATE> OAALogs
    drwxrwxrwx. 2 opc opc  0 <DATE> OAAVault
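The node readiness check in step 2 can also be scripted by filtering the STATUS column so that only problem nodes are printed. This is a sketch; the here-document below stands in for live `kubectl get nodes --no-headers` output so the filter itself can be tried anywhere, and the node names and versions are illustrative:

```shell
#!/bin/sh
# Print any node whose STATUS column is not "Ready"; no output means all nodes are Ready.
# The here-document is captured sample output; on a live cluster, replace it with:
#   kubectl get nodes --no-headers | awk '$2 != "Ready" { print $1, $2 }'
awk '$2 != "Ready" { print $1, $2 }' <<'EOF'
worker-node1   Ready      <none>          76d   v1.29.3+3.el8
master-node    Ready      control-plane   76d   v1.29.3+3.el8
worker-node2   NotReady   <none>          76d   v1.29.3+3.el8
EOF
```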