7.4.1 Deploying OUD Using a YAML File

To deploy Oracle Unified Directory (OUD) using a YAML file:
  1. Navigate to the $WORKDIR/kubernetes/helm14c directory:
    cd $WORKDIR/kubernetes/helm14c
  2. Create an oud-ds-rs-values-override.yaml file as follows:
    image:
      repository: <image_location>
      tag: <image_tag>
      pullPolicy: IfNotPresent
    imagePullSecrets:
      - name: orclcred
    oudConfig:
     # memory, cpu parameters for both requests and limits for oud instances
      resources:
        limits:
          cpu: "1"
          memory: "4Gi"
        requests:
          cpu: "500m" 
          memory: "4Gi"
      rootUserPassword: <password>
      sampleData: "200"
    persistence:
      type: filesystem
      filesystem:
        hostPath:
          path: <persistent_volume>/oud_user_projects
    cronJob:
      kubectlImage:
        repository: bitnami/kubectl
        tag: <version>
        pullPolicy: IfNotPresent
     
      imagePullSecrets:
        - name: dockercred
    For example:
    image:
      repository: container-registry.oracle.com/middleware/oud_cpu
      tag: 14.1.2.1.0-jdk17-ol8-<YYMMDD>
      pullPolicy: IfNotPresent
    imagePullSecrets:
      - name: orclcred
    oudConfig:
     # memory, cpu parameters for both requests and limits for oud instances
      resources:
        limits:
          cpu: "1"
          memory: "8Gi"
        requests:
          cpu: "500m" 
          memory: "4Gi"
      rootUserPassword: <password>
      sampleData: "200"
    persistence:
      type: filesystem
      filesystem:
        hostPath:
          path: /nfs_volumes/oudpv/oud_user_projects
    cronJob:
      kubectlImage:
        repository: bitnami/kubectl
        tag: 1.30.3
        pullPolicy: IfNotPresent
     
      imagePullSecrets:
        - name: dockercred
    The following caveats exist:
    • Replace <password> with the relevant password.
    • sampleData: "200" loads 200 sample users into the default baseDN dc=example,dc=com. If you do not want sample data, remove this entry. If sampleData is set to 1,000,000 or greater, then you must add the following entries to the YAML file to prevent inconsistencies in dsreplication:
      deploymentConfig:
        startupTime: 720
        period: 120
        timeout: 60
    • Set <version> in kubectlImage: tag: to match your Kubernetes version (as reported by kubectl version). For example, if your Kubernetes version is 1.30.3, set the tag to 1.30.3.
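      One way to determine the value to use for <version> is to parse the server version from kubectl version output. The following is a minimal sketch that uses a sample output line in place of a live cluster; on a real cluster you would pipe the output of kubectl version instead:

      ```shell
      # Sample line as printed by `kubectl version`; stands in for a live cluster.
      sample_output='Server Version: v1.30.3'
      # Strip the prefix and leading "v" to obtain the tag value.
      tag=$(printf '%s\n' "$sample_output" | sed -n 's/^Server Version: v//p')
      echo "$tag"
      ```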
    • If you are not using Oracle Container Registry or your own container registry for your OUD container image, then you can remove the following:
      imagePullSecrets:
        - name: orclcred
    • If your cluster does not have internet access to pull external images, such as bitnami/kubectl and busybox, you must load the images into a local container registry and then set the following:
      cronJob:
        kubectlImage:
          repository: container-registry.example.com/bitnami/kubectl
          tag: 1.30.3
          pullPolicy: IfNotPresent
      	   
      busybox:
        image: container-registry.example.com/busybox 
    • If using NFS for your persistent volume then change the persistence section as follows:

      Note:

      If you want to use NFS, ensure that you have a default Kubernetes storage class defined for your environment that allows network storage. For more information on storage classes, see Storage Classes.
      persistence:
        type: networkstorage
        networkstorage:
          nfs: 
            path: <persistent_volume>/oud_user_projects
            server: <NFS IP address>
        # If true, the storage class is created. If false, provide the name of an existing storage class in storageClass.
        storageClassCreate: true
        storageClass: oud-sc
        # If storageClassCreate is true, provide the custom provisioner to use, if any. If you do not have a custom provisioner, delete this line to use the default class kubernetes.io/is-default-class.
        provisioner: kubernetes.io/is-default-class
      The following caveats exist:
      • If you want to create your own storage class, set storageClassCreate: true. In this case, it is recommended to set storageClass to a value of your choice, and provisioner to the provisioner supported by your cloud vendor.
      • If you have an existing storageClass that supports network storage, set storageClassCreate: false and storageClass to the NAME value returned in “kubectl get storageclass”. The provisioner can be ignored.
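      The NAME value can be read directly from kubectl get storageclass output. The sketch below uses sample output in place of a live cluster (the class names and provisioners shown are illustrative only) and extracts the class marked (default); on a real cluster you would pipe kubectl get storageclass instead:

      ```shell
      # Illustrative sample of `kubectl get storageclass` output.
      sample='NAME                   PROVISIONER                                     AGE
      nfs-client (default)   k8s-sigs.io/nfs-subdir-external-provisioner     90d
      oci-bv                 blockvolume.csi.oraclecloud.com                 90d'
      # The default class is flagged "(default)"; the first column is its NAME.
      default_sc=$(printf '%s\n' "$sample" | awk '/\(default\)/ {print $1}')
      echo "$default_sc"
      ```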
    • If using Block Device storage for your persistent volume then change the persistence section as follows:

      Note:

      If you want to use block devices, ensure that you have a default Kubernetes storage class defined for your environment that allows dynamic storage. Each vendor has its own storage provider, but it may not be configured to provide dynamic storage allocation. For more information on storage classes, see Storage Classes.
      persistence:
        type: blockstorage
        # Specify accessMode: ReadWriteOnce for block storage (use ReadWriteMany for NFS)
        accessMode: ReadWriteOnce
        # If true, the storage class is created. If false, provide the name of an existing storage class in storageClass.
        storageClassCreate: true
        storageClass: oud-sc
        # If storageClassCreate is true, provide the custom provisioner to use, if any; otherwise the default is used.
        provisioner: oracle.com/oci
      The following caveats exist:
      • If you want to create your own storage class, set storageClassCreate: true. In this case, it is recommended to set storageClass to a value of your choice, and provisioner to the provisioner supported by your cloud vendor.
      • If you have an existing storageClass that supports dynamic storage, set storageClassCreate: false and storageClass to the NAME value returned in “kubectl get storageclass”. The provisioner can be ignored.
    • For resources, limits, and requests, the example CPU and memory values shown are for development environments only. For Enterprise Deployments, please review the performance recommendations and sizing requirements in Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster.

      Note:

      Limits and requests for CPU resources are measured in CPU units. One CPU in Kubernetes is equivalent to 1 vCPU/Core for cloud providers, and 1 hyperthread on bare-metal Intel processors. An "m" suffix in a CPU attribute indicates milli-CPU, so 500m is 50% of a CPU. Memory can be expressed in various units: one Mi is one mebibyte (1024^2 bytes), and one Gi is one gibibyte (1024^3 bytes). For more information, see Resource Management for Pods and Containers, Assign Memory Resources to Containers and Pods, and Assign CPU Resources to Containers and Pods.
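      As a worked example of these units (a sketch for illustration only, not part of the chart configuration):

      ```shell
      # An "m" suffix means milli-CPU: 500m = 500/1000 = 0.5 CPU.
      cpu=$(awk 'BEGIN { printf "%.1f", 500 / 1000 }')
      # Gi is an IEC unit: 4Gi = 4 * 1024^3 bytes.
      bytes=$((4 * 1024 * 1024 * 1024))
      echo "500m = ${cpu} CPU, 4Gi = ${bytes} bytes"
      ```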

      Note:

      The parameters above are also utilized by the Kubernetes Horizontal Pod Autoscaler (HPA). For more details on HPA, see Kubernetes Horizontal Pod Autoscaler.
    • If you plan to integrate OUD with other Oracle components, then you must specify the following under the oudConfig: section:
        integration: <Integration option>
      For example:
      oudConfig:
        etc...
        integration: <Integration option>
    • If you want to enable Assured Replication, see Enabling Assured Replication (Optional).
    • The examples given above are not an exhaustive list of all the parameters and environment variables that can be passed in the override yaml file. For more information, see Configuration Parameters for the oud-ds-rs Helm Chart and Environment Variables Used in the oud-ds-rs Helm Chart.
  3. Run the following command to deploy OUD:
    helm install --namespace <namespace> \
    --values oud-ds-rs-values-override.yaml \
    <release_name> oud-ds-rs
    For example:
    helm install --namespace oudns \
    --values oud-ds-rs-values-override.yaml \
    oud-ds-rs oud-ds-rs
    The output will be similar to that shown in Helm Command Output.
  4. Check the OUD deployment as per Verifying the OUD Deployment and Verifying OUD Assured Replication Status.
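    As a quick sanity check before the full verification steps, you can confirm that all OUD pods are ready. The sketch below parses sample kubectl get pods output (the pod names assume the example release name oud-ds-rs and namespace oudns from above); on a live cluster you would pipe kubectl get pods -n oudns instead:

    ```shell
    # Illustrative sample of `kubectl get pods -n oudns` output.
    sample='NAME          READY   STATUS    RESTARTS   AGE
    oud-ds-rs-0   1/1     Running   0          10m
    oud-ds-rs-1   1/1     Running   0          8m
    oud-ds-rs-2   1/1     Running   0          6m'
    # Count rows (skipping the header) that are not fully ready and Running.
    not_ready=$(printf '%s\n' "$sample" | \
      awk 'NR > 1 && ($2 != "1/1" || $3 != "Running") {c++} END {print c+0}')
    echo "Pods not ready: $not_ready"
    ```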