4 Installing the Offline Mediation Controller Cloud Native Deployment Package

Learn how to install the Oracle Communications Offline Mediation Controller cloud native deployment package on a cloud native environment.

About Deploying into Kubernetes

Helm is the recommended package manager for deploying Offline Mediation Controller cloud native services into Kubernetes. A Helm chart is a collection of files that describe a set of Kubernetes resources. It includes YAML template descriptors for all Kubernetes resources and a values.yaml file that provides default configuration values for the chart.

The Offline Mediation Controller cloud native deployment package includes oc-cn-ocomc-core-helm-chart-15.1.0.x.0.tgz.

When you install the Helm chart, it generates valid Kubernetes manifest files by replacing default values from values.yaml with custom values from override-values.yaml and creates Kubernetes resources. Helm calls this a new release. You use the release name to track and maintain this installation.

Automatically Pulling Images from Private Docker Registries

You can automatically pull images from your private Docker registry by creating an ImagePullSecrets, which contains a list of authorization tokens (or Secrets) for accessing a private Docker registry. You then add references to the ImagePullSecrets in your Offline Mediation Controller Helm chart's override-values.yaml file. This allows pods to submit the Secret to the private Docker registry whenever they want to pull images.

Automatically pulling images from a private Docker registry involves these high-level steps:

  1. Create a Secret outside of the Helm chart by entering this command:

    kubectl create secret docker-registry SecretName --docker-server=RegistryServer --docker-username=UserName --docker-password=Password -n NameSpace

    where:

    • SecretName is the name of your Kubernetes Secret.

    • RegistryServer is your private Docker registry's fully qualified domain name (FQDN) (repoHost:repoPort).

    • UserName and Password are your private Docker registry's user name and password.

    • NameSpace is the namespace you will use for installing the Offline Mediation Controller Helm chart.

    For example:

    kubectl create secret docker-registry my-docker-registry --docker-server=example.com:2660/ --docker-username=xyz --docker-password=password -n oms
  2. Add the imagePullSecrets key to your override-values.yaml file for oc-cn-ocomc-core:

    imagePullSecrets: SecretName
  3. Add the ocomc.imageRepository key to your override-values.yaml file (see the combined example after these steps):

    imageRepository: "RegistryServer"
  4. Deploy oc-cn-ocomc-core.
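For reference, the following is a minimal override-values.yaml sketch that combines steps 2 and 3, using the example Secret and registry from step 1. The placement of imageRepository under ocomc follows step 3; adjust it to match your chart's values.yaml hierarchy:

imagePullSecrets: my-docker-registry

ocomc:
  imageRepository: "example.com:2660/"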

Automatically Rolling Deployments by Using Annotations

Whenever a ConfigMap entry or a Secret file is modified, its associated pod must be restarted before the change takes effect. However, Kubernetes restarts a pod automatically only when the pod's deployment specification changes, so a ConfigMap or Secret could be updated while the application keeps running with its old configuration.

You can configure a pod to roll automatically whenever a ConfigMap or Secret file changes. To do so, embed a checksum of the ConfigMap or Secret, computed with the sha256sum template function, in the pod's deployment specification so that any configuration change also changes the specification. Add an annotations section similar to this one to the pod's deployment specification:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

For more information, see Chart Development Tips and Tricks in the Helm documentation (https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments).

About StatefulSet Implementation

You can implement StatefulSet deployment for Node Managers in Kubernetes. StatefulSets ensure that each pod has a stable and unique network identity and consistent storage, simplifying scaling through the Horizontal Pod Autoscaler. You can use the StatefulSet controller to create and delete pods and confirm that each new pod receives a consistent identity and associated resources. You can also customize the deployed StatefulSets to meet your specific requirements.
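Because each Node Manager set is deployed as a StatefulSet, its pods receive stable ordinal names (for example, nm-cc-0 and nm-cc-1 for a set named nm-cc). You can confirm the identities with standard kubectl commands, for example:

# List the StatefulSets and their pods in your namespace;
# pod names carry stable ordinal suffixes such as nm-cc-0, nm-cc-1.
kubectl get statefulsets -n NameSpace
kubectl get pods -n NameSpace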

About Sidecars

Offline Mediation Controller cloud native uses Kubernetes sidecars to interact with the Offline Mediation Controller REST Services Manager. For more information about sidecars, see "Sidecar Containers" in the Kubernetes documentation.

Offline Mediation Controller cloud native deploys two types of sidecars:
  • Node Manager Sidecar

  • Admin Server Sidecar

About the Node Manager Sidecar

You can use Node Manager sidecars to perform tasks related to the Node Managers.
  • If the Node Manager is running, you can register it with the Administration Server. This ensures that the Node Manager can immediately participate in cluster activities.

  • If the replication function is enabled, you can manage the replication for new Node Managers. New Node Managers replicate the node chain and inherit necessary state and data from the parent Node Manager through this replication process.

  • You can persist the registration and replication process of the new Node Managers to preserve them across pod restarts. This ensures that after a pod restart, the sidecar can resume its operations accurately.

About the Admin Server Sidecar

You can use Admin Server sidecars to perform tasks related to the Administration Server.
  • If the import on install function is enabled, you can check whether all Node Managers involved in the import process are available and registered. If a Node Manager is unavailable, the sidecar waits until it is available. The sidecar runs the upload and import APIs after all the Node Managers are available. It also continues to monitor the import process until it completes successfully.

  • In the event of a failure during the import process, you can diagnose and resolve issues. The sidecar handles error scenarios and marks the import as a dangling request.

  • You can persist the states of the operations to preserve them across pod restarts. This ensures that after a pod restart, the sidecar can resume its operations accurately.

Configuring Sidecars

You can configure the sidecar behavior using the following entries in the respective ConfigMap files:
  • SIDE_CAR_INTERVAL: You can define the frequency at which the sidecar should handle its closed-loop operations. The value is in milliseconds, and the default value is 10000.

  • SIDECAR_NODE_MANAGER_AUTO_REGISTRATION_DISABLE: You can define whether to disable automatic registration of a Node Manager with the Administration Server on startup. The value can be either TRUE or FALSE. The default value is FALSE.

Note:

The names of the ConfigMap files depend on the names of the Node Manager sets, which are defined using the ocomcCore.ocomc.nodeManagerConfigurations.sets key. A sample ConfigMap appears after this note.
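For illustration, the following is a minimal sketch of such a ConfigMap, assuming a set named nm-cc (the metadata.name shown here is hypothetical and depends on your set definitions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nm-cc-config    # hypothetical name derived from the set name
data:
  SIDE_CAR_INTERVAL: "10000"                              # sidecar loop interval, in milliseconds
  SIDECAR_NODE_MANAGER_AUTO_REGISTRATION_DISABLE: "FALSE" # keep automatic registration enabled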

About Data Persistent Volume (PV) Configuration

You can control the sharing of the data Persistent Volume (PV) across Node Managers with more granularity by using the ocomcCore.ocomc.nodeManagerConfigurations.storage.data.scope key. You can set the scope key to one of these values (see the example after this list):
  • Application: The data PV is shared across all applications. All sets and their pods share the same data PV.

  • Set: Each set has a dedicated data PV that is shared across only the pods within that set. The data PV is isolated for each set, meaning no two pods from different sets can access the same data PV.

  • Pod: Each pod has a dedicated data PV. It provides data PV isolation between each pod, regardless of the sets that they belong to.
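For example, to give each set a dedicated data PV, set the key in your override-values.yaml file as follows:

ocomcCore:
  ocomc:
    nodeManagerConfigurations:
      storage:
        data:
          scope: Set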

Offline Mediation Controller Persistent Volume Claim Configuration

Table 4-1 lists the Persistent Volume Claims (PVCs) used by the Offline Mediation Controller server.

Table 4-1 List of PVCs in Offline Mediation Controller Server

Each PVC is listed with its default pod internal file system path.

  • pvc-vol-install-admin-server: home/ocomcuser/install

  • pvc-vol-install-SET_NAME-SET_ORDINAL_INDEX (for example, pvc-vol-install-nm-cc-0): home/ocomcuser/install

  • pvc-vol-keystore: home/ocomcuser/keystore

  • pvc-vol-suspense (optional PVC): home/ocomcuser/suspense

  • pvc-vol-data (Application scoped), pvc-SET_NAME-vol-data (Set scoped; for example, pvc-nm-cc-vol-data), or pvc-SET_NAME-vol-data-SET_NAME-SET_ORDINAL_INDEX (Pod scoped; for example, pvc-nm-cc-vol-data-nm-cc-0): home/ocomcuser/data

  • pvc-vol-external: home/ocomcuser/external

  • pvc-vol-backup: home/ocomcuser/backup

    Note: The pvc-vol-backup PVC is created only when the ocomc.storage.backup.enabled attribute is set to true.

To share these PVCs between Offline Mediation Controller pods, you must use a persistent volume provisioner that:

  • Provides ReadWriteMany access and sharing between the pods

  • Mounts all external volumes with a user (ocomcuser) that has a UID and GID of 1000 and that has full permissions

  • Has its volume reclaim policy set to avoid data and configuration loss in a mounted file system

  • Is configured to share data, external KeyStore volumes, and wallets between Offline Mediation Controller pods and the Administration Client
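As an illustration, an NFS-backed PersistentVolume can meet these requirements. A minimal sketch, assuming a hypothetical NFS server and export path (ownership by the ocomcuser UID and GID of 1000 must be configured on the export itself):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany                        # required for sharing between pods
  persistentVolumeReclaimPolicy: Retain    # avoids data and configuration loss
  nfs:
    server: nfs.example.com                # hypothetical NFS server
    path: /exports/ocomc/data              # hypothetical export path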

You must place all CDR files inside the vol-data PVC and then configure the internal file system path of the vol-data PVC in your Administration Client. The ocomcuser user must have read and write permissions for all CDRs.

You must place all necessary third-party and cartridge JAR files in the home/ocomcuser/external/3rd_Party and home/ocomcuser/external/cartridges directories inside the vol-external PVC, and then restart the pods. After the PVC is mounted, the JAR files are copied to home/ocomcuser/install/ocomc/3rd_Party and home/ocomcuser/install/ocomc/cartridges.

The Offline Mediation Controller wallet files will be created and used through the shared vol-keystore PVC.

You can deploy Node Managers on specific Kubernetes nodes by setting affinity rules in the values.yaml file, as shown in the following example.
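A minimal sketch of such an affinity rule in override-values.yaml, pinning Node Manager pods to a hypothetical worker node (the affinity key path is listed in Table 4-2):

ocomcCore:
  ocomc:
    nodeManagerConfigurations:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - worker-node-1      # hypothetical node name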

Configuring Offline Mediation Controller Services

The Offline Mediation Controller unified Helm chart (oc-cn-ocomc-core-helm-chart) configures and deploys all of your product services. YAML descriptors in the oc-cn-ocomc/templates directory use the oc-cn-ocomc/values.yaml file for most of the values. You can override the values by creating an override-values.yaml file.

The unified Helm chart includes both Offline Mediation Controller Core and REST Services Manager as subcharts. You can use the following keys to toggle deployment of Offline Mediation Controller Core and REST Services Manager by setting their values to either true or false (see the example after this list):

  • Use charts.enableCore to enable Offline Mediation Controller Core.
  • Use charts.enableRSM to enable Offline Mediation Controller REST Services Manager.
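For example, to deploy only the Core services, your override-values.yaml file would contain:

charts:
  enableCore: true
  enableRSM: false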

Table 4-2 lists the keys that directly impact Offline Mediation Controller services. Add these keys to your override-values.yaml file with the same path hierarchy; a combined example appears after the table.

Note:

  • If you are using a Windows-based client, the adminsvrIp, nmExternalPort, adminsvrExternalPort, and adminsvrFirewallPort keys must be set. To connect with the Windows-based client, use external services with a NodePort type. In this case, the adminsvrIp will be the worker node IP. Restart the pod after setting adminsvrIp.

  • If graphical desktop support such as VNC is available on a worker node, the client can be installed on the same worker node in which Administration Server and Node Manager pods are running. In this case, set the service type to ClusterIP and do not set the nmExternalPort, adminsvrExternalPort, and adminsvrFirewallPort keys.

Table 4-2 Offline Mediation Controller Server Keys

Each entry lists the key, its path in the values.yaml file, and a description.

enableCustomizationFileUpload

global

Whether to allow custom file uploads during imports (true) or not (false). The default value is false.

enableTestNodeChain

global

Whether node chain testing is enabled (true) or not (false). The default value is false.

RSMcontainer.imageRepository

global

The repository where the RSM image needs to be pulled from.

RSMcontainer.imagePullPolicy

global

The image pull policy for the RSM image.

RSMcontainer.image

global

The name of the RSM image.

runMigrationDataJob

global.statefulSetUpgrade

Whether to initiate a job for migrating data from an older setup to a new 15.1 setup. The default value is false.

payloadFilePath

global.statefulSetUpgrade

The path to the payload file used to migrate data from an older setup to a new 15.1 setup.

restartCount

ocomcCore.ocomc

Increment the current value by 1 to trigger a restart of all Offline Mediation Controller components. The starting value is 0.

sslEnabled

ocomcCore.ocomc

Whether to enable SSL for secure communication between components (true) or not (false). The default value is true.

forceGenSslcert

ocomcCore.ocomc

Whether to regenerate the SSL certificates for the Administration Server and Node Manager (true) or not (false). The default value is false.

upgradeEnabled

ocomcCore.ocomc

Set to true when using a new version of the Offline Mediation Controller image with an existing installation to trigger the upgrade process. The default value is false.

rsmURL

ocomcCore.ocomc

The URL of the Offline Mediation Controller REST Services Manager for integration. The default value is http://ocomc-rsm:8080.

cartrdigeFolder

ocomcCore.ocomc

The directory where Offline Mediation Controller cartridge packs are installed.

Set this key to /home/ocomcuser/ext/cartridges unless you are creating custom images.

storageClass

ocomcCore.ocomc.storage

The Kubernetes storage class for persistent volumes.

keystore.name

ocomcCore.ocomc.storage

The name of the KeyStore volume used for storing sensitive credentials. The default value is keystore-vol.

external.name

ocomcCore.ocomc.storage

The name of the external volume used for additional storage. The default value is external-vol.

external.capacity

ocomcCore.ocomc.storage

The capacity of the external volume.

backup.enabled

ocomcCore.ocomc.storage

Whether to create a backup PV (true) or not (false).

backup.name

ocomcCore.ocomc.storage

The name of the backup PV.

backup.accessModes

ocomcCore.ocomc.storage

The permission access mode of the backup PV. The default value is ReadWriteMany.

backup.capacity

ocomcCore.ocomc.storage

The capacity of the backup PV.

fsGroup

ocomcCore.ocomc.securityContext

The file system group ID for security contexts.

runAsUser

ocomcCore.ocomc.securityContext

The user ID under which the process runs.

runAsGroup

ocomcCore.ocomc.securityContext

The group ID under which the process runs.

enabled

ocomcCore.ocomc.authentication

Whether to enable authentication for accessing system resources (true) or not (false).

uniPass

ocomcCore.ocomc.secrets

Use this key to apply a uniform password to all Offline Mediation Controller cloud native services, including:
  • Database Schemas
  • Offline Mediation Controller Root Login
  • Oracle Wallets
To override this password for a specific service, specify a different password in the service's key.

Note: Use this key for test or demonstration systems only.

walletPass

ocomcCore.ocomc.secrets

The string password for opening the wallet.

nmKeypass

ocomcCore.ocomc.secrets

The password for the Node Manager domain SSL identity key.

nmKeystorepass

ocomcCore.ocomc.secrets

The password for the Node Manager domain SSL Identity Store.

adminKeypass

ocomcCore.ocomc.secrets

The password for the Administration Server domain SSL Identity Key.

adminKeystorepass

ocomcCore.ocomc.secrets

The password for the Administration Server domain SSL Identity Store.

rsmOAuthToken

ocomcCore.ocomc.secrets

The access token used by Administration Server to communicate with the REST Services Manager when it is running with security enabled.

image.pullPolicy

ocomcCore.ocomc.adminServerConfigurations

The pull policy of the Administration Server container image.

image.pullSecret

ocomcCore.ocomc.adminServerConfigurations

The location of your imagePullSecrets, which stores the credentials (or Secret) for accessing your private Docker registry.

image.repository

ocomcCore.ocomc.adminServerConfigurations

The repository location for the Administration Server container image.

image.name

ocomcCore.ocomc.adminServerConfigurations

The name of your Administration Server container image.

restartCount

ocomcCore.ocomc.adminServerConfigurations

Increment the current value by 1 to trigger a restart of the Administration Server. The starting value is 0.

log.level

ocomcCore.ocomc.adminServerConfigurations

The logging level for the Administration Server. There are three possible levels:

  • INFO
  • DEBUG
  • WARN

The default value is INFO.

log.pattern

ocomcCore.ocomc.adminServerConfigurations

The pattern in which log messages are generated.

clientTimeout

ocomcCore.ocomc.adminServerConfigurations

The time to wait for Kubernetes commands to complete.

type

ocomcCore.ocomc.adminServerConfigurations.service

The service type: ClusterIP, NodePort, or LoadBalancer.

appPort

ocomcCore.ocomc.adminServerConfigurations.service

The application port for the Administration Server.

firewallPort

ocomcCore.ocomc.adminServerConfigurations.service

The firewall port for the Administration Server.

callbackPort

ocomcCore.ocomc.adminServerConfigurations.service

The callback port for the Administration Server.

name

ocomcCore.ocomc.adminServerConfigurations.storage.install

The name of the install volume used for the Administration Server installation.

capacity

ocomcCore.ocomc.adminServerConfigurations.storage.install

The storage capacity allocated for the Administration Server install volume, such as 1Gi.

enabled

ocomcCore.ocomc.adminServerConfigurations.import

Whether to enable the feature that triggers import on initial setup of Offline Mediation Controller through the REST Services Manager.

mappingFile

ocomcCore.ocomc.adminServerConfigurations.import

The path to the mapping file for import, if enabled.

gcOptions

ocomcCore.ocomc.adminServerConfigurations

The garbage collection (GC) options for the Administration Server.

memoryOptions

ocomcCore.ocomc.adminServerConfigurations

The memory-related options to pass to the Administration Server process.

eceIntegration.*

ocomcCore.ocomc.nodeManagerConfigurations

The details for connecting to ECE. Add these keys only if you are integrating Offline Mediation Controller with ECE:

  • enabled: Specifies that integration with ECE is enabled.

  • image.repository: The Docker registry URL for the ECE image.
  • image.name: The name of the ECE image.
  • image.pullPolicy: The pull policy of the ECE image. The default value is IfNotPresent, which specifies not to pull the image if it's already present. Applicable values are IfNotPresent and Always.
  • clusterName: The ECE cluster name. The default is BRM.
  • persistenceEnabled: Whether ECE will persist its cache data in the Oracle database: true or false. The default is false.
  • coherenceClusterPort: The value indicating the Coherence port used by the ECE component.

data.name

ocomcCore.ocomc.nodeManagerConfigurations.storage

The name of the volume for data storage.

data.accessModes

ocomcCore.ocomc.nodeManagerConfigurations.storage

The permission access mode of the data PV.

data.capacity

ocomcCore.ocomc.nodeManagerConfigurations.storage

The capacity of the volume.

data.scope

ocomcCore.ocomc.nodeManagerConfigurations.storage

The scope of the volume. Possible values are:
  • Application: Only one data PV is deployed, and it is shared by all Node Manager pods.

  • Set: Each Node Manager set has a dedicated data PV (all pods in the set share the same data PV).

  • Pod: Each Node Manager pod gets a dedicated data PV.

replication.enabled

ocomcCore.ocomc.nodeManagerConfigurations.scaling

Whether to enable auto-replication of Node Manager pods upon scaling (true) or not (false).

createServiceAccount

ocomcCore.ocomc.nodeManagerConfigurations.scaling.hpa.serviceAccount

Whether to create a service account (true) or not (false).

serviceAccount.name

ocomcCore.ocomc.nodeManagerConfigurations.scaling.hpa

The service account to be used by Offline Mediation Controller. If the service account does not exist, set the createServiceAccount key to true.

serviceAccount.enabled

ocomcCore.ocomc.nodeManagerConfigurations.scaling.hpa

Whether to enable the Kubernetes Horizontal Pod Autoscaler (HPA) for dynamic scaling of the Node Manager.

hpaScaleDownEnabled

ocomcCore.ocomc.nodeManagerConfigurations.scaling.hpa

Whether to allow HPA to scale down pods when the relevant metrics fall below the specified threshold (true) or not (false).

restartCount

ocomcCore.ocomc.nodeManagerConfigurations

Increment the current value by 1 to trigger a restart of all Node Manager components. The starting value is 0.

level

ocomcCore.ocomc.nodeManagerConfigurations.log

The logging level for Node Managers. There are three possible levels:

  • INFO
  • DEBUG
  • WARN

The default value is INFO.

jmxEnabled

ocomcCore.ocomc.nodeManagerConfigurations

Whether to enable JMX monitoring for Node Manager diagnostics (true) or not (false).

jmxPort

ocomcCore.ocomc.nodeManagerConfigurations

The port used for JMX monitoring connections.

cpu

ocomcCore.ocomc.nodeManagerConfigurations.resources.requests

The minimum CPU resources allocated for Node Manager pods.

memory

ocomcCore.ocomc.nodeManagerConfigurations.resources.requests

The minimum memory allocated for Node Manager pods.

cpu

ocomcCore.ocomc.nodeManagerConfigurations.resources.limits

The maximum CPU resources for Node Manager pods.

memory

ocomcCore.ocomc.nodeManagerConfigurations.resources.limits

The maximum memory limit for Node Manager pods.

serviceMonitor.enabled

ocomcCore.ocomc.nodeManagerConfigurations

Whether to deploy a service monitor (true) or not (false).

serviceMonitor.interval

ocomcCore.ocomc.nodeManagerConfigurations

The scrape interval for the service monitor.

serviceMonitor.labels.app

ocomcCore.ocomc.nodeManagerConfigurations

The app label to add to the service monitor.

serviceMonitor.labels.release

ocomcCore.ocomc.nodeManagerConfigurations

The release label to add to the service monitor.

metrics.enabled

ocomcCore.ocomc.nodeManagerConfigurations

Whether to enable metrics (true) or not (false).

suspenseManagementIntegration.*

ocomcCore.ocomc.nodeManagerConfigurations

The details for integrating with suspense management. Add these keys only if you are integrating Offline Mediation Controller with suspense management:
  • enabled: Whether to enable or disable suspense management integration.

  • createPV: Whether to enable or disable PV creation for Suspense Management. This determines if Offline Mediation Controller should use an existing shared suspense PV.

  • storage.suspense.name: The name of the volume for suspense storage.

  • storage.suspense.accessModes: The access modes for the suspense storage volume.

  • storage.suspense.capacity: The storage capacity allocated for the suspense volume.

external.name

ocomcCore.ocomc.nodeManagerConfigurations.storage

The name of the volume for external storage.

external.accessModes

ocomcCore.ocomc.nodeManagerConfigurations.storage

The access modes for the external storage.

external.capacity

ocomcCore.ocomc.nodeManagerConfigurations.storage

The storage capacity allocated for the external volume.

jvmOpts

ocomcCore.ocomc.nodeManagerConfigurations

The JVM options for the Node Manager.

affinity

ocomcCore.ocomc.nodeManagerConfigurations

The Node Manager affinity rules for pod scheduling.

sets

ocomcCore.ocomc.nodeManagerConfigurations

The Node Manager sets to deploy. Each set has a dedicated StatefulSet.

type

ocomcCore.ocomc.nodeManagerConfigurations.service

The type of Kubernetes service used to expose the Node Manager.

port

ocomcCore.ocomc.nodeManagerConfigurations.service

The port number exposed by the Node Manager service inside the cluster.

nodePort

ocomcCore.ocomc.nodeManagerConfigurations.service

The NodePort value for exposing the Node Manager service externally. Applies only if service.type is set to NodePort.

rdm.threadCount

ocomcCore.ocomc.nodeManagerConfigurations.service

The number of RDM threads for the Node Manager.
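To illustrate the path hierarchy, the following is a minimal override-values.yaml sketch that combines several keys from Table 4-2. All values are illustrative; replace them with values for your environment:

ocomcCore:
  ocomc:
    sslEnabled: true
    restartCount: 0
    storage:
      storageClass: standard      # assumption: use your cluster's provisioner class
      backup:
        enabled: true
        capacity: 5Gi
    adminServerConfigurations:
      log:
        level: INFO
      service:
        type: ClusterIP
    nodeManagerConfigurations:
      log:
        level: INFO
      scaling:
        replication:
          enabled: true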

Deploying Offline Mediation Controller Services

To deploy Offline Mediation Controller services on your cloud native environment, do this:

Note:

To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must use the same namespace.

  1. Validate the content of your charts by entering this command from the helmcharts directory:

    helm lint --strict oc-cn-ocomc-core-helm-chart

    You'll see this if the command completes successfully:

    1 chart(s) linted, no failures
  2. Run the helm install command from the helmcharts directory:

    helm install ReleaseName oc-cn-ocomc-core-helm-chart --namespace NameSpace --values OverrideValuesFile

    where:

    • ReleaseName is the release name, which is used to track this installation instance.

    • NameSpace is the namespace in which to create Offline Mediation Controller Kubernetes objects. To integrate the Offline Mediation Controller cloud native deployment with the ECE and BRM cloud native deployments, they must use the same namespace.

    • OverrideValuesFile is the path to a YAML file that overrides the default configurations in the chart's values.yaml file.

    For example, if the override-values.yaml file is in the helmcharts directory, the command for installing Offline Mediation Controller cloud native services would be:

    helm install ocomc oc-cn-ocomc-core-helm-chart --namespace ocgbu --values override-values.yaml
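You can apply later configuration changes with helm upgrade against the same release. For example, after incrementing ocomcCore.ocomc.restartCount in override-values.yaml to restart all components:

helm upgrade ocomc oc-cn-ocomc-core-helm-chart --namespace ocgbu --values override-values.yaml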

Installing the Offline Mediation Controller Web-Based UI

Offline Mediation Designer is a web-based UI that runs on top of Offline Mediation Controller. You can use it to create, design, and manage nodes, node chains, and Node Managers within mediation processes.

Prerequisites

Before deploying the Offline Mediation Designer UI, you must first install the software described in the following sections.

About Installing an Ingress Controller

You use ingress controllers to expose services outside the Kubernetes cluster, enabling clients to communicate with Offline Mediation Controller cloud native. Ingress controllers route external traffic to services within the Kubernetes cluster using the rules you define.

The Offline Mediation Controller cloud native deployment package includes a sample NGINX Ingress Controller (oc-cn-ocomc-nginx-ingress-controller-sample-helm-chart-15.1.0.x.0.tgz) that you can install and configure for the Offline Mediation Designer UI. The archive file includes a Helm chart and a README file explaining how to configure the NGINX Controller for your system.

For information about NGINX Ingress Controller, see the NGINX documentation: https://docs.nginx.com/nginx-ingress-controller/.

About Installing the Relying Party

Relying Party applications authenticate users by working with a trusted Identity Provider, such as Oracle Identity Cloud Service (IDCS). The relying party delegates user authentication to the identity provider, which can be an OpenID Connect provider, a Security Assertion Markup Language (SAML) identity provider, or any other authentication service.

The Offline Mediation Controller cloud native deployment package includes a sample Apache Relying Party (oc-cn-ocomc-apache-relying-party-sample-helm-chart-15.1.0.x.0.tgz) that you can install and configure for the Offline Mediation Designer UI. The archive file includes a Helm chart and a README file explaining how to configure the software for your system.

About the Offline Mediation Designer UI Helm Chart

The Offline Mediation Controller cloud native deployment package includes the oc-cn-ocomc-mediation-ui-helm-chart-15.1.0.x.0.tgz file. It is a Helm chart archive used for deploying the Offline Mediation Designer UI on a Kubernetes cluster. Extract the Helm chart and files from the archive by entering this command:
tar zxvf oc-cn-ocomc-mediation-ui-helm-chart-15.1.0.x.0.tgz
The following files and directories are extracted:
profiles/
profiles/client-side-auth-idcs.yaml
profiles/client-side-auth-oam.yaml
profiles/deploy-oci.yaml
profiles/relying-party.yaml
mediation-ui-charts.tgz
The profiles directory contains these sample YAML files that you can copy and modify to meet your configuration requirements:
  • relying-party.yaml: Use this file for deploying the Offline Mediation Designer UI with client-side authentication disabled, meaning that the UI sits behind a relying party.

  • client-side-auth-idcs.yaml: Use this file as a reference for deploying the Offline Mediation Designer UI with client-side authorization enabled and the API secured by IDCS.

  • client-side-auth-oam.yaml: Use this file as a reference for deploying the Offline Mediation Designer UI with client-side authorization enabled and the API secured by Oracle Access Management.

Table 4-3 lists the Offline Mediation Designer UI keys referenced in these YAML files; a sample profile appears after the table.

Table 4-3 List of UI keys

Each entry lists the key followed by its description.

security.clientSideAuthEnabled

Controls whether client-side authentication is enabled (true) or not (false). Set it to false if the Offline Mediation Designer UI is deployed in conjunction with a relying party.

Note: When set to false, it is not necessary to set the authorizationURL, authorizationEndpoint, clientId, scope, redirectUri, and postLogoutRedirectUri keys. This configuration is instead managed within the relying party service.

security.authorizationURL

(Only used when security.clientSideAuthEnabled is set to true)

The URL of the IdP (Identity Provider). Different IdPs have different values for the URL:
  • For Oracle Access Management: https://OAMHostname:Port/oauth2/rest
  • For IDCS: https://IDCSidentifier.identity.oraclecloud.com/oauth2/v1

  • For other IdPs: http://hostname:port/realms/Realm/protocol/openid-connect

security.authorizationEndpoint

(Only used when security.clientSideAuthEnabled is set to true)

The name of the endpoint for initiating the authorization flow, which is appended to the URL specified in authorizationURL. For Oracle Access Management and IDCS, the value is authorize. Other IdPs may have different values, such as auth.

During the authorization flow, a POST call is made to https://IDCSidentifier.identity.oraclecloud.com/oauth2/v1/authorize for IDCS and https://OAMHostname:Port/oauth2/rest/authorize for Oracle Access Management.

security.logoutEndpoint

The value for the logout endpoint to initiate the logout process. Typically, this is the user logout endpoint configured in the IDP.

security.clientId

(Only used when security.clientSideAuthEnabled is set to true)

The unique identifier of the client application requesting authorization. This must match the value of the client created in the IDP.

security.scope

(Only used when security.clientSideAuthEnabled is set to true)

The permissions being requested.

security.redirectUri

(Only used when security.clientSideAuthEnabled is set to true)

The URI where the user is redirected after authorization.

Note: The redirectUri key must match one of the values for the redirectURIs in the client created in the IDP. Typically, this is the URL of the mediation UI.

security.postLogoutRedirectUri

(Only used when security.clientSideAuthEnabled is set to true)

The URI where the user is redirected after log out.

security.mediationUri

The URL of the Offline Mediation Controller API service. This is the URL that the UI uses to make calls to the API, so it must be accessible from the browser. Typically, this should point to the ingress controller URL, because all calls from the UI should be made through the ingress controller and forwarded accordingly.
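For illustration, the following is a minimal sketch of the security section of a client-side-auth profile for IDCS. All values are hypothetical placeholders for your environment:

security:
  clientSideAuthEnabled: true
  authorizationURL: "https://idcs-abcd1234.identity.oraclecloud.com/oauth2/v1"  # hypothetical IDCS identifier
  authorizationEndpoint: "authorize"
  logoutEndpoint: "logout"                                # assumption: match your IdP's logout endpoint
  clientId: "mediation-ui"                                # hypothetical client created in the IdP
  scope: "openid"                                         # assumption: the scopes your IdP requires
  redirectUri: "https://hostname/webApps/mediation/"      # typically the mediation UI URL
  postLogoutRedirectUri: "https://hostname/webApps/mediation/"
  mediationUri: "https://hostname/"                       # the ingress controller URL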

Before proceeding to deploy the web-based UI, complete the following steps:
  1. Make a copy of the appropriate YAML file you wish to use and update it according to your configuration requirements. For example, if you want to use the relying-party.yaml file, run the following command:
    cp profiles/relying-party.yaml my-custom-profile.yaml
  2. Make a copy of the deployment configuration file using the following command:
    cp profiles/deploy-oci.yaml deploy-mediation-ui.yaml 
  3. If you are using a private registry, update the deploy-mediation-ui.yaml file with image registry and secret details. For example:
    image:
      repository: my-docker-registry
    
    imagePullSecret:
      imagePullSecrets:
        - name: my-docker-secret
    
    service:
      type: NodePort
      nodePort: 31503

Deploying the Offline Mediation Designer UI

To deploy the Offline Mediation Designer UI in your cloud native environment, do the following:

  1. Validate the content of your charts by entering this command from the helmcharts directory:
    helm lint --strict mediation-ui-charts.tgz
  2. Run the helm install command from the helmcharts directory:
    helm -n namespace install mediation-ui mediation-ui-charts.tgz -f deploy-mediation-ui.yaml -f my-custom-profile.yaml

    where namespace is the namespace in which to create the Offline Mediation Controller Kubernetes objects.

Afterward, you can access the Offline Mediation Designer UI at the following URL:

https://hostname/webApps/mediation/

where hostname is the host name of the configured ingress controller deployment.