2 About the Unified Inventory and Topology Toolkit

This chapter describes the components required for Unified Inventory and Topology.

Unified Inventory and Topology Toolkit

From Oracle Software Delivery Cloud, download the following:

  • Oracle Communications Unified Inventory Management Cloud Native Toolkit
  • Oracle Communications Unified Inventory Management Cloud Native Image Builder
  • Oracle Communications Unified Inventory Management ATA Image Builder
  • (Optional) Oracle Communications Unified Inventory Management OHS Image Builder
  • Oracle Communications Unified Inventory Management Common Toolkit
  • Oracle Communications Unified Inventory Management SmartSearch Image
  • Oracle Communications Unified Inventory Management Authorization Image Builder

Perform the following tasks:

  1. Copy the downloaded archives into a directory named workspace and extract the archives.

  2. Export the path of the extracted directory to the WORKSPACEDIR environment variable.

  3. On Oracle Linux, where Kubernetes is hosted, download and extract the tar archives on each host that has connectivity to the Kubernetes cluster.

  4. Alternatively, on OKE, for an environment where Kubernetes is running, extract the contents of the tar archives on each OKE client host. The OKE client host is the bastion host that is set up to communicate with the OKE cluster.

    $ mkdir workspace
    $ export WORKSPACEDIR=$(pwd)/workspace
    # Extract the UIM Image Builder
    $ tar -xf $WORKSPACEDIR/uim-image-builder.tar.gz --directory workspace
    # Extract the UIM Cloud Native Toolkit
    $ tar -xf $WORKSPACEDIR/uim-cntk.tar.gz --directory workspace
    # (Optional) Extract the OHS Builder; required only if you plan to install OAM
    $ tar -xf $WORKSPACEDIR/ohs-builder.tar.gz --directory workspace
    # Extract the ATA Builder
    $ tar -xf $WORKSPACEDIR/ata-builder.tar.gz --directory workspace
    # Extract the Authorization Builder
    $ tar -xf $WORKSPACEDIR/authorization-builder.tar.gz --directory workspace
    # Extract the Common Toolkit
    $ tar -xf $WORKSPACEDIR/common-cntk.tar.gz --directory workspace
    $ export COMMON_CNTK=$WORKSPACEDIR/common-cntk
    $ export UIM_CNTK=$WORKSPACEDIR/uim-cntk
    

Assembling the Specifications

To assemble the specifications:

  1. Create a directory (either on your local machine or in the version control system where the deployment pipelines are available) to maintain the specification files needed to deploy the services, and export its path to the SPEC_PATH environment variable. The later steps copy files into subdirectories of this path; a sketch that pre-creates those subdirectories follows this procedure.
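    For example, a minimal sketch (the uim-specs directory name is illustrative):
    mkdir -p $(pwd)/uim-specs
    export SPEC_PATH=$(pwd)/uim-specs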
  2. Copy the Strimzi Operator deployment specification file (strimzi-operator-override-values.yaml) to your $SPEC_PATH/<STRIMZI_PROJECT> directory, where <STRIMZI_PROJECT> is the Kubernetes namespace in which the Strimzi Operator is to be deployed or installed.
    cp $COMMON_CNTK/samples/strimzi-operator-override-values.yaml $SPEC_PATH/<STRIMZI_PROJECT>/strimzi-operator-override-values.yaml
  3. Copy the OpenSearch deployment specification files (os_board_values.yaml and os_engine_values.yaml) to your $SPEC_PATH/opensearch/ directory as follows:
    cp $COMMON_CNTK/samples/charts/opensearch/os_board_values.yaml $SPEC_PATH/opensearch/
    cp $COMMON_CNTK/samples/charts/opensearch/os_engine_values.yaml $SPEC_PATH/opensearch/
  4. Copy the microservices deployment application specification file (applications.yaml) to your $SPEC_PATH/<PROJECT>/<INSTANCE> directory. <PROJECT> is the Kubernetes namespace where the services are deployed or installed. <INSTANCE> is the unique identifier for the deployed services; it is one of the strings concatenated into the generated service names.
    cp $COMMON_CNTK/samples/applications.yaml $SPEC_PATH/<PROJECT>/<INSTANCE>/applications.yaml
    # For development, use applications-dev.yaml, which requests minimal virtual resources
    cp $COMMON_CNTK/samples/applications-dev.yaml $SPEC_PATH/<PROJECT>/<INSTANCE>/applications-dev.yaml
  5. Copy the microservices database specification file (database.yaml) to your $SPEC_PATH/<PROJECT>/<INSTANCE> directory.
    cp $COMMON_CNTK/samples/database.yaml $SPEC_PATH/<PROJECT>/<INSTANCE>/database.yaml
  6. Copy other specification files as required:
    • Persistent volumes and persistent volume claims files from $COMMON_CNTK/samples/nfs
    • Role and role bindings from $COMMON_CNTK/samples/rbac
    • Credential files from $COMMON_CNTK/samples/credentials
  7. Copy the common configuration file to $SPEC_PATH/<PROJECT>/<INSTANCE>/common/common-config.yaml:
    cp $COMMON_CNTK/samples/credentials/common-config.yaml $SPEC_PATH/<PROJECT>/<INSTANCE>/common/common-config.yaml
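
The cp commands in this procedure assume that the destination directories under $SPEC_PATH already exist. A minimal sketch that pre-creates them, as mentioned in step 1 (substitute your own namespace and instance values):

    mkdir -p $SPEC_PATH/<STRIMZI_PROJECT>
    mkdir -p $SPEC_PATH/opensearch
    mkdir -p $SPEC_PATH/<PROJECT>/<INSTANCE>/common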

Image Builders

The following image builders are required to build the corresponding services for an end-to-end integrated environment:

  • UIM Image Builder: Includes the uim-image-builder.tar.gz archive, which is required to build the UIM and UIM DB Installer images. See "Creating the UIM Cloud Native Images" in UIM Cloud Native Deployment Guide for more information.
  • (Optional) OHS Builder: Includes ohs-builder.tar.gz, which is required to build the OHS image. See "Building the OHS Image" for more information.
  • Authorization Builder: Includes authorization-builder.tar.gz, which is required to build the Authorization images. For more information, see "Creating Authorization Images".
  • ATA Builder: Includes ata-builder.tar.gz, which is required to build the ATA API, ATA UI, ATA PGX, ATA Consumer, and ATA DB Installer images.

All builder toolkits include manifest files and scripts to build the images.

About the Manifest File

The manifest file for each service is located at $WORKSPACEDIR/<service-builder>/bin/<service>_manifest.yaml. The manifest file describes the input that goes into the service images and is consumed by the image build process. The default configuration in the latest manifest file provides all of the components necessary for creating the service images. A service can be ATA, Authorization, SmartSearch, OpenSearch, UIM, or OHS.

You can also customize the manifest file. This enables you to:

  • Specify any Linux image as the base, as long as it is binary-compatible with Oracle Linux.
  • Upgrade the Oracle Enterprise Linux version to a newer version to uptake a quarterly CPU.
  • Upgrade the JDK version to a newer JDK version to uptake a quarterly CPU.
  • Choose a different userid and groupid for the oracle:oracle user and group that the image specifies. The default is 1000:1000.

Note:

The schemaVersion and date parameters are maintained by Oracle. Do not modify these parameters. Version numbers provided here are only examples. The manifest file specifies the actual versions that Oracle recommends.

The manifest file includes the following sections:

  • Service Base Image: The Service Base image is a necessary building block of the final service container images. However, it is not required for creating or managing any service instances.

    Linux parameter: The Linux parameter specifies the Linux image to be used as the base Docker or Podman image. The version is the two-digit version from /etc/redhat-release:

    linux:
        vendor: Oracle
        version: 8-slim
        image: <container>/os/oraclelinux:8-slim

    The vendor and version details are used for validation while an image is being built and for querying at run time.

    Note:

    To troubleshoot issues, Oracle support requires you to provide these details in the manifest file used to build the image.
  • The userGroup parameter specifies the default userId and groupId:
    userGroup:
      username: <username>
      userid: <userID>
      groupname: <groupname>
      groupid: <groupID>
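    For example, a minimal sketch that keeps the oracle user and group names but assigns non-default IDs (the 1001 values are illustrative; the default is 1000:1000):
    userGroup:
      username: oracle
      userid: 1001
      groupname: oracle
      groupid: 1001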
    
  • The jdk parameter specifies the JDK vendor, version, and staging path:
    jdk:
        vendor: Oracle
        version: <jdk_version>
        path: $CN_BUILDER_STAGING/downloads/java/jdk-<jdk_version>_linux-x64_bin.tar.gz
    
  • The Tomcat parameter specifies the Tomcat version and its staging path.

    Note:

    This is applicable only for the ATA service.
    tomcat:
      version: <tomcat_version>
      path: $CN_BUILDER_STAGING/downloads/tomcat/tomcat-<tomcat_version>.tar.gz
    
  • The serviceImage parameter, where tag specifies the tag name of the service image:
    serviceImage:
      tag: latest
    

Note:

See "UIM Software Compatibility" in UIM Compatibility Matrix for software versions.

Deployment Toolkits

The following toolkits are required to deploy the services for an end-to-end integrated environment:

  • UIM Cloud Native Toolkit: Includes the uim-cntk.tar.gz file that is required to deploy UIM in a cloud native environment. See "Creating a Basic UIM Cloud Native Instance" in UIM Cloud Native Deployment Guide for more information.
  • Common Cloud Native Toolkit: Includes the common-cntk.tar.gz file that is required to deploy the OAM (optional), Authorization, ATA, SmartSearch, OpenSearch, and Message Bus services in a cloud native environment.

Common Cloud Native Toolkit

The Common cloud native toolkit (Common CNTK) includes:

  • Helm charts to manage the ATA, Common Authentication (optional), Authorization, SmartSearch, OpenSearch, and Message Bus services.
  • Scripts to manage secrets for the services.
  • Scripts to manage schemas for the services.
  • Scripts to create, update, and delete the ATA and Message Bus services.
  • Scripts to create and delete the Common Authentication service.
  • Sample PV and PVC YAML files to create persistent volumes and persistent volume claims.
  • Sample charts to install Traefik.
  • Scripts to register and unregister the namespaces with the Traefik and Strimzi operators.
  • The applications.yaml and database.yaml files that provide the required configuration for the services, which can be used for a production environment.
  • The applications-dev.yaml file that contains the required configuration for the services which can be used for a development environment.
  • The strimzi-operator-override-values.yaml file that enables you to override the configuration for deploying the Strimzi Operator, which is used by the Message Bus service.

The applications.yaml and database.yaml files have common values that are applicable to all services in Common CNTK, along with values that are applicable to specific services.

For customized configurations to override the default values, update the values under the specific application sections in $SPEC_PATH/<PROJECT>/<INSTANCE>/applications.yaml.

While running the scripts, provide the project and instance values, where project indicates the namespace of the Kubernetes environment where the service is deployed and instance is the identifier of the corresponding service instance when multiple instances are created within the same namespace.
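For example, a hypothetical invocation pattern (the script name is a placeholder and the -p and -i options are illustrative; check each script's usage text in the toolkit for its actual options):

    # -p: project (Kubernetes namespace), -i: instance identifier
    $COMMON_CNTK/scripts/<script-name>.sh -p sr -i quick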

Note:

As multiple instances of Message Bus cannot exist in the same namespace, only one instance is created for all services within the same namespace.

In the examples for creating a basic instance of all these services, the project name used is sr and the instance name is quick.

Note:

  • Project and Instance names must not contain any special characters.
  • There are common values specified in the applications.yaml and database.yaml files for the services. To override a common value, specify that value under the chart name of the specific service. If the value under the chart is empty, the common value is used, as illustrated in the sketch after this note.
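
For example, a minimal sketch of this precedence in applications.yaml (the replicaCount key and the <chart-name> placeholder are illustrative, not confirmed keys from the sample file):

    # Common value, applicable to all services
    replicaCount: 1
    <chart-name>:
      # Overrides the common value for this service only
      replicaCount: 2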

Deploying the Services

You must deploy and configure all services in the following sequence:

  1. (Optional) Deploy Authentication Service (OAM along with OHS).

    Note:

    The Authentication service needs to be deployed only if you do not have an Identity Provider that supports the SAML 2.0 and OIDC or OAuth 2.0 protocols.
  2. Deploy Authorization service.
  3. Deploy Message Bus.
  4. Deploy OpenSearch.
  5. Deploy SmartSearch.
  6. Deploy UIM (traditional or cloud native).
  7. Configure Traditional UIM with Message Bus and ATA, and restart UIM. See "Setting System Properties" in UIM System Administrator’s Guide, for more information.
  8. (Optional) Configure OAM for ATA client creation.
  9. Deploy ATA.

Note:

Ensure that each individual service is deployed successfully and verified in the order given above, as there are dependencies between these services. For a production instance, ensure that Message Bus is set up with at least 3 replicas for the Kafka cluster to achieve high availability.
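
A generic way to verify each deployment before moving to the next one (a standard kubectl check; pod names vary by service):

    # All pods in the project namespace should reach the Running or Completed status
    kubectl get pods -n sr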

Setting Up Prometheus and Grafana

Message Bus has been tested with the Prometheus and Grafana servers installed and configured using Helm charts.
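
This guide does not mandate specific charts. As one illustrative setup, assuming the publicly available community Helm charts and an arbitrary monitoring namespace:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    helm install prometheus prometheus-community/prometheus --namespace monitoring --create-namespace
    helm install grafana grafana/grafana --namespace monitoring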