About the Unified Inventory and Topology Toolkit

This chapter describes the components required for Unified Inventory and Topology.

Unified Inventory and Topology Toolkit

From Oracle Software Delivery Cloud, download the following:

  • Oracle Communications Unified Inventory Management Cloud Native Toolkit
  • Oracle Communications Unified Inventory Management Cloud Native Image Builder
  • Oracle Communications Unified Inventory Management UTIA Image Builder
  • (Optional) Oracle Communications Unified Inventory Management OHS Image Builder
  • Oracle Communications Unified Inventory Management Common Toolkit

Perform the following tasks:

  1. Copy the downloaded archives into a workspace directory and extract them.

  2. Export the extracted path to the WORKSPACEDIR environment variable.

  3. On Oracle Linux, where Kubernetes is hosted, download and extract the tar archives on each host that has connectivity to the Kubernetes cluster.

  4. Alternatively, for environments running on OKE, extract the contents of the tar archives on each OKE client host. The OKE client host is the bastion host that is set up to communicate with the OKE cluster.

    $ mkdir workspace
    $ export WORKSPACEDIR=$(pwd)/workspace
    # Extract the UIM Image Builder
    $ tar -xf $WORKSPACEDIR/uim-image-builder.tar.gz --directory workspace
    # Extract the UIM Cloud Native Toolkit
    $ tar -xf $WORKSPACEDIR/uim-cntk.tar.gz --directory workspace
    # Extract the OHS Image Builder
    $ tar -xf $WORKSPACEDIR/ohs-builder.tar.gz --directory workspace
    # Extract the UTIA Image Builder
    $ tar -xf $WORKSPACEDIR/unified-topology-builder.tar.gz --directory workspace
    # Extract the Common Toolkit
    $ tar -xf $WORKSPACEDIR/common-cntk.tar.gz --directory workspace
    $ export COMMON_CNTK=$WORKSPACEDIR/common-cntk

Assembling the Specifications

To assemble the specifications:

  1. Create a directory (either on the local machine or in a version control system where the deployment pipelines are available) to maintain the specification files needed to deploy the services. Export the directory to the SPEC_PATH environment variable.
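    For example, assuming a local directory named uit-specs (the directory name is illustrative):
    mkdir -p $(pwd)/uit-specs
    export SPEC_PATH=$(pwd)/uit-specs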
  2. Copy the Strimzi Operator deployment specification file (strimzi-operator-override-values.yaml) to your $SPEC_PATH/<STRIMZI_PROJECT>.
    cp $COMMON_CNTK/samples/strimzi-operator-override-values.yaml $SPEC_PATH/<STRIMZI_PROJECT>/strimzi-operator-override-values.yaml
  3. Copy the Micro Services deployment application specification file (applications.yaml) to your $SPEC_PATH/<PROJECT>/<INSTANCE>.
    cp $COMMON_CNTK/samples/applications.yaml $SPEC_PATH/<PROJECT>/<INSTANCE>/applications.yaml
  4. Copy the Micro Services database specification file (database.yaml) to your $SPEC_PATH/<PROJECT>/<INSTANCE>. For a development environment, also copy the applications-dev.yaml file.
    cp $COMMON_CNTK/samples/database.yaml $SPEC_PATH/<PROJECT>/<INSTANCE>/database.yaml
    cp $COMMON_CNTK/samples/applications-dev.yaml $SPEC_PATH/<PROJECT>/<INSTANCE>/applications-dev.yaml
  5. Copy other specification files as required:
    • Persistent volumes and persistent volume claims files from $COMMON_CNTK/samples/nfs
    • Role and role bindings from $COMMON_CNTK/samples/rbac
    • Credential files from $COMMON_CNTK/samples/credentials
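After these steps, the specification directory has a layout similar to the following (shown for project sr, instance quick, and an illustrative Strimzi project named strimzi):

    $SPEC_PATH/strimzi/strimzi-operator-override-values.yaml
    $SPEC_PATH/sr/quick/applications.yaml
    $SPEC_PATH/sr/quick/applications-dev.yaml
    $SPEC_PATH/sr/quick/database.yaml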

Image Builders

The following image builders are required to build the corresponding services for an end-to-end integrated environment:

  • UIM Image Builder: This includes the uim-image-builder.tar.gz archive, which is required to build the UIM and UIM DB Installer images. See Creating the UIM Cloud Native Images in UIM Cloud Native Deployment Guide for more information.
  • (Optional) OHS Builder: This includes ohs-builder.tar.gz, which is required to build the OHS image. See "Building the OHS Image" for more information.
  • UTIA Builder: This includes unified-topology-builder.tar.gz, which is required to build the Unified Topology API, Unified Topology UI, Unified Topology PGX, Unified Topology Consumer, and Unified Topology DB Installer images.

All builder toolkits include manifest files and scripts to build the images.

About the Manifest File

The manifest file is located at $WORKSPACEDIR/<service-builder>/bin/<service>_manifest.yaml. It describes the input that goes into the service images and is consumed by the image build process. The default configuration in the latest manifest file provides all the components necessary for creating the service images. A service can be OHS, UTIA, or UIM.

You can also customize the manifest file. This enables you to:

  • Specify any Linux image as the base, as long as it is binary-compatible with Oracle Linux.
  • Upgrade the Oracle Enterprise Linux version to a newer version to uptake a quarterly CPU.
  • Upgrade the JDK version to a newer JDK version to uptake a quarterly CPU.
  • Choose a different userid and groupid for the oracle:oracle user:group that the image specifies. The default is 1000:1000.

Note:

The schemaVersion and date parameters are maintained by Oracle. Do not modify these parameters. Version numbers provided here are only examples. The manifest file specifies the actual versions that Oracle recommends.

The manifest file contains various sections, such as:

  • Service Base Image: The Service Base image is a necessary building block of the final service container images. However, it is not required by the service to create or manage any service instances.

    Linux parameter: The Linux parameter specifies the base Linux image to be used as the base Docker or Podman image. The version is the two-digit version from /etc/redhat-release:

    linux:
        vendor: Oracle
        version: 8-slim
        image: <container>/os/oraclelinux:8-slim

    The vendor and version details are used for validation while an image is being built and for queries at run time.
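    For example, you can read these details from a container based on the manifest's base image. This sketch assumes Podman and the image path shown above; the output varies by release:

    podman run --rm <container>/os/oraclelinux:8-slim cat /etc/redhat-release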

    Note:

    To troubleshoot issues, Oracle support requires you to provide these details in the manifest file used to build the image.
  • The userGroup parameter that specifies the default userId and groupId:
    userGroup:
      username: <username>
      userid: <userID>
      groupname: <groupname>
      groupid: <groupID>
    
  • The jdk parameter that specifies the JDK vendor, version, and the staging path:
    jdk:
        vendor: Oracle
        version: <jdk_version>
        path: $CN_BUILDER_STAGING/downloads/java/jdk-<jdk_version>_linux-x64_bin.tar.gz
    
  • The Tomcat parameter specifies the Tomcat version and its staging path.

    Note:

    This is applicable only for the UTIA service.
    tomcat:
      version: <tomcat_version>
      path: $CN_BUILDER_STAGING/downloads/tomcat/tomcat-<tomcat_version>.tar.gz
    
  • A serviceImage parameter, where tag is the tag name of the service image.
    serviceImage:
      tag: latest
    

Note:

See UIM Compatibility Matrix for software versions.

Deployment Toolkits

The following toolkits are required to deploy the services for an end-to-end integrated environment:

  • UIM Cloud Native toolkit: Includes the uim-cntk.tar.gz file that is required to deploy UIM in a cloud native environment. See Creating a Basic UIM Cloud Native Instance in UIM Cloud Native Deployment Guide for more information.
  • Common Cloud Native toolkit: Includes the common-cntk.tar.gz file that is required to deploy the OAM (optional), UTIA, and Message Bus services in the cloud native environment.

Common Cloud Native Toolkit

The Common cloud native toolkit (Common CNTK) includes:

  • Helm charts to manage the UTIA, Common Authentication, and Message Bus services.
  • Scripts to manage secrets for the services.
  • Scripts to manage schemas for the services.
  • Scripts to create, update, and delete the UTIA and Message Bus services.
  • Scripts to create and delete the Common Authentication service.
  • Sample PV and PVC YAML files to create persistent volumes.
  • Sample charts to install Traefik.
  • Scripts to register and unregister the namespaces with the Traefik and Strimzi operators.
  • The applications.yaml and database.yaml files that provide the required configuration for the services, which can be used for a production environment.
  • The applications-dev.yaml file that contains the required configuration for the services, which can be used for a development environment.
  • The strimzi-operator-override-values.yaml file that enables you to override the configuration for deploying the Strimzi operator, which is used by the Message Bus service.

The applications.yaml and database.yaml files have common values that apply to all services in Common CNTK, along with values that apply to specific services.

For customized configurations to override the default values, update the values under the specific application sections in $SPEC_PATH/<PROJECT>/<INSTANCE>/applications.yaml.

While running the scripts, provide the project and instance values, where project indicates the namespace of the Kubernetes environment where the service is deployed, and instance is the identifier of the corresponding service instance if multiple instances are created within the same namespace.

Note:

As multiple instances of Message Bus cannot exist in the same namespace, only one instance is created for all services within the same namespace.

When creating a basic instance of these services, this guide uses sr as the project name and quick as the instance name.
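For example, the Common CNTK scripts accept the project and instance through the -p and -i options, as in this command, which appears later in this chapter:

    $COMMON_CNTK/scripts/manage-app-credentials.sh -p sr -i quick -f $SPEC_PATH/sr/quick/applications.yaml create oauthConfig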

Note:

  • Project and Instance names must not contain any special characters.
  • There are common values specified in the applications.yaml and database.yaml files for the services. To override a common value, specify that value under the chart name of the specific service. If the value under the chart is empty, the common value is used, as shown in the following example.
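    For example, a hypothetical override in $SPEC_PATH/<PROJECT>/<INSTANCE>/applications.yaml, where the chart name and key are illustrative and not taken from the shipped samples:

    # Common value applied to all services
    imagePullPolicy: IfNotPresent

    # Override for one service chart; if this value is empty, the common value is used
    unified-topology-api:
      imagePullPolicy: Always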

Deploying the Services

You must deploy and configure all services in the following sequence:

  1. (Optional) Deploy Authentication Service (OAM along with OHS).

    Note:

    The Authentication service is needed for deployment only if you do not have an Identity Provider that supports the SAML 2.0 and OIDC or OAuth 2.0 protocols.
  2. Deploy Message Bus.
  3. Deploy UIM (traditional or cloud native).
  4. Configure Traditional UIM with Message Bus and UTIA, and restart UIM. See Setting System Properties in UIM System Administrator’s Guide, for more information.
  5. Configure OAM for UTIA client creation.
  6. Deploy UTIA.

Note:

Ensure that each individual service is deployed successfully and verified in the order listed above, as there are dependencies between these services. For a production instance, to achieve high availability, set up the Message Bus with at least 3 replicas for the Kafka cluster.

Setting Up Prometheus and Grafana

Message Bus has been tested with the Prometheus and Grafana servers installed and configured using Helm charts.
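A minimal installation sketch follows, using the community Helm charts; the chart repositories, release names, and the monitoring namespace are assumptions and are not part of Common CNTK:

#Add the community chart repositories (assumed; any equivalent source works)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

#Install Prometheus and Grafana into an illustrative monitoring namespace
helm install prometheus prometheus-community/prometheus -n monitoring --create-namespace
helm install grafana grafana/grafana -n monitoring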

Setting Up Elastic Stack

To set up Elastic Stack:

  1. Install Elasticsearch and Kibana using the following commands:
    #Install Elasticsearch and Kibana. Downloading the images from Docker Hub may take some time.
    kubectl apply -f $COMMON_CNTK/samples/charts/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml

    #Check that the services are running. If the deployment is in a namespace other
    #than default, append the namespace, for example: kubectl get services --all-namespaces
    kubectl get services

    #Sample output:
    # elasticsearch    ClusterIP   10.96.190.99    <none>   9200/TCP,9300/TCP   113d
    # kibana           NodePort    10.100.198.88   <none>   5601:31794/TCP      113d

    In this example, a Kibana NodePort service is exposed on port 31794. Access the Kibana dashboard at http://<IP address of VM>:<nodeport>/.
  2. Run the following commands to create a namespace, after verifying that it does not already exist:
    kubectl get namespaces
    export FLUENTD_NS=fluentd
    kubectl create namespace $FLUENTD_NS
    
  3. Update $COMMON_CNTK/samples/charts/fluentd/values.yaml with the Elasticsearch host and port.
    elasticSearch:
      host: "elasticSearchHost"
      port: "elasticSearchPort"
    

    For example:

    elasticSearch:
      host: "elasticsearch.default.svc.cluster.local"
      port: "9200"
    
  4. Modify the Fluentd image resources if required.
    image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
    resources:
      limits:
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    
  5. Run the following command to install fluentd-logging using the $COMMON_CNTK/samples/charts/fluentd/values.yaml file in the samples:
    helm install fluentd-logging $COMMON_CNTK/samples/charts/fluentd -n $FLUENTD_NS --values $COMMON_CNTK/samples/charts/fluentd/values.yaml \
    --set namespace=$FLUENTD_NS \
    --atomic --timeout 800s
    
  6. Run the following command to upgrade fluentd-logging:
    helm upgrade fluentd-logging $COMMON_CNTK/samples/charts/fluentd -n $FLUENTD_NS --values $COMMON_CNTK/samples/charts/fluentd/values.yaml \
       --set namespace=$FLUENTD_NS \
       --atomic --timeout 800s
    
  7. Run the following command to uninstall fluentd-logging:
    helm delete fluentd-logging -n $FLUENTD_NS
  8. Use the 'fluentd_logging-YYYY.MM.DD' index pattern (the default index configuration) in Kibana to check the logs.

Visualize logs in Kibana

To visualize logs in Kibana:

  1. Navigate to the Kibana dashboard (http://<IP address of VM>:<nodeport>/).
  2. Create the index pattern (fluentd_logging-YYYY.MM.DD).
  3. Click Discover.

Setting Up OpenSearch

The Common CNTK includes a sample that provides deployment instructions for OpenSearch on a Kubernetes cluster using Helm charts. For more information, see https://opensearch.org/docs/latest/install-and-configure/install-opensearch/helm/

Create a Kubernetes namespace in which to install OpenSearch and export it to an environment variable as follows:

Sample: export OPENSEARCH_NS=monitoring

Installing OpenSearch

Install OpenSearch as follows:

#Export the kubernetes namespace to be used for OpenSearch installation
export OPENSEARCH_NS=<kubernetes namespace>
export COMMON_CNTK=<path to common cntk>
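
#Add the OpenSearch Helm repository if it is not already present (repository location per the OpenSearch Helm documentation)
helm repo add opensearch https://opensearch-project.github.io/helm-charts/
helm repo update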
 
#Install OpenSearch
helm install os-engine opensearch/opensearch --values=$COMMON_CNTK/samples/charts/opensearch/os_engine_values.yaml --namespace=$OPENSEARCH_NS
 
#Install OpenSearch Dashboard
helm install os-board opensearch/opensearch-dashboards --values=$COMMON_CNTK/samples/charts/opensearch/os_board_values.yaml --namespace=$OPENSEARCH_NS
 
#Accessing Dashboard
export NODE_PORT=$(kubectl get --namespace $OPENSEARCH_NS -o jsonpath="{.spec.ports[0].nodePort}" services os-board-opensearch-dashboards)
export NODE_IP=$(kubectl get nodes --namespace $OPENSEARCH_NS -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT

Installing FluentD

Update the $COMMON_CNTK/samples/charts/fluentd/template/fluentd-config-map.yaml file with the OpenSearch details, such as type, host, port, scheme, user, password, and ssl_verify, and then install fluentd-logging as follows:

#Install fluentd-logging in the OpenSearch namespace
helm install fluentd-logging $COMMON_CNTK/samples/charts/fluentd --values $COMMON_CNTK/samples/charts/fluentd/values.yaml --set namespace=$OPENSEARCH_NS --atomic --timeout 800s

Accessing OpenSearch Dashboard

Access the OpenSearch dashboard by using the NodePort of the OpenSearch dashboard service in the namespace. Create an index pattern with fluentd_logging-*.

Uninstalling OpenSearch

Uninstall OpenSearch as follows:

helm uninstall os-board --namespace=$OPENSEARCH_NS
helm uninstall os-engine --namespace=$OPENSEARCH_NS
helm uninstall fluentd-logging --namespace=$OPENSEARCH_NS

Adding Common OAuth Secret and ConfigMap

To add the Common OAuth secret and ConfigMap:

  1. Run the following command to create or update the truststore by passing the Identity Provider SSL certificate:
    keytool -importcert -v -alias <param> -file <path to IDP cert file> -keystore <truststorename>.jks -storepass <password>

    A sample is as follows:

    keytool -importcert -v -alias idpcert -file identityprovidercert.pem -keystore truststore.jks -storepass ****

    Note:

    You must add the corresponding certificates for UIM and the Identity Provider. If the Identity Provider and UIM certificates are not the same, add both to the same truststore, as in the following example.
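    For example, to import both certificates into one truststore (the aliases and file names are illustrative):

    keytool -importcert -v -alias idpcert -file identityprovidercert.pem -keystore truststore.jks -storepass ****
    keytool -importcert -v -alias uimcert -file uimcert.pem -keystore truststore.jks -storepass ****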
  2. Run the following script to create the OAuth configuration as secrets and ConfigMap:
    $COMMON_CNTK/scripts/manage-app-credentials.sh -p sr -i quick -f $SPEC_PATH/sr/quick/applications.yaml create oauthConfig

    Enter the values as prompted:

    Provide Oauth credentials for 'sr-quick' ...
    Client Id: topologyClient #Provide Client ID
    Client Secret: xxxxx #Provide Client Secret
    Client Scope: <oauth-client-scope> (if scope is not configured for oidc-client, keep blank)
    Client Audience: <oauth-client-audience> (if audience is not configured for oidc-client, keep blank)
    Token Endpoint Uri: https://<instance>.<project>.ohs.<oam-host-suffix>:<port>/oauth2/rest/token #Provide OAuth token endpoint URI
    Valid Issue Uri: https://<instance>.<project>.ohs.<oam-host-suffix>:<port>/oauth2 #Provide OAuth valid issue URI
    Introspection Endpoint Uri: https://<instance>.<project>.ohs.<oam-host-suffix>:<port>/oauth2/rest/token/introspect #Provide OAuth introspection endpoint URI
    JWKS Endpoint Uri: https://<instance>.<project>.ohs.<oam-host-suffix>:<port>/oauth2/rest/security #Provide JWKS endpoint URI
    
    Provide Truststore details ...
    Certificate File Path (ex. oamcert.pem): ./commoncert.pem    #provide Certificate file path
    Truststore File Path (ex. truststore.jks): ./commontrust.jks   #provide Truststore file path
    Truststore Password: xxxx  #provide Truststore password

    A sample for IDCS is as follows:

    Provide Oauth credentials for 'sr-quick' ...
    Client Id: e6e0b2c6c3a845709bc51b561e0f008c 
    Client Secret: xxxx-xxxx-xxxx-xxxx
    Client Scope: https://quick.sr.topology.uim.org:30443/first_scope
    Client Audience: https://quick.sr.topology.uim.org:30443/
    Token Endpoint Uri: https://<IDCS URL>:443/oauth2/v1/token 
    Valid Issue Uri: https://identity.oraclecloud.com/
    Introspection Endpoint Uri: https://<IDCS URL>:443/oauth2/v1/introspect
    JWKS Endpoint Uri: https://<IDCS URL>:443/admin/v1/SigningCert/jwk
    Provide Truststore details ...
    Certificate File Path (ex. oamcert.pem): ./identity-pint-oc9qadev-com.pem  
    Truststore File Path (ex. truststore.jks): ./truststore.jks  
    Truststore Password: xxxxx  #provide Truststore password
  3. Verify the following:
    $ kubectl get secret -n sr
    sr-quick-oauth-credentials

    $ kubectl get cm -n sr
    sr-quick-oauth-config-cm

Note:

The oauthConfig secret is used by both the Message Bus and Unified Topology applications. If you create them in different namespaces or instances, create this secret in each namespace or instance, as in the following example.
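For example, if Message Bus and UTIA run under different projects (the second project name here is illustrative), run the script once for each:

$COMMON_CNTK/scripts/manage-app-credentials.sh -p sr -i quick -f $SPEC_PATH/sr/quick/applications.yaml create oauthConfig
$COMMON_CNTK/scripts/manage-app-credentials.sh -p sr2 -i quick -f $SPEC_PATH/sr2/quick/applications.yaml create oauthConfig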