8 Exploring Alternate Configuration Options
The UIM cloud native toolkit provides samples and documentation for setting up your UIM cloud native environment using standard configuration options. However, you can choose to explore alternate configuration options for setting up your environment, based on your requirements. This chapter describes alternate configurations you can explore, allowing you to decide how best to configure your UIM cloud native environment to suit your needs.
You can choose alternate configuration options for the following:
- Setting Up Authentication
- Working with Shapes
- Choosing Worker Nodes for Running UIM Cloud Native
- Working with Ingress, Ingress Controller, and External Load Balancer
- Using an Alternate Ingress Controller
- Reusing the Database State
- Setting Up Persistent Storage
- Managing Logs
- Managing UIM Cloud Native Metrics
The sections that follow provide instructions for working with these configuration options.
Setting Up Authentication
By default, UIM uses the WebLogic embedded LDAP as the authentication provider. The UIM cartridge deployment users and application administrative users are created in the embedded LDAP during instance creation. For human users, you can optionally set up an authentication provider for the users who access UIM through its user interfaces. See "Planning and Validating Your Cloud Environment" for information on the components that are required for setting up your cloud environment. The UIM cloud native toolkit provides samples that you use to integrate components such as OpenLDAP, WebLogic Kubernetes Operator (WKO), and Traefik. This section describes the tasks you must perform to configure optional authentication for UIM cloud native human users.
Perform the following tasks using the samples provided with the UIM cloud native toolkit:
- Install and configure OpenLDAP. This is required to be done once for your organization.
- Install OpenLDAP clients. This is required to be performed on each host that installs and runs the toolkit scripts and when a Kubernetes cluster is shared by multiple hosts.
- In the OpenLDAP server, create the root node for each UIM instance.
Installing and Configuring OpenLDAP
OpenLDAP enables your organization to handle authentication for all instances of UIM. You install and configure OpenLDAP once for your organization.
To install and configure OpenLDAP:
- Run the following command, which installs OpenLDAP:
$ sudo -s yum -y install "openldap" "migrationtools"
- Specify a password by running the following command:
$ sudo -s slappasswd
New password:
Re-enter new password:
- Configure OpenLDAP by running the following commands:
$ sudo -s
$ cd /etc/openldap/slapd.d/cn=config
$ vi olcDatabase\=\{2\}hdb.ldif
- Update the values for the following parameters:
Note:
Ignore the warning about editing the file manually.
olcSuffix: dc=uimcn-ldap,dc=com
olcRootDN: cn=Manager,dc=uimcn-ldap,dc=com
olcRootPW: <ssha>
where <ssha> is the SSHA hash generated by the slappasswd command.
- Update the dc values for the olcAccess parameter as follows:
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external, cn=auth" read by dn.base="cn=Manager,dc=uimcn-ldap,dc=com" read by * none
- Test the configuration by running the following command:
sudo -s slaptest -u
Ignore the checksum warnings in the output and ensure that you get a success message at the end.
- Run the following commands, which restart and enable LDAP:
sudo -s systemctl restart slapd
sudo -s systemctl enable slapd
sudo -s cp -rf /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
- Create a root node named domain, which will be the top parent for all UIM instances.
- Run the following command to create a new file named base.ldif:
sudo -s vi /root/base.ldif
- Add the following entries to the base.ldif file:
dn: ou=Domains,dc=uimcn-ldap,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Domains
- Run the following commands to load the base.ldif file into the LDAP server and verify the entries:
ldapadd -x -W -D "cn=Manager,dc=uimcn-ldap,dc=com" -f /root/base.ldif
ldapsearch -x cn=Manager -b dc=uimcn-ldap,dc=com
- Open the LDAP port 389 on all Kubernetes nodes in the cluster.
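For example, on nodes that use firewalld (an assumption; adjust to your firewall tooling), the port can be opened as follows:
sudo firewall-cmd --permanent --add-port=389/tcp
sudo firewall-cmd --reload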
Installing OpenLDAP Clients
In environments where the Kubernetes cluster is shared by multiple hosts, you must install the OpenLDAP clients on each host. You use the scripts in the toolkit to populate the LDAP server with users and groups.
sudo -s yum -y install openldap-clients
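As a quick connectivity check (a sketch; <ldap-host> is a placeholder for your OpenLDAP server), you can verify that the client host can reach the LDAP server:
ldapsearch -x -H ldap://<ldap-host>:389 -b "dc=uimcn-ldap,dc=com" -s base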
Creating the Root Node
You must create the root node for each UIM instance before additional UIM non-automation users and UIM groups can be created.
The toolkit provides a sample script ($UIM_CNTK/samples/credentials/managed-uim-ldap-credentials.sh) that you can use to create the root node in the LDAP tree for the UIM instance.
Run the $UIM_CNTK/samples/credentials/managed-uim-ldap-credentials.sh script by passing in -o account.
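For example, assuming the script accepts the same -p (project) and -i (instance) arguments as the other toolkit scripts, a typical invocation might look like the following sketch (not the definitive syntax):
$UIM_CNTK/samples/credentials/managed-uim-ldap-credentials.sh -p sr -i quick -o account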
Enabling SAML Based Authentication Provider
The UIM cloud native toolkit provides support for a SAML-based authentication provider. This section describes the tasks you must perform to configure an optional SAML-based authentication provider for a UIM cloud native deployment.
Prerequisite: The Inventory application must be registered with the authentication provider to generate the metadata file that is required during UIM image creation. If the authentication provider is Identity Cloud Service, see "Registering the Inventory Application in Identity Cloud Service" in the knowledge article Doc ID 2956673.1.
To enable SAML-based authentication provider, add the corresponding customizations to the uim-cn-base image and layered image as follows:
- Extract uim-app-archive.zip for customization references:
unzip workspace/uim-image-builder/staging/cnsdk/uim-model/uim-app-archive.zip -d workspace/customization
- Export the following variables as required:
mkdir workspace/temp
export WORKSPACEDIR=$(pwd)/workspace
export CUSTOMFOLDER=$(pwd)/workspace/customization
export TEMPDIR=$(pwd)/workspace/temp
export DMANIFEST=$(pwd)/workspace/uim-image-builder/bin/uim_cn_ci_manifest.yaml
export STAGING=$(pwd)/workspace/uim-image-builder/staging
- Update $CUSTOMFOLDER/custom/plans/inventory-clusterPlan.xml as follows:
- Change the logoutURL address:
<variable>
  <name>logoutURL</name>
  <value>https://<instance>.<project>.uim.org:<LB_PORT>/saml2/sp/slo/init</value>
</variable>
- Add a new variable assignment to inv.war under the weblogic-web-app root element, in order to remove cookie-path:
<module-override>
  <module-name>inv.war</module-name>
  <module-type>war</module-type>
  <module-descriptor external="false">
    <root-element>weblogic-web-app</root-element>
    <uri>WEB-INF/weblogic.xml</uri>
    <variable-assignment>
      <name>cookie-path</name>
      <xpath>/weblogic-web-app/session-descriptor/cookie-path</xpath>
      <operation>remove</operation>
    </variable-assignment>
  </module-descriptor>
</module-override>
- Add a new module in order to remove cookie-path from unified-topology-ui.war:
<module-override>
  <module-name>unified-topology-ui.war</module-name>
  <module-type>war</module-type>
  <module-descriptor external="false">
    <root-element>weblogic-web-app</root-element>
    <uri>WEB-INF/weblogic.xml</uri>
    <variable-assignment>
      <name>cookie-path</name>
      <xpath>/weblogic-web-app/session-descriptor/cookie-path</xpath>
      <operation>remove</operation>
    </variable-assignment>
  </module-descriptor>
</module-override>
- Add a new module in order to remove cookie-path from InventoryRSOpenAPI.war:
<module-override>
  <module-name>InventoryRSOpenAPI.war</module-name>
  <module-type>war</module-type>
  <module-descriptor external="false">
    <root-element>weblogic-web-app</root-element>
    <uri>WEB-INF/weblogic.xml</uri>
    <variable-assignment>
      <name>cookie-path</name>
      <xpath>/weblogic-web-app/session-descriptor/cookie-path</xpath>
      <operation>remove</operation>
    </variable-assignment>
  </module-descriptor>
</module-override>
- Update security files in $CUSTOMFOLDER/custom/security/saml2:
- Place the Identity Provider (IdP) metadata file in $CUSTOMFOLDER/custom/security/saml2/.
- Copy $CUSTOMFOLDER/custom/security/saml2/saml2idppartner.properties.sample to $CUSTOMFOLDER/custom/security/saml2/saml2idppartner.properties and update the description and metadata file details:
saml2.idp.partners=customidp
customidp.description=<IDP Partner>
customidp.metadata.file=<IDPMetadata.xml>
customidp.enabled=true
customidp.redirectUris=/Inventory/*
customidp.virtualUserEnabled=true
- Run the customization script and create the UIM images, using -c uim, as follows:
./workspace/uim-image-builder/bin/customization.sh
./workspace/uim-image-builder/bin/build-uim-images.sh -f $DMANIFEST -s $STAGING -c uim
- After you build the uim-cn-base image with the layered tag, update the project.yaml file as follows:
authentication:
  saml:
    enabled: true
    entityId: samlUIM # Use the same entity id when configuring the IdP provider
- Enable the SSL incoming configuration on the UIM cloud native instance. See "Configuring Secure Incoming Access with SSL" for more information.
- Create a UIM instance:
$UIM_CNTK/scripts/create-instance.sh -p sr -i quick -s $SPEC_PATH
Note:
To integrate UIM with ATA and Message Bus when authentication is enabled, update the $UIM_CNTK/charts/uim/config/custom-config.properties file with appropriate values. See the "Checklists for Integration of Services" section in Unified Inventory and Topology Deployment Guide for more information on integrating UIM with ATA and Message Bus.
Publishing UIM Cloud Native Service Provider Metadata File
If your identity provider supports SAML 2.0 client creation using the service provider metadata file, create a UIM metadata file in a cloud native environment as follows.
wlst.sh
connect('<weblogic-user-name>','<weblogic-password>','t3://sr-quick-ms1:8502')
serverRuntime()
cmo.getSingleSignOnServicesRuntime().publish('/logMount/UIMCNMetadata.xml', false)
disconnect()
exit()
kubectl cp sr-quick-admin:/logMount/UIMCNMetadata.xml ./UIMCNMetadata.xml -n sr
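If you prefer to run these WLST commands from inside the cluster, the following sketch opens a shell in the admin server pod (the pod name follows the project-instance-admin convention used above; the wlst.sh location depends on your UIM image):
kubectl exec -n sr -it sr-quick-admin -- /bin/bash
# inside the pod, run wlst.sh from the Oracle home (location depends on the image) and execute the commands above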
Enabling OAM Authentication
To enable Oracle Access Manager (OAM) authentication, add the corresponding customizations to the uim-cn-base image as follows:
- Export the additional variable ENABLEOAM:
export ENABLEOAM=true
- Extract uim-app-archive.zip for customization references:
unzip workspace/uim-image-builder/staging/cnsdk/uim-model/uim-app-archive.zip -d workspace/customization
- Export the following variables as required:
mkdir workspace/temp
export WORKSPACEDIR=$(pwd)/workspace
export CUSTOMFOLDER=$(pwd)/workspace/customization
export TEMPDIR=$(pwd)/workspace/temp
export DMANIFEST=$(pwd)/workspace/uim-image-builder/bin/uim_cn_ci_manifest.yaml
export STAGING=$(pwd)/workspace/uim-image-builder/staging
- Run the customization script and create the UIM image, using -c uim, as follows:
./workspace/uim-image-builder/bin/customization.sh
./workspace/uim-image-builder/bin/build-uim-images.sh -f $DMANIFEST -s $STAGING -c uim
- Build the uim-cn-base image by running the build-uim-images script.
- After you build the uim-cn-base image, update the project.yaml file as follows:
authentication:
  oam:
    enabled: true
    host: <oam-project>-<oam-instance>-oam-ohs # provide as <oam-project>-<oam-instance>-oam-ohs.<ohs-namespace>.svc.cluster.local if the OHS service is running in a different namespace.
    port: 7777
    frontendhost: <oam-instance>.<oam-project>.ohs.<oam-host-suffix> # For example: quick.sr.ohs.uim.org
- Update the instance.yaml file by setting the loadBalancerIP and loadBalancerPort values as follows:
# Mandatory, if OAM Authentication is enabled, set this value to the load balancer IP.
# If an external hardware/software load balancer is used, set this value to that frontend host IP.
# If OCI Load Balancer is used, then set externalLoadBalancerIP from OCI LBaaS.
# If Nginx/Traefik or any other generic ingress controller is used, then set to one of the worker nodes.
loadBalancerIP: ""
# For Generic and Traefik Ingress Controllers:
# If ssl is enabled, this would be the load balancer's ssl port.
# If ssl is disabled, this would be the load balancer's non-ssl port.
# For example, the ssl and non-ssl ports for an external load balancer would be 443 and 80 respectively.
# If a load balancer is not created, provide the nodePort of Nginx/Traefik or any other generic ingress controller.
loadBalancerPort: 80
Note:
Setting the loadBalancerIP and loadBalancerPort values is mandatory if OAM Authentication is enabled.
If an external hardware or software load balancer is used, set the loadBalancerIP value to the front-end host IP.
If OCI load balancer is used, set the loadBalancerIP to externalLoadBalancerIP value from OCI LBaaS.
If Nginx, Traefik, or any other generic ingress controller is used, set the loadBalancerIP to the IP address of one of the worker nodes.
- If SSL is enabled on the Identity Provider service, pass a truststore containing the Identity Provider certificates to the UIM instance as follows:
- Generate a truststore by importing the Identity Provider certificate:
keytool -importcert -v -alias <param> -file <path to IDP cert> -keystore idptrust.jks -storepass <password>
- Create a Kubernetes secret from the generated truststore. The secret name should match the truststore name.
kubectl create secret generic <trustsecretname> -n <project> --from-file=<trustsecretname>.jks=idptrust.jks --from-literal=passphrase=<password>
- Update instance.yaml with the truststore:
# SSL Configuration
ssl:
  # Trust keystore
  trust:
    # provide trust name and identity to configure external SSL for SAF
    name: <trustsecretname> # Secret name that contains the truststore file.
  # Identity keystore
  identity:
    useDemoIdentity: true # set to false and specify the parameters below to use custom identity
- Restart the UIM instance.
The URLs to access UIM UI and WebLogic UI when OAM authentication is enabled are as follows:
- The URL for access to the UIM UI:
https://<oam-instance>.<oam-project>.ohs.<hostSuffix>:<port>/Inventory
For example: https://sr.quick.ohs.uim.org:30444/Inventory
- The URL for access to the WebLogic UI:
https://admin.<uim-instance>.<uim-project>.uim.org:30444/console
For example: https://admin.quick.sr.uim.org:30444/console
Working with Shapes
The UIM cloud native toolkit provides the following pre-configured shapes:
- charts/uim/shapes/dev.yaml. This can be used for development, QA and user acceptance testing (UAT) instances.
- charts/uim/shapes/devsmall.yaml. This can be used to reduce CPU requirements for small development instances.
- charts/uim/shapes/prod.yaml. This can be used for production, pre-production, and disaster recovery (DR) instances.
- charts/uim/shapes/prodlarge.yaml. This can be used for production, pre-production and disaster recovery (DR) instances that require more memory for UIM cartridges and order caches.
- charts/uim/shapes/prodsmall.yaml. This can be used to reduce CPU requirements for production, pre-production and disaster recovery (DR) instances. For example, it can be used to deploy a small production cluster with two managed servers when the input request rate does not justify two managed servers configured with a prod or prodlarge shape. For production instances, Oracle recommends two or more managed servers. This provides increased resiliency to a single point of failure and can allow order processing to continue while failed managed servers are being recovered.
You can create custom shapes using the pre-configured shapes. See "Creating Custom Shapes" for details.
The pre-defined shapes come in standard sizes, which enable you to plan your Kubernetes cluster resource requirement.
Table 8-1 Sizing Requirements of Shapes for a Managed Server
Shape | Kube Request | Kube Limit | JVM Heap (GB) |
---|---|---|---|
prodlarge | 80 GB RAM, 15 CPU | 80 GB RAM, 15 CPU | 64 |
prod | 48 GB RAM, 15 CPU | 48 GB RAM, 15 CPU | 31 |
prodsmall | 48 GB RAM, 7.5 CPU | 48 GB RAM, 7.5 CPU | 31 |
dev | 8 GB RAM, 2 CPU | 8 GB RAM | 5 |
devsmall | 8 GB RAM, 0.5 CPU | 8 GB RAM | 5 |
The following table lists the sizing requirements of the shapes for an admin server:
Table 8-2 Sizing Requirements of Shapes for an Admin Server
Shape | Kube Request | Kube Limit | JVM Heap (GB) |
---|---|---|---|
prodlarge | 8 GB RAM, 2 CPU | 8 GB RAM | 4 |
prod | 8 GB RAM, 2 CPU | 8 GB RAM | 4 |
prodsmall | 8 GB RAM, 2 CPU | 8 GB RAM | 4 |
dev | 3 GB RAM, 1 CPU | 3 GB RAM | 1 |
devsmall | 3 GB RAM, 0.5 CPU | 4 GB RAM | 1 |
These values are encoded in the specifications and are automatically part of the individual pod configuration. The Kubernetes scheduler evaluates the Kube request settings to find space for each pod in the worker nodes of the Kubernetes cluster. To estimate the capacity your cluster needs, consider the following parameters:
- Number of development instances required to be running in parallel: D
- Number of managed servers expected across all the development instances: Md (Md will be equal to D if all the development instances are 1 MS instances)
- Number of production (and production-like) instances required to be running in parallel: P
- Number of managed servers expected across all production instances: Mp
- Assume use of "dev" and "prod" shapes
- CPU requirement (CPUs) = D * 1 + Md * 2 + P * 2 + Mp * 15
- Memory requirement (GB) = D * 4 + Md * 8 + P * 8 + Mp * 48
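As a worked example of these formulas: with five development instances of one managed server each (D = 5, Md = 5) and one production instance with two managed servers (P = 1, Mp = 2), the cluster needs 47 CPUs and 164 GB of memory. The following sketch computes the same figures:
D=5; Md=5; P=1; Mp=2
echo "CPU requirement: $(( D*1 + Md*2 + P*2 + Mp*15 )) CPUs"   # 5 + 10 + 2 + 30 = 47
echo "Memory requirement: $(( D*4 + Md*8 + P*8 + Mp*48 )) GB"  # 20 + 40 + 8 + 96 = 164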
Note:
The production managed servers take their memory and CPU in large chunks. The Kubernetes scheduler requires the capacity of each pod to be satisfied within a particular worker node and does not schedule the pod if that capacity is fragmented across the worker nodes.
The shapes are pre-tuned for generic development and production environments. You can create a UIM instance with either of these shapes, by specifying the preferred one in the instance specification.
# Name of the shape. The UIM cloud native shapes are devsmall, dev, prodsmall, prod, and prodlarge.
# Alternatively, custom shape name can be specified (as the filename without the extension)
Creating Custom Shapes
You create custom shapes by copying the provided shapes and then specifying the desired tuning parameters. Do not edit the values in the shapes provided with the toolkit. The tuning parameters that you can customize include:
- The number of threads allocated to UIM work managers
- UIM connection pool parameters
To create a custom shape:
- Copy one of the pre-configured shapes and save it to your source repository.
- Rename the shape and update the tuning parameters as required.
- In the instance specification, specify the name of the shape you copied and renamed:
shape: custom
- Create the domain, ensuring that the location of your custom shape is included in the comma-separated list of directories passed with -s:
$UIM_CNTK/scripts/create-instance.sh -p project -i instance -s $SPEC_PATH
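For example, to derive a custom shape from the prod shape (the file name prodcustom.yaml and its location are illustrative):
cp $UIM_CNTK/charts/uim/shapes/prod.yaml $SPEC_PATH/prodcustom.yaml
# edit the tuning parameters in prodcustom.yaml, set "shape: prodcustom" in the instance specification, then:
$UIM_CNTK/scripts/create-instance.sh -p project -i instance -s $SPEC_PATH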
Note:
While copying a pre-configured shape or editing your custom shape, ensure that you preserve any configuration that has comments indicating that it must not be deleted.
Choosing Worker Nodes for Running UIM Cloud Native
By default, UIM cloud native has its pods scheduled on all worker nodes in the Kubernetes cluster in which it is installed. However, in some situations, you may want to choose a subset of nodes where pods are scheduled. Such situations include:
- Limiting the deployment of UIM to specific worker nodes per team, for reasons such as capacity management, chargeback, budgetary constraints, and so on.
# If UIM CN instances must be targeted to a subset of worker nodes in the
# Kubernetes cluster, tag those nodes with a label name and value, and choose
# that label+value here.
# keys:
# - key : any node label key
# - operator : Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
# - values : values is an array of string values.
# If the operator is In or NotIn, the values array must be non-empty.
# If the operator is Exists or DoesNotExist, the values array must be empty (values can be removed from below).
# If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer.
#
# This can be overridden in the instance specification if required.
# Node Affinity can be achieved by operator "In" and Node Anti-Affinity by "NotIn"
# oracle.com/licensed-for-coherence is just an indicative example; any
# label and its values can be used for choosing nodes.
uimcnTargetNodes: {} # This empty declaration should be removed if adding items here.
#uimcnTargetNodes:
# nodeLabel:
# keys:
# - key: oracle.com/licensed-for-coherence
# operator: In
# values:
# - true
# - key: failure-domain.beta.kubernetes.io/zone
# operator: NotIn
# values:
# - PHX-AD-2
# - PHX-AD-3
- There is no restriction on node label key. Any valid node label can be used.
- There can be multiple valid values for a key.
- You can override this configuration in the instance specification yaml file, if required.
Examples
In the following example, pods are created on the nodes that have keys as failure-domain.beta.kubernetes.io/zone and the values as PHX-AD-2 or PHX-AD-3:
# Example1
#uimcnTargetNodes: {}
uimcnTargetNodes:
nodeLabel:
keys:
- key: failure-domain.beta.kubernetes.io/zone
operator: In
values:
- PHX-AD-2
- PHX-AD-3
In the following example, pods are created on the nodes that do not have the key name and whose key failure-domain.beta.kubernetes.io/zone has a value that is neither PHX-AD-2 nor PHX-AD-3:
# Example2
# uimcnTargetNodes: {} # This empty declaration should be removed if adding items here.
uimcnTargetNodes:
nodeLabel:
keys:
- key: name
operator: DoesNotExist
- key: failure-domain.beta.kubernetes.io/zone
operator: NotIn
values:
- PHX-AD-2
- PHX-AD-3
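The labels referenced in these examples must already exist on the worker nodes. For instance (a sketch; the node name is a placeholder), labels can be applied and inspected with kubectl:
kubectl label node <worker-node-1> oracle.com/licensed-for-coherence=true
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone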
Working with Ingress, Ingress Controller, and External Load Balancer
A Kubernetes ingress is responsible for establishing access to back-end services. However, creating an ingress is not sufficient. An Ingress controller connects the back-end services with the front-end services that are external to Kubernetes through edge objects such as NodePort services, Load Balancers, and so on. In UIM cloud native, an ingress controller can be configured in the project specification.
UIM cloud native supports annotation-based generic ingress creation using the standard Kubernetes Ingress API, as verified by the Kubernetes Conformance tests. This can be used with any Kubernetes-certified ingress controller, provided that the controller offers the (usually proprietary) annotations that UIM requires. Annotations applied to an ingress resource allow you to use features such as connection timeout, URL rewrite, retry, additional headers, redirects, and sticky cookie services, and to improve the performance of that ingress resource. The ingress controllers support a corresponding set of annotations. For information on the annotations supported by your ingress controller and for a list of ingress controllers, see https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/.
Any ingress controller that conforms to the standard Kubernetes ingress API and supports the annotations needed by UIM should work, although Oracle does not certify individual ingress controllers to confirm this generic compatibility.
For information about the Ingress NGINX Controller, see https://github.com/kubernetes/ingress-nginx/blob/main/README.md#readme.
The configurations required in your project specification are as follows:
# valid values are TRAEFIK, GENERIC, OTHER
ingressController: "GENERIC"
You need to provide the following annotations to enable sticky session cookies and to meet the request payload size requirements. Provide the ingressClassName value for your ingress controller under the ingress.className field. Based on the value provided, an ingress object is created for that ingress class as follows:
ingress:
  className: nginx ## provide the ingressClassName value; the default value for the nginx ingress controller is nginx.
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "nginxingresscookie"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
Annotations for Enabling SSL Terminate Strategy
To enable the SSL terminate strategy, provide the load balancer SSL port and the following ingress annotations:
loadBalancerPort: 30543 #Provide LoadBalancer SSL port
ssl:
incoming: true
ingress:
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
more_clear_input_headers "WL-Proxy-Client-IP" "WL-Proxy-SSL";
more_set_input_headers "X-Forwarded-Proto: https";
more_set_input_headers "WL-Proxy-SSL: true";
nginx.ingress.kubernetes.io/ingress.allow-http: "false"
For more information on trust and identity provided in the above configuration, see "Setting Up Secure Communication with SSL".
Using Traefik Ingress Controller
Oracle recommends leveraging the standard Kubernetes ingress API with any ingress controller that supports annotations for the configurations described in this document. To use the Traefik ingress controller instead, set the following in your project specification:
# valid values are TRAEFIK, GENERIC, OTHER
ingressController: "TRAEFIK"
Using an Alternate Ingress Controller
By default, UIM cloud native supports the standard Kubernetes ingress API and provides sample files for integration. If your required ingress controller does not support one or more configurations through annotations on the generic ingress, or if you use your ingress controller's CRD instead, you can choose "OTHER".
By choosing this option, UIM cloud native does not create or manage any ingress required for accessing the UIM cloud native services. However, you may choose to create your own ingress objects based on the service and port details mentioned in the tables that follow. The toolkit uses an ingress Helm chart ($UIM_CNTK/samples/charts/ingress-per-domain/templates/generic-ingress.yaml) and scripts for creating the ingress objects. If you want to use a generic ingress controller, these samples can be used as a reference and customized as necessary.
- domainUID: Combination of project-instance. For example, sr-quick.
- clusterName: The name of the cluster in lowercase. Replace any hyphens "-" with underscore "_". The default name of the cluster in values.yaml is uimcluster.
The following table lists the service name and service ports for Ingress rules:
Table 8-3 Service Name and Service Ports for Ingress Rules
Rule | Service Name | Service Port | Purpose |
---|---|---|---|
instance.project.loadBalancerDomainName | domainUID-cluster-clusterName | 8502 | For access to UIM through UI, Web Services, and so on. |
t3.instance.project.loadBalancerDomainName | domainUID-cluster-clusterName | 30303 | UIM T3 Channel access for WLST, JMS, and SAF clients. |
admin.instance.project.loadBalancerDomainName | domainUID-admin | 8501, or 8504 if the SSL reencrypt strategy is enabled | For access to the UIM WebLogic Admin Console UI. |
Ingresses need to be created for each of the above rules per the following guidelines:
- Before running create-instance.sh, ingress must be created.
- After running delete-instance.sh, ingress must be deleted.
You can develop your own code to handle your ingress controller or copy the sample ingress-per-domain chart and add additional template files for your ingress controller with a new value for the type (for example, NGINX).
- The reference sample for creation is: $UIM_CNTK/scripts/create-ingress.sh
- The reference sample for deletion is: $UIM_CNTK/scripts/delete-ingress.sh
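As an illustration only (not the toolkit's generated ingress), the following sketch shows the shape of a minimal ingress object for the first rule in Table 8-3, using the sr-quick example and the default load balancer domain name uim.org; your controller may require additional annotations:
cat <<'EOF' | kubectl apply -n sr -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sr-quick-uim-ui
spec:
  ingressClassName: nginx
  rules:
  - host: quick.sr.uim.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sr-quick-cluster-uimcluster
            port:
              number: 8502
EOF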
Note:
Regardless of the choice of ingress controller, it is mandatory to provide the value of loadBalancerPort in one of the specification files. This is used for establishing the front-end cluster.
Reusing the Database State
When a UIM instance is deleted, the state of the database remains unaffected, which makes it available for re-use. This is common in the following scenarios:
- When an instance is deleted and the same instance is re-created using the same project and the instance names, the database state is unaffected. For example, consider a performance instance that does not need to be up and running all the time, consuming resources. When it is no longer actively being used, its specification files and PDB can be saved and the instance can be deleted. When it is needed again, the instance can be rebuilt using the saved specifications and the saved PDB. Another common scenario is when developers delete and re-create the same instance multiple times while configuration is being developed and tested.
- When a new instance is created to point to the data of another instance with a new project and instance names, the database state is unaffected. A developer, who might want to create a development instance with the data from a test instance in order to investigate a reported issue, is likely to use their own instance specification and the UIM data from PDB of the test instance.
The re-usable database state consists of:
- The UIM DB (schema and data)
- The RCU DB (schema and data)
Recreating an Instance
You can re-create a UIM instance with the same project and instance names, pointing to the same database. In this case, both the UIM DB and the RCU DB are re-used, making the sequence of events for instance re-creation relatively straightforward.
To re-create the instance, retain the following from the original instance:
- PDB
- The project and instance specification files
Reusing the UIM Schema
To reuse the UIM DB, the secret for the PDB must still exist: project-instance-database-credentials. This is the uimdb credential in the manage-instance-credentials.sh script.
Reusing the RCU
To reuse the RCU DB, the following secrets must still exist:
- project-instance-rcudb-credentials. This is the rcudb credential.
- project-instance-opss-wallet-password-secret. This is the opssWP credential.
- project-instance-opss-walletfile-secret. This is the opssWF credential.
To import the wallet file from previous installation of UIM:
- Run the WLST exportEncryptionKey command in the previous domain where the RCU is referenced. This generates the ewallet.p12 file.
- Export the wallet file for generating the OPSS wallet on the existing schema and use it to create the UIM cloud native instance as follows:
- Connect to the previous UIM WebLogic domain that refers to the RCU schemas as follows:
$ cd <FWM_HOME>/oracle_common/common/bin
$ ./wlst.sh
wls:/offline> exportEncryptionKey(jpsConfigFile, keyFilePath, keyFilePassword)
Export of Encryption key(s) is done. Remember the password chosen, it will be required while importing the key(s)
Where:
- keyFilePassword is the same as the OPSS wallet file password that is used during the secrets creation.
- jpsConfigFile specifies the location of jps-config.xml file that corresponds to the location where the command is processed.
- keyFilePath specifies the directory where ewallet.p12 file is created. The content of this file is encrypted and secured by the value passed to keyFilePassword.
- keyFilePassword specifies the password to secure ewallet.p12 file. This password must be used while importing the file.
- Download the generated ewallet.p12 file from the TEMP_LOCATION folder and copy it to the Kubernetes worker node in $SPEC_PATH.
- Convert the generated ewallet file to Base64-encoded format as follows:
$ cd $SPEC_PATH
$ base64 ewallet.p12 > ewalletbase64.p12
- Create the opssWF secret using the Base64 ewallet as follows:
$UIM_CNTK/scripts/manage-instance-credentials.sh -p project -i instance create opssWF
When prompted, enter the Base64 ewallet file location: $SPEC_PATH/ewalletbase64.p12.
Note:
For opssWP and wlsRTE, use the same password that you used while exporting the wallet file.
- Create the instance as you would normally do:
$UIM_CNTK/scripts/create-instance.sh -p project -i instance -s $SPEC_PATH
Note:
If the opssWP and opssWF secrets no longer exist and cannot be re-created from offline data, then drop the RCU schema and re-create it using the UIM DB Installer.
Creating a New Instance
If the original instance does not need to be retained, then the original PDB can be re-used directly by a new instance. If however, the instance needs to be retained, then you must create a clone of the PDB of the original instance. This section describes using a newly cloned PDB for the new instance.
If possible, ensure that the images specified in the project specification (project.yaml) match the images in the specification files of the original instance.
Reusing the UIM Schema
Ensure that the secret project-instance-database-credentials exists. This is the uimdb credential in manage-instance-credentials.sh and must point to your cloned PDB.
If your new instance must reference a newer UIM DB installer image in its specification files than the original instance, it is recommended to invoke an in-place upgrade of UIM schema before creating the new instance.
# Upgrade the UIM schema to match new instance's specification files
# Do nothing if schema already matches
$UIM_CNTK/scripts/install-uimdb.sh -p project -i instance -s $SPEC_PATH -c 3
Note:
If the current instance details are different from those of the previous instance, then to reuse the UIM schema, drop the tables prefixed with WL_LLR_.
For the RCU DB, you can either:
- Create a new RCU
- Reuse the RCU
Creating a New RCU
If you only wish to retain the UIM schema data, then you can create a new RCU schema.
The following steps provide a consolidated view of RCU creation described in "Managing Configuration as Code".
Using manage-instance-credentials.sh with your new project and instance names, create the following secrets:
- project-instance-rcudb-credentials. This is the rcudb credential and describes the new RCU schema you want in the clone.
- project-instance-opss-wallet-password-secret. This is the opssWP credential unique to your new instance.
# Create a fresh RCU DB schema while preserving UIM schema data
$UIM_CNTK/scripts/install-uimdb.sh -p project -i instance -s $SPEC_PATH -c 2
With this approach, the RCU schema from the original instance is still available in the cloned PDB, but is not used by the new instance.
Reusing the RCU
Using the manage-instance-credentials.sh script, create the following secret using your new project and instance names:
project-instance-rcudb-credentials
The secret should describe the old RCU schema, but with new PDB details.
- Reusing RCU Schema Prefix
Over time, if PDBs are cloned multiple times, it may be desirable to avoid the proliferation of defunct RCU schemas by re-using the schema prefix and re-initializing the data. There is no UIM metadata stored in the RCU DB, so the data can be safely re-initialized.
Create the secret project-instance-opss-wallet-password-secret. This is the opssWP credential unique to your new instance.
To re-install the RCU, invoke the DB Installer:
$UIM_CNTK/scripts/install-uimdb.sh -p project -i instance -s $SPEC_PATH -c 2
- Reusing RCU Schema and Data
In order to reuse the full RCU DB from another instance, the original opssWF and opssWP secrets must be copied to the new environment and renamed following the convention: project-instance-opss-wallet-password-secret and project-instance-opss-walletfile-secret. This directs Fusion Middleware OPSS to access the data using the secrets.
Create the instance as usual:
$UIM_CNTK/scripts/create-instance.sh -p project -i instance -s $SPEC_PATH
Setting Up Persistent Storage
UIM cloud native can be configured to use a Kubernetes Persistent Volume to store data that needs to be retained even after a pod is terminated. This data includes application logs, JFR recordings and DB Installer logs, but does not include any sort of UIM state data. When an instance is re-created, the same persistent volume need not be available. When persistent storage is enabled in the instance specification, these data files, which are written inside a pod are re-directed to the persistent volume.
Data from all instances in a project may be persisted, but each instance does not need a unique location for logging. Data is written to a project-instance folder, so multiple instances can share the same end location without destroying data from other instances.
The final location for this data should be one that is directly visible to the users of UIM cloud native. Development instances may simply direct data to a shared file system for analysis and debugging by cartridge developers, whereas formal test and production instances may need the data to be scraped by a logging toolchain such as EFK, which can then process the data and make it available in various forms. The recommendation, therefore, is to create a PV-PVC pair for each class of destination within a project: in this example, one for developers to access and one that feeds into a toolchain.
A PV-PVC pair would be created for each of these "destinations", which multiple instances can then share. A single PVC can be used by multiple UIM domains. The management of the PV and PVC lifecycles is beyond the scope of UIM cloud native.
The UIM cloud native infrastructure administrator is responsible for creating and deleting PVs or for setting up dynamic volume provisioning.
The UIM cloud native project administrator is responsible for creating and deleting PVCs as per the standard documentation in a manner such that they consume the pre-created PVs or trigger the dynamic volume provisioning. The specific technology supporting the PV is also beyond the scope of UIM cloud native. However, samples for PV supported by NFS are provided.
Creating a PV-PVC Pair
The technology supporting the Kubernetes PV-PVC is not dictated by UIM cloud native. Samples have been provided for NFS and BV, and can either be used as is, or as a reference for other implementations.
To create a PV-PVC pair supported by NFS:
- Edit the sample PV and PVC yaml files and update the entries with enclosing brackets.
Note:
PVCs need to be ReadWriteMany.
vi $UIM_CNTK/samples/nfs/pv.yaml
vi $UIM_CNTK/samples/nfs/pvc.yaml
- Create the Kubernetes PV and PVC:
kubectl create -f $UIM_CNTK/samples/nfs/pv.yaml
kubectl create -f $UIM_CNTK/samples/nfs/pvc.yaml
- Set up the storage volume. The storage volume type is emptydir by default:
storageVolume:
  type: emptydir # Acceptable values are pvc and emptydir
  volumeName: storage-volume
  # pvc: storage-pvc # Specify this only if the type is pvc
  isBlockVolume: false # set this to true if BlockVolume is used
Deleting a pod that has the storage volume type emptydir deletes the corresponding logs. To retain the logs:
- Set storageVolume.type to pvc.
- Uncomment storageVolume.pvc.
- Specify the name of the PVC created.
# The storage volume must specify the PVC to be used for persistent storage.
storageVolume:
  type: pvc # Acceptable values are pvc and emptydir
  volumeName: storage-volume
  pvc: storage-pvc # Specify this only if the type is pvc
  isBlockVolume: false # set this to true if BlockVolume is used
After the instance is created, the project-instance directory on the persistent volume contains sub-directories similar to the following:
[oracle@localhost project-instance]$ dir
server, UIM, uim-dbinstaller
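Before creating instances, you can also confirm that the PV and PVC are bound (a quick check; substitute your project namespace):
kubectl get pv
kubectl get pvc -n <project>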
To create a PV-PVC pair supported by BV:
- Edit the sample PV and PVC yaml files and update the entries with enclosing brackets:
vi $UIM_CNTK/samples/bv/pv.yaml
vi $UIM_CNTK/samples/bv/pvc.yaml
- Create the Kubernetes PV and PVC as follows:
kubectl create -f $UIM_CNTK/samples/bv/pv.yaml
kubectl create -f $UIM_CNTK/samples/bv/pvc.yaml
- Repeat steps 1 and 2 to create the PV-PVCs required for all the servers, such as introspector, admin, db-installer, and each managed server.
Note:
Do not provide <server-name> in the prefix for the db-installer PV-PVC.
- Set up the storage volume. By default, the type is set to emptydir:
storageVolume:
  type: emptydir # Acceptable values are pvc and emptydir
  volumeName: storage-volume
  # pvc: storage-pvc # Specify this only if the type is pvc
  isBlockVolume: false # set this to true if BlockVolume is used
- To use Block Volume:
- Set storageVolume.type to pvc.
- Uncomment the line #pvc: storage-pvc and replace storage-pvc with the appropriate suffix of all the PVCs, which is the same as the name of the db-installer PVC.
- Set storageVolume.isBlockVolume to true.
storageVolume:
  type: pvc # Acceptable values are pvc and emptydir
  volumeName: dev-nfs-pv
  pvc: <project>-<storage-endpoint>-bv-pvc # this is equal to the suffix of the PVCs and equal to the PVC used by db-installer
  isBlockVolume: true # set this to true if BlockVolume is used
- Change the permissions of the Block Volume using an initContainer. By default, initContainerImage is commented out. Uncomment it and specify the image that you want to use:
# uncomment this to use initContainer for introspector, admin, db-installer, and ms pods and change the permission of the mount volume
initContainerImage: "container-registry.oracle.com/os/oraclelinux:8-slim"
Managing Logs
UIM cloud native generates traditional textual logs. By default, these log files are generated in the managed server pod, but can be re-directed to a Persistent Volume Claim (PVC) supported by the underlying technology that you choose. See "Setting Up Persistent Storage" for details.
The storage volume type is emptydir by default:
storageVolume:
  type: emptydir # Acceptable values are pvc and emptydir
  volumeName: storage-volume
  # pvc: storage-pvc # Specify this only if the type is pvc
Deleting a pod that has the storage volume type emptydir deletes the corresponding logs. To retain the logs:
- Set storageVolume.type to pvc.
- Uncomment storageVolume.pvc.
- Specify the name of the PVC created.
# The storage volume must specify the PVC to be used for persistent storage.
storageVolume:
  type: pvc # Acceptable values are pvc and emptydir
  volumeName: storage-volume
  pvc: storage-pvc # Specify this only if the type is pvc
- The UIM application logs can be found at: pv-directory/project-instance/UIM/logs
- The UIM WebLogic server logs can be found at: pv-directory/project-instance/server
- The UIM DB Installer logs can be found at: pv-directory/project-instance/uim-dbinstaller/logs
Viewing Logs using Fluentd and OpenSearch Dashboard
You can view and analyze the UIM cloud native logs using Fluentd and OpenSearch dashboard.
The logs are generated as follows:
-
Fluentd collects the text logs that are generated during cloud native deployments and sends them to OpenSearch.
-
OpenSearch collects all types of logs and converts them into a common format so that OpenSearch dashboard can read and display the data.
-
OpenSearch dashboard reads the data and presents it in a simplified view.
Setting up OpenSearch Dashboard and Fluentd
To set up OpenSearch Dashboard and Fluentd:
- Set up OpenSearch and OpenSearch dashboard. See "Setting Up OpenSearch" in Unified Inventory and Topology Deployment Guide for more information.
- Update the following in instance.yaml to enable the sidecar injection:
sidecar:
  enabled: true
  containers:
    - template: "fluentd-container"
      volumeTemplate: "fluentd-configmap-volume"
      containerFiles:
        - fluentd-config-map.yaml
        - _fluentd-sidecar-container.tpl
- Update the values for FLUENT_OPENSEARCH_HOST, FLUENT_OPENSEARCH_PORT, OPENSEARCH_USER, and OPENSEARCH_PASSWORD in $UIM_CNTK/samples/customExtensions/sidecar-fluentd/_fluentd-sidecar-container.tpl.
- (Optional) Update the FluentD ConfigMap file in the customExtensions folder to add customizations for selecting or adding any required logs.
- Create an instance with the Fluentd sidecar injected into the Kubernetes pods as follows:
$UIM_CNTK/scripts/create-instance.sh -p <project_name> -i <instance_name> -s <Path_to_specification_files> -m $UIM_CNTK/samples/customExtensions/sidecar-fluentd
To access the logs on OpenSearch dashboard, create an Index Pattern as follows:
- Click the three bars (menu) icon.
- Under OpenSearch Dashboards, select Discover.
- Create a new Index Pattern using the index <project>-<instance>.
The logs can be accessed on the Discover page under the <project>-<instance> index.
Enabling GC Logs
You can monitor the Java garbage collection data by using GC logs. By default, these GC logs are disabled and you can enable them to view the logs at /logMount/<domain>/servers/<server-name>.
To enable the GC logs, update project.yaml in $SPEC_PATH as follows:
- Under gcLogs, set enabled to true.
- To configure the maximum size of each file and the limit on the number of files, set fileSize and noOfFiles inside gcLogs:
gcLogs:
  enabled: true
  fileSize: 10M
  noOfFiles: 10
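Once enabled, the GC log files can be listed inside a server pod, for example (a sketch; the pod name and path placeholders follow the conventions above):
kubectl exec -n <project> <project>-<instance>-ms1 -- ls /logMount/<domain>/servers/ms1/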
Managing UIM Cloud Native Metrics
All managed server pods running UIM cloud native carry annotations added by WebLogic Operator and the following additional annotations added by UIM cloud native:
uimcn.metricspath: /Inventory/metrics
uimcn.metricsport: 8502
Configuring Prometheus for UIM Cloud Native Metrics
Add the following job configuration to the Prometheus configuration, replacing the username and password with the credentials for the UIM metrics endpoint:
- job_name: 'uimcn'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: ['__meta_kubernetes_pod_annotationpresent_uimcn_metricspath']
action: 'keep'
regex: 'true'
- source_labels: [__meta_kubernetes_pod_annotation_uimcn_metricspath]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: ['__meta_kubernetes_pod_annotation_prometheus_io_scrape']
action: 'drop'
regex: 'false'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_uimcn_metricsport]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
#- action: labelmap
# regex: __meta_kubernetes_pod_label_(.+)
- source_labels: ['__meta_kubernetes_pod_label_weblogic_serverName']
action: replace
target_label: server_name
- source_labels: ['__meta_kubernetes_pod_label_weblogic_clusterName']
action: replace
target_label: cluster_name
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
basic_auth:
username: <METRICS_USER_NAME>
password: <PASSWORD>
Note:
UIM cloud native has been tested with Prometheus and Grafana installed and configured using the Helm chart prometheus-community/kube-prometheus-stack available at: https://prometheus-community.github.io/helm-charts.
Viewing UIM Cloud Native Metrics Without Using Prometheus
You can view the metrics of a managed server directly at the following URL:
http://instance.project.domain_Name:LoadBalancer_Port/Inventory/metrics
By default, domain_Name is set to uim.org and can be modified in project.yaml. This only provides metrics of the managed server that is serving the request. It does not provide consolidated metrics for the entire cluster. Only Prometheus Query and Grafana dashboards can provide consolidated metrics.
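For example, the endpoint of a single managed server can be queried directly with curl (a sketch; the host, port, and metrics user credentials are placeholders):
curl -u <METRICS_USER_NAME>:<PASSWORD> "http://quick.sr.uim.org:<LoadBalancer_Port>/Inventory/metrics"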
Viewing UIM Cloud Native Metrics in Grafana
UIM cloud native metrics scraped by Prometheus can be made available for further processing and visualization. The UIM cloud native toolkit comes with sample Grafana dashboards to get you started with visualizations.
Import the dashboard JSON files from $UIM_CNTK/samples/grafana into your Grafana environment.
- UIM by Services: Provides a view of UIM cloud native metrics for one or more instances in the selected managed server.
Exposed UIM Service Metrics
The following UIM metrics are exposed via Prometheus APIs.
Note:
- All metrics are per managed server. Prometheus Query Language can be used to combine or aggregate metrics across all managed servers.
- All metric values are short-lived and indicate the number of requests in a particular state since the managed server was last restarted.
- When a managed server restarts, all the metrics are reset to 0.
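For example, the cluster-wide aggregation mentioned above can be obtained with a Prometheus query such as the following (a sketch; the Prometheus address is a placeholder and the _count series assumes the Summary metrics listed below):
curl -sG 'http://<prometheus-host>:9090/api/v1/query' --data-urlencode 'query=sum by (cluster_name) (uim_sfws_requests_count)'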
Interaction Metrics
The following table lists interaction metrics exposed via Prometheus APIs.
Table 8-4 Interaction Metrics Exposed via Prometheus APIs
Name | Type | Help Text | Notes |
---|---|---|---|
uim_sfws_capture_requests | Summary | Summary that tracks the duration of sfws capture requests. | This metric is observed for the CaptureInteraction request. The action can be CREATE or CHANGE. |
uim_sfws_process_requests | Summary | Summary that tracks the duration of sfws process requests. | This metric is observed for the ProcessInteraction request. The action can be PROCESS. |
uim_sfws_update_requests | Summary | Summary that tracks the duration of sfws update requests. | This metric is observed for the UpdateInteraction request. The action can be APPROVE, ISSUE, CANCEL, COMPLETE, or CHANGE. |
uim_sfws_requests | Summary | Summary that tracks the duration of sfws requests. | This metric is observed for the capture, process, and update interaction requests. |
Labels For All Interaction Metrics
The following table lists labels for all interaction metrics.
Table 8-5 Labels for All Metrics
Label Name | Sample Value |
---|---|
action | The values can be CREATE, CHANGE, APPROVE, CANCEL, and COMPLETE. |
Service Metrics
The following metrics are captured for completion of a business interaction.
Table 8-6 Service Metrics Captured for Completion of a Business Interaction
Name | Type | Help Text | Summary |
---|---|---|---|
uim_services_processed | Counter | Counter that tracks the number of services processed. | This metric is observed for suspend, resume, complete, and cancel of a service. |
Labels for all Service Metrics
A task metric has all the labels that a service metric has.
Table 8-7 Labels for All Service Metrics
Label | Sample Value | Notes | Source of Label |
---|---|---|---|
spec | VoipServiceSpec | The service specification name. | UIM Metric Label Name/Value |
status | IN_SERVICE | The service status. The values can be IN_SERVICE, SUSPEND, DISCONNECT, and CANCELLED. | UIM Metric Label Name/Value |
Generic Labels for all Metrics
Following are the generic labels for all metrics:
Table 8-8 Generic Labels for all Metrics
Label Name | Sample Value | Source of the Label |
---|---|---|
server_name | ms1 | Prometheus Kubernetes SD |
job | uimcn | Prometheus Kubernetes SD |
namespace | sr | Prometheus Kubernetes SD |
pod_name | ms1 | WebLogic Operator Pod Label |
weblogic_clusterName | uimcluster | WebLogic Operator Pod Label |
weblogic_clusterRestartVersion | v1 | WebLogic Operator Pod Label |
weblogic_createdByOperator | true | WebLogic Operator Pod Label |
weblogic_domainName | domain | WebLogic Operator Pod Label |
weblogic_domainRestartVersion | v1 | WebLogic Operator Pod Label |
weblogic_domainUID | quicksr | WebLogic Operator Pod Label |
Managing WebLogic Monitoring Exporter (WME) Metrics
UIM cloud native provides a sample Grafana dashboard that you can use to visualize WebLogic metrics available from a Prometheus data source.
You use the WebLogic Monitoring Exporter (WME) tool to expose WebLogic server metrics. WebLogic Monitoring Exporter is part of the WebLogic Kubernetes Toolkit. It is an open source project, based at: https://github.com/oracle/weblogic-monitoring-exporter. You can include WME in your UIM cloud native images. Once a UIM cloud native image with WME is generated, creating a UIM cloud native instance with that image automatically deploys a WME WAR file to the WebLogic server instances. While WME metrics are available through WME Restful Management API endpoints, UIM cloud native relies on Prometheus to scrape and expose these metrics. This version of UIM supports WME 1.3.0. See WME documentation for details on configuration and exposed metrics.
Generating the WME WAR File
To generate the WME WAR file and embed the sample exporter configuration, run the following commands:
mkdir -p ~/wme
cd ~/wme
curl -x $http_proxy -L https://github.com/oracle/weblogic-monitoring-exporter/releases/download/v1.3.0/wls-exporter.war -o wls-exporter.war
curl -x $http_proxy https://raw.githubusercontent.com/oracle/weblogic-monitoring-exporter/v1.3.0/samples/kubernetes/end2end/dashboard/exporter-config.yaml -o config.yml
jar -uvf wls-exporter.war config.yml
Deploying the WME WAR File
After the WME WAR file is generated and updated, you can deploy it as a custom application archive.
For details about deploying entities, see "Deploying Entities to a UIM WebLogic Domain". The following sample shows the corresponding appDeployments entry:
appDeployments:
Application:
'wls-exporter':
SourcePath: 'wlsdeploy/applications/wls-exporter.war'
ModuleType: war
StagingMode: nostage
PlanStagingMode: nostage
Target: '@@PROP:ADMIN_NAME@@ , @@PROP:CLUSTER_NAME@@'
Configuring the Prometheus Scrape Job for WME Metrics
Note:
In the basic_auth section, specify the WebLogic username and password.
Add the following job to the Prometheus scrape configuration:
- job_name: 'basewls'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: ['__meta_kubernetes_pod_annotation_prometheus_io_scrape']
action: 'keep'
regex: 'true'
- source_labels: [__meta_kubernetes_pod_label_weblogic_createdByOperator]
action: 'keep'
regex: 'true'
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
basic_auth:
username: weblogic_username
password: weblogic_password
Viewing WebLogic Monitoring Exporter Metrics in Grafana
WebLogic Monitoring Exporter metrics scraped by Prometheus can be made available for further processing and visualization. The UIM cloud native toolkit comes with sample Grafana dashboards to get you started with visualizations. The WebLogic server dashboard provides a view of WebLogic Monitoring Exporter metrics for one or more managed servers for a given instance in the selected project namespace.
Import the sample dashboard weblogic_dashboard.json file from https://github.com/oracle/weblogic-monitoring-exporter/blob/master/samples/kubernetes/end2end/dashboard into your Grafana environment by selecting Prometheus as the data source.