4 New Cloud Native Features
Learn about the new features in the Oracle Communications Billing and Revenue Management (BRM) cloud native deployment option.
Topics in this document:
- New Features in Cloud Native 15.1
- New Features in Cloud Native 15.0.1
- New Features in Cloud Native 15.0.0
BRM cloud native 15.1 includes the following enhancements:
BRM Cloud Native Pods Now Use Dynamic Volume Provisioning
BRM cloud native pods now use dynamic volume provisioning by default. However, you can modify one or more pods to use static volumes instead to meet your business requirements. To do so, add the createOption key to the override-values.yaml file for each pod for which you want to use static volumes and then redeploy your Helm charts.
For information about changing dynamic volume provisioning to static volume provisioning, see "Using Static Volumes" in BRM Cloud Native System Administrator's Guide.
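As a sketch only (the key path that encloses createOption differs for each pod and release; the section name below is a placeholder — see "Using Static Volumes" for the exact structure), an override-values.yaml entry might look like this:

```yaml
# override-values.yaml fragment for oc-cn-helm-chart (illustrative sketch).
# "examplePod" is a placeholder for the actual pod section in your chart.
ocbrm:
  examplePod:
    createOption: static
```

After adding the key for each pod that should use static volumes, redeploy your Helm charts for the change to take effect.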
Default Service Type Changes in BRM Cloud Native
The default service types of several BRM cloud native services have been changed.
Table 4-1 lists the cloud native services that are now deployed as ClusterIP by default, but were deployed as NodePort in previous releases.
Table 4-1 Services Now Deployed as ClusterIP
Service Name | Description of Change |
---|---|
brm-rest-services-manager | BRM REST Services Manager now creates a ClusterIP service by default. To deploy this service as NodePort instead, set the ocrsm.rsm.service.type key to NodePort in your override-values.yaml file for oc-cn-helm-chart. |
pdcrsm | This service is now deployed as ClusterIP by default. To deploy this service as NodePort instead, set the ocpdcrsm.service.type key to NodePort in your override-values.yaml file for oc-cn-helm-chart. |
wsm-tomcat | This service is now deployed as ClusterIP by default. To deploy this service as NodePort instead, set the ocbrm.wsm.deployment.tomcat.service.type key to NodePort in your override-values.yaml file for oc-cn-helm-chart. |
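The dotted key names in Table 4-1 expand to nested YAML in override-values.yaml. For example, this fragment restores the NodePort service type for all three services:

```yaml
# override-values.yaml fragment for oc-cn-helm-chart
ocrsm:
  rsm:
    service:
      type: NodePort
ocpdcrsm:
  service:
    type: NodePort
ocbrm:
  wsm:
    deployment:
      tomcat:
        service:
          type: NodePort
```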
Table 4-2 lists the cloud native NodePort services that are no longer created by default. For most of these, you can use a corresponding ClusterIP service instead.
Table 4-2 NodePort Services No Longer Created By Default
Service Name | Description of Changes |
---|---|
bcws-domain-admin-server-ext | Billing Care REST API no longer creates the NodePort external service (bcws-domain-admin-server-ext) by default. You can use the ClusterIP service (bcws-domain-admin-server) instead. To configure Billing Care REST API to create the NodePort external service, set the ocbc.bcws.wop.adminChannelPort key to the NodePort where the admin-server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart. |
billingcare-domain-admin-server-ext | Billing Care no longer creates the NodePort external service (billingcare-domain-admin-server-ext) by default. You can use the ClusterIP service (billingcare-domain-admin-server) instead. To configure Billing Care to create the NodePort external service, set the ocbc.bc.wop.adminChannelPort key to the NodePort where the admin-server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart. |
boc-domain-admin-server-ext | Business Operations Center no longer creates the NodePort external service (boc-domain-admin-server-ext) by default. You can use the ClusterIP service (boc-domain-admin-server) instead. To configure Business Operations Center to create the NodePort external service, set the ocboc.boc.wop.adminChannelPort key to the NodePort where the admin-server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart. |
brmdomain-admin-server-ext | BRM Web Services Manager no longer creates the NodePort external service (brmdomain-admin-server-ext) by default. You can use the ClusterIP service (brmdomain-admin-server) instead. To configure BRM Web Services Manager to create the NodePort external service, set the ocbrm.wsm.deployment.weblogic.adminServerNodePort key to the NodePort where the admin-server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart. |
pcc-domain-admin-server-ext | Pipeline Configuration Center no longer creates the NodePort external service (pcc-domain-admin-server-ext) by default. You can use the ClusterIP service (pcc-domain-admin-server) instead. To configure Pipeline Configuration Center to create the NodePort external service, set the ocpcc.pcc.wop.adminChannelPort key to the NodePort where the admin-server's HTTP service will be accessible, in your override-values.yaml file for oc-cn-helm-chart. |
pdc-service | PDC no longer creates the NodePort external service (pdc-service) by default. To configure PDC to create the NodePort external service, set the ocpdc.service.type key to NodePort in your override-values.yaml file for oc-cn-op-job-helm-chart. |
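For example, to re-create the Billing Care NodePort external service, the dotted key expands to nested YAML in override-values.yaml; the port number below is only an example:

```yaml
# override-values.yaml fragment for oc-cn-helm-chart
ocbc:
  bc:
    wop:
      adminChannelPort: 30715   # example NodePort for the admin-server's HTTP service
```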
All Cloud Native Containers Now Support Requests and Limits
All BRM cloud native containers now support the setting of minimum and maximum CPU and memory values. This feature helps prevent containers from consuming too many resources, which can lead to system crashes.
For more information, see "Setting Minimum and Maximum CPU and Memory Values" in BRM Cloud Native System Administrator's Guide.
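The values follow the standard Kubernetes requests/limits shape. As a sketch (the key path that encloses resources for each BRM container in override-values.yaml is release-specific; see the guide for the exact keys):

```yaml
# Illustrative resource settings; attach under the appropriate
# container section of your override-values.yaml.
resources:
  requests:
    cpu: 500m      # minimum CPU reserved for the container
    memory: 1Gi    # minimum memory reserved for the container
  limits:
    cpu: "2"       # maximum CPU the container may consume
    memory: 4Gi    # maximum memory the container may consume
```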
BRM Cloud Native Now Supports Using External Kubernetes Secrets
You can now create BRM cloud native KeyStore certificates as Kubernetes Secrets in two different ways:
- Pre-create the BRM cloud native KeyStore certificates as Secrets in the Kubernetes cluster. Pre-creating the Secrets eases maintenance because you can replace KeyStore certificates without updating the values.yaml files and performing a Helm install or upgrade.
- Have the BRM cloud native installer create the Kubernetes Secrets for you. In this case, you store the KeyStore certificates in the cloud native Helm charts, and the Helm install or upgrade process creates them as Secrets in the Kubernetes cluster.
In previous releases, you could only store the certificates in the Helm charts.
For more information, see "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide.
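As a sketch of the pre-create approach (the Secret name and data key your charts expect are deployment-specific; see the guide for the expected names):

```yaml
# Hypothetical pre-created KeyStore Secret
apiVersion: v1
kind: Secret
metadata:
  name: brm-keystore-secret     # placeholder name
type: Opaque
data:
  keystore.jks: <base64-encoded KeyStore>   # placeholder value
```

Because the Secret lives in the cluster rather than in the Helm charts, you can later replace the certificate by updating the Secret alone.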
Improved Processing of Realtime Pipeline Semaphore Files
BRM cloud native now processes semaphore files more efficiently when your deployment contains multiple replicas of the realtime-pipe pod. Semaphore files are now processed by one replica at a time: when a semaphore file is placed in the common-semaphore PVC directory, one realtime-pipe replica picks up and locks the file. After it finishes processing the file, the replica unlocks it so that the next replica can pick up, lock, and process it. This continues until all replicas have processed the semaphore file.
In previous releases, all realtime-pipe replicas processed the semaphore file in parallel.
You enable realtime-pipe to process semaphore files one replica at a time by setting the TimeBasedChecking key to true in your wirelessrealtime-reg-config ConfigMap. By default, this key is set to true.
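As an illustrative sketch only (in practice the key lives inside the pipeline registry content stored in the wirelessrealtime-reg-config ConfigMap, so the exact layout depends on your registry file):

```yaml
# Sketch: enabling one-replica-at-a-time semaphore processing.
# The real ConfigMap embeds this key inside its registry data;
# this fragment shows only the key and its default value.
TimeBasedChecking: true
```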
The realtime-pipeline pod uses Kubernetes liveness probes to monitor the container and automatically restart it when problems occur. You configure how often the liveness probe checks the container and when to perform restarts using the following keys in your oc-cn-helm-chart/templates/realtime_pipeline.yaml file:
- LivenessProbe.initialDelaySeconds: Specifies how long to wait, in seconds, before performing the first liveness probe. Ensure this value is equal to or longer than the semaphore processing time. The default is 10.
- LivenessProbe.periodSeconds: Specifies the interval, in seconds, between liveness probes. Ensure this value is equal to or longer than the semaphore processing time. The default is 10.
- LivenessProbe.failureThreshold: Specifies the number of times the liveness probe can fail before triggering a container restart. The default is 2.
Note:
If you set periodSeconds to a value less than the semaphore processing time, you must set failureThreshold to a higher value to prevent unnecessary restarts of the realtime-pipe pod.
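Taken together, the defaults above correspond to this values fragment (a sketch; the enclosing key path in override-values.yaml is release-specific):

```yaml
# Liveness probe settings consumed by
# oc-cn-helm-chart/templates/realtime_pipeline.yaml (defaults shown)
LivenessProbe:
  initialDelaySeconds: 10   # wait before the first probe; >= semaphore processing time
  periodSeconds: 10         # interval between probes; >= semaphore processing time
  failureThreshold: 2       # probe failures allowed before a container restart
```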
ECE Cloud Native Now Allows Configuration of Journal Space
You can now control the amount of space that the Oracle Coherence Elastic Data journals use in your ECE cloud native deployment. By default, ECE cloud native creates journal space sized for small-to-medium deployments of up to 20,000 transactions per second (TPS). For larger deployments, you may need to increase the journal space.
For more information, see "Managing ECE Journal Storage" in BRM Cloud Native System Administrator's Guide.
oc-cn-init-db-helm-chart Can Now Configure the Database For You
The oc-cn-init-db-helm-chart can now automatically configure your BRM database for demonstration or development systems. To do so, you must set the following keys in your override-values.yaml file before deploying the Helm chart:
- db.user: The user name of the system administrator.
- db.password: The password for the system administrator.
- db.port: The port number of the database server.
Note:
For production systems, your database administrator must create the BRM database manually.
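For example, a minimal override-values.yaml fragment for oc-cn-init-db-helm-chart might look like this (all values are examples):

```yaml
# override-values.yaml fragment for oc-cn-init-db-helm-chart
db:
  user: system            # example system administrator user name
  password: examplePass1  # example; use your administrator's password
  port: 1521              # example database listener port
```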
For more information, see "Deploying BRM with a New Database Schema" in BRM Cloud Native Deployment Guide.
New Features in Cloud Native 15.0.1
BRM cloud native 15.0.1 includes the following enhancements:
PDC Deployment Creates Default PDC Groups
When you deploy the oc-cn-op-job-helm-chart Helm chart to create the PDC WebLogic domain, it now creates the following PDC groups:
- PricingDesignAdmin: This group's users have administrative privileges on PDC. They can perform operations on all PDC UI screens, pricing components, and setup components.
- PricingAnalyst: This group's users have administrative privileges for pricing components and view-only privileges for setup components.
- PricingReviewer: This group's users have view-only privileges for all pricing and setup components.
When you create PDC users, you add them to one of these groups to control their access to PDC operations. For example, to create a user that can perform all operations in PDC, you can configure these new values.yaml keys for the oc-cn-op-job-helm-chart Helm chart:
```yaml
ocpdc:
  wop:
    users:
      name: JohnDoeAdmin
      description: New Pricing Administrator
      password: EncodedPassword
      groups: PricingDesignAdmin
```
For information, see "Creating PDC Users" in BRM Cloud Native System Administrator's Guide.
PCC Now Supports WebLogic Kubernetes Operator
PCC now supports WebLogic Kubernetes Operator for its cloud native deployment.
For more information about configuring and testing PCC's integration with WebLogic Kubernetes Operator in a cloud native environment (CNE) deployment, see "Configuring Pipeline Configuration Center" in BRM Cloud Native Deployment Guide.
Cloud Native Documentation Contains New Instructions
The BRM cloud native documentation has been updated to include instructions for doing the following:
- Optimizing the PDC system's CPU and memory usage for requests and limits during runtime. See "Using Resource Limits in PDC Domain Pods" in BRM Cloud Native System Administrator's Guide.
- Changing the language displayed in your PDC UI screens, XML import files, and XML export files. See "Managing Language Packs in PDC Pods" in BRM Cloud Native System Administrator's Guide.
- Migrating PDC from an on-premises release to a cloud native release. See "Migrating from PDC On Premises to PDC Cloud Native" in BRM Cloud Native Deployment Guide.
- Setting up ingress and egress flows for ECE cloud native deployments. See "Setting Up ECE Cloud Native Ingress and Egress Flows" in BRM Cloud Native Deployment Guide.
- Performing a zero-downtime upgrade of a BRM cloud native active-active disaster recovery system. See "Performing Zero-Downtime Upgrades of Disaster Recovery Cloud Native Systems" in BRM Cloud Native Deployment Guide.
- Creating, editing, deleting, and retrieving custom storable classes and storable class fields in your BRM cloud native database. See "Creating Custom Fields and Storable Classes" in BRM Cloud Native System Administrator's Guide.
Note:
These instructions apply to Release 15.0.0 or later.
New Features in Cloud Native 15.0.0
BRM cloud native 15.0.0 includes the following enhancements:
Images Now Available on Oracle Container Registry
BRM cloud native images are now available on Oracle Container Registry (https://container-registry.oracle.com).
For more information, see "Pulling BRM Images from the Oracle Container Registry" in BRM Cloud Native Deployment Guide.
BRM Cloud Native Enhancements
BRM cloud native includes the following enhancements:
- BRM cloud native now supports Podman for building container images.
- The real-time and batch rating engines now support SSL-enabled communication with BRM.
- BRM cloud native now supports the BRM SDK, which allows you to make and compile customizations for your BRM system.
For more information, see BRM Cloud Native Deployment Guide.
Changing ECE Configuration During Runtime
You can now use Kubernetes jobs to do the following during ECE runtime without requiring you to restart ECE pods:
- Reload configuration information from the charging-settings.xml file into the ECE cache
- Change grid-level logging options
For more information, see "Changing the ECE Configuration During Runtime" in BRM Cloud Native System Administrator's Guide.
PDC Cloud Native Deployment Enhancements
PDC cloud native includes the following enhancements:
- Deploying and configuring Pricing Design Center cloud native services no longer requires manual post-installation tasks. These tasks are now automated in the PDC cloud native deployment process.
- PDC cloud native images have been decoupled from Fusion Middleware images, reducing the overall size of the PDC images. To use PDC cloud native, you now download the Fusion Middleware images and provide their location in your PDC override-values.yaml file.
For more information, see "Configuring Pricing Design Center" in BRM Cloud Native Deployment Guide.