10 Configuring ECE Services

Learn how to configure Oracle Communications Elastic Charging Engine (ECE) services by configuring and deploying the ECE Helm chart.

For information about performing administrative tasks on your ECE cloud native services, see "Administering ECE Cloud Native Services" in BRM Cloud Native System Administrator's Guide.

Before installing the ECE Helm chart, you must first publish the metadata, config, and pricing data from the PDC pod.

Note:

Kubernetes looks for the CPU limit setting for pods. If it's not set, Kubernetes allocates a default value of 1 CPU per pod, which causes CPU overhead and Coherence scalability issues. To prevent this from happening, override each ECE pod's CPU limit to be the maximum CPU available on the node.
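For example, a hypothetical override-values.yaml fragment that pins the CPU limit for one component might look like this (the component name, key path, and values are illustrative; confirm the resources key path for your ECE chart version):

   ecs:
      resources:
         limits:
            cpu: "16"       # set to the maximum CPU available on the node
         requests:
            cpu: "16"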

Adding Elastic Charging Engine Keys

Table 10-1 lists the keys that directly impact ECE deployment. Add these keys to your override-values.yaml file for oc-cn-ece-helm-chart. In the table, component-name should be replaced with the name of the ECE component, such as emgateway, radiusgateway, diametergateway, httpgateway, and ratedeventformatter.

Table 10-1 Elastic Charging Engine Keys

Key | Path in values.yaml File | Description

imagePullPolicy

container

The default value is IfNotPresent, which pulls the image only if it is not already present. Valid values are IfNotPresent and Always.

containerPort

container

The port number that is exposed by this container.

createWalletsAsSecrets

N/A

Whether to create KeyStore certificates and wallets as Secrets during the ECE deployment process (true) or to use pre-created Kubernetes Secrets (false).

See "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide for more information.

chargingSettingManagementPath

volume

The location of the management folder, which contains the charging-settings.xml, test-tools.xml, and migration-configuration.xml files.

The default is /home/charging/opt/ECE/oceceserver/config/management.

chargingSettingPath

volume

The location of the configuration folder for ECE. The default is /home/charging/opt/ECE/oceceserver/config.

extECESecret

secretEnv

The name of the external Kubernetes Secret for ECE.

See "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide for more information.

walletPassword

secretEnv

The string password for opening the wallet.

JMSQUEUEPASSWORD

secretEnv

The password for the JMS queue, which is stored under the key jms.queue.notif.pwd in the wallet.

RADIUSSHAREDSECRET

secretEnv

The RADIUS secret password, which is stored as radius.secret.pwd in the wallet.

BRMGATEWAYPASSWORD

secretEnv

The BRM Gateway password.

PDCPASSWORD

secretEnv

The PDC password, which is stored as pdc.pwd in the wallet.

Note: This key must match the pdcAdminUserPassword key in the override-values.yaml file for oc-cn-helm-chart.

PDCKEYSTOREPASSWORD

secretEnv

The PDC KeyStore password, which is stored as pdc.keystore.pwd in the wallet.

Note: This key must match the keyStoreIdentityStorePass key in the override-values.yaml file for oc-cn-helm-chart.

PERSISTENCEDATABASEPASSWORD

secretEnv

The database schema user password. This user is created using ece-persistence-job if it doesn't exist in the database.

ECEHTTPGATEWAYSERVERSSLKEYSTOREPASSWORD

secretEnv

The server SSL KeyStore password for the HTTP Gateway.

BRM_SERVER_WALLET_PASSWD

secretEnv

The password to open the BRM server wallet.

BRM_ROOT_WALLET_PASSWD

secretEnv

The root wallet password of the BRM wallet.

BRMDATABASEPASSWORD

secretEnv

The password for the BRM database.

If you are connecting ECE to a BRM multischema database, use these entries instead:

BRMDATABASEPASSWORD:
   - schema: 1
     PASSWORD: Password
   - schema: 2
     PASSWORD: Password

where:

  • schema is the schema number. Enter 1 for the primary schema, 2 for the secondary schema, and so on.
  • PASSWORD is the schema password.

SSLENABLED

sslconnectioncertificates

Whether SSL is enabled in ECE (true) or not (false).

DNAME

sslconnectioncertificates

The domain name. For example: "CN=Admin, OU=Oracle Communication Application, O=Oracle Corporation, L=Redwood Shores, S=California, C=US"

SSLKEYSTOREVALIDITY

sslconnectioncertificates

The validity of the KeyStore, in days. For example, a value of 200 means the KeyStore is valid for 200 days.

runjob

job.sdk

Whether the SDK job needs to be run as part of the deployment (true) or not (false). The default value is false.

If set to true, a default SDK job is run as part of the Helm installation or upgrade.

schemaLoader.resources

job

The minimum and maximum CPU and memory resources for ece-persistence-job and ece-persistence-upgrade-job. See "Setting Minimum and Maximum CPU and Memory Values" in BRM Cloud Native System Administrator's Guide.

serviceFqdn

emgateway

The fully qualified domain name (FQDN) of the EM Gateway service. The default is ece-emg.

tlsVersion

customerUpdater.customerUpdaterList.oracleQueueConnectionConfiguration

The TLS version to support, such as 1.2 or 1.3.

extBRMDBSSLWalletSecret

customerUpdater.customerUpdaterList.oracleQueueConnectionConfiguration.extBRMDBSSLWalletSecret

The name of the external Kubernetes Secret containing the SSL TrustStore.

See "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide for more information.

replicas

component-name.component-nameList

The number of replicas to create when deploying the chart. The default is 3 for the ecs component and 1 for all other components.

coherenceMemberName

component-name.component-nameList

The Coherence member name under which this component will be added to the Coherence cluster.

jmxEnabled

component-name.component-nameList

Whether the component is JMX-enabled (true) or not (false).

coherencePort

component-name.component-nameList

(Optional) The Coherence port used by the component.

jvmGCOpts

component-name.component-nameList

The JVM options for the component, such as garbage collection settings and minimum and maximum memory.

jvmJMXOpts

component-name.component-nameList

The JMX-related JVM options for the component.

jvmCoherenceOpts

component-name.component-nameList

The Coherence-related JVM options for the component, such as the override file and cache configuration file.

jvmOpts

component-name.component-nameList

Any additional JVM arguments for the component. This field is empty by default.

extServerSSLKeyStoreSecret

extHttpIdentityKeystoreSecret

extHttpTruststoreSecret

extNrfPublicKeyLocationSecret

httpgateway

The names of the external Kubernetes Secrets for the HTTP Gateway.

See "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide for more information.

labels

charging

The label for all pods in the deployment. The default value is ece.

jmxport

charging

The JMX port exposed by ece, which can be used to log in to JConsole. The default is 31022.

terminationGracePeriodSeconds

charging

Used for graceful shutdown of the pods. The default value is 180 seconds.

persistenceEnabled

charging

Whether to persist the ECE cache data into the Oracle database. The default is true.

See "Enabling Persistence in ECE" in BRM Cloud Native System Administrator's Guide for more information.

hpaEnabled

charging

Whether to enable autoscaling using Kubernetes Horizontal Pod Autoscaler.

See "Setting Up Autoscaling of ECE Pods" in BRM Cloud Native System Administrator's Guide for more information.

timeoutSurvivorQuorum

charging

The minimum number of cluster members that must remain in the cluster, to avoid data loss, when the cluster service is terminating suspect members. The default is 3.

To calculate the minimum number, use this formula:

(chargingServerWorkerNodes – 1) * (sum of all ecs pods/chargingServerWorkerNodes)
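
For example, with 3 charging server worker nodes and 6 ecs pods in total, the minimum is (3 – 1) * (6 / 3) = 4.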

chargingServerWorkerNodes

charging

The number of charging server worker nodes. The default is 3.

primary.*

charging.cluster

The details about your primary cluster:
  • clusterName: The name of the primary cluster.
  • eceServiceName: The ECE service name that creates the Kubernetes cluster with all of the ECE components in the primary cluster. The default is ece-server.
  • eceServicefqdnOrExternalIP: The fully qualified domain name (FQDN) or external IP address of the ECE service running in the primary cluster. For example: ece-server.NameSpace.svc.cluster.local.

secondary.*

charging.cluster

The details about your secondary cluster:
  • clusterName: The name of the secondary cluster.
  • eceServiceName: The ECE service name that creates the Kubernetes cluster with all of the ECE components in the secondary cluster. The default is ece-server.
  • eceServicefqdnOrExternalIP: The fully qualified domain name (FQDN) or external IP address of the ECE service running in the secondary cluster. For example: ece-server.NameSpace.svc.cluster.local.

extPersistenceDBSSLWalletSecret

charging.server.connectionConfigurations.OraclePersistenceConnectionConfigurations

The name of the external Kubernetes Secret containing the ECE database SSL TrustStore.

See "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide for more information.

extKafkaTrustStoreSecret

charging.kafkaConfigurations.kafkaConfigurationList

The name of the external Kubernetes Secret containing the Kafka SSL TrustStore.

See "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide for more information.

<tags>

migration

The different tags indicating the values that will be stored under migration-configuration.xml. The tag names are the same as the ones used in the migration-configuration.xml file for ease of mapping.

extPDCKeyStoreSecret

migration.loader.pricingUpdater

The name of the external Kubernetes Secret containing the server PDC SSL KeyStore.

See "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide for more information.

<tags>

testtools

The different tags indicating the values that will be stored under test-tools.xml. The tag names are the same as the ones used in the test-tools.xml file for ease of mapping.

<module>

log4j2.logger

The logging level for each ECE module.

<tags>

eceproperties

The different tags indicating the values that will be stored under ece.properties. The tag names are the same as the ones used in the ece.properties file for ease of mapping.

NotificationQueue.extJMSKeyStoreSecret

BRMGatewayNotificationQueue.extJMSKeyStoreSecret

DiameterGatewayNotificationQueue.extJMSKeyStoreSecret

JMSConfiguration

The names of the external Kubernetes Secrets for the gateway queues.

See "About Using External Kubernetes Secrets" in BRM Cloud Native System Administrator's Guide for more information.

<tags>

JMSConfiguration

The different tags indicating the values that will be stored under JMSConfiguration.xml. The tag names are the same as the ones used in the JMSConfiguration.xml file for ease of mapping.

name

secretEnv

The user-defined name for the Secrets. The default is secret-env.

SSLENABLED

sslconnectioncertificates

Whether to install ECE in SSL mode (true) or not (false). The default is true.

external.*

pv

The details about the external persistent volume (PV).

  • name: The name of the external PV. The default is external-pv.

  • createOption: By default, the PV uses dynamic volumes. To use a static volume instead, you must add the createOption key. See "Using Static Volumes" in BRM Cloud Native System Administrator's Guide.

  • accessModes: The access mode for the PV. The default is ReadWriteMany.

  • capacity: The maximum capacity of the external PV. The default is 500Mi.

logs.*

pvc

The details about the persistent volume claim (PVC) for log files.

  • name: The name for the ECE log files. The default is logs-pv.

  • accessModes: The access mode for the PVC. The default is ReadWriteMany.

  • storage: The initial storage space required to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods are not initialized. The default is 500Mi.

  • createOption: By default, the PVC uses dynamic volumes. To use a static volume instead, you must add the createOption key. See "Using Static Volumes" in BRM Cloud Native System Administrator's Guide.

brmconfig.*

pvc

The details about the PVC for BRM configuration files.

  • name: The name of the BRM Config PVC, in which all BRM configuration files such as the payload configuration file are exposed outside of the pod. The default is brmconfig-pvc.

  • accessModes: The access mode for the PVC. The default is ReadWriteMany.

  • storage: The initial storage space required to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods are not initialized. The default is 500Mi.

  • createOption: By default, the PVC uses dynamic volumes. To use a static volume instead, you must add the createOption key. See "Using Static Volumes" in BRM Cloud Native System Administrator's Guide.

sdk.*

pvc

The details about the PVC for the SDK files.

  • name: The name for the SDK PVC, in which all of the SDK files such as the configuration, sample script, and source files are exposed to the user. The default is sdk-pvc.

  • accessModes: The access mode for the PVC. The default is ReadWriteMany.

  • storage: The initial storage space required to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods are not initialized. The default is 500Mi.

  • createOption: By default, the PVC uses dynamic volumes. To use a static volume instead, you must add the createOption key. See "Using Static Volumes" in BRM Cloud Native System Administrator's Guide.

cdrformatter.*

pvc

The details about the PVC for the CDR formatter files.

  • name: The name for the CDR formatter PVC. The default is cdrformatter-pvc.

  • accessModes: The access mode for the PVC. The default is ReadWriteMany.

  • storage: The initial storage space required to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods are not initialized. The default is 500Mi.

  • createOption: By default, the PVC uses dynamic volumes. To use a static volume instead, you must add the createOption key. See "Using Static Volumes" in BRM Cloud Native System Administrator's Guide.

wallet.*

pvc

The details about the PVC for wallet files.

  • name: The name for the wallet PVC, in which the wallet directory will be stored and shared by all of the ecs pods. The default is ece-wallet-pvc.

  • accessModes: The access mode for the PVC. The default is ReadWriteMany.

  • storage: The initial storage space required to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods are not initialized. The default is 500Mi.

  • createOption: By default, the PVC uses dynamic volumes. To use a static volume instead, you must add the createOption key. See "Using Static Volumes" in BRM Cloud Native System Administrator's Guide.

external.*

pvc

The details about the PVC for external files.

  • name: The name for the external PVC. The default is external-pvc.

  • accessModes: The access mode for the PVC. The default is ReadWriteMany.

  • storage: The initial storage space required to create this PVC. If the specified storage is not available on the machine, the PVC is not created and the pods are not initialized. The default is 500Mi.

name

storageClass

The name of the storage class for dynamic volume provisioning.

Enabling SSL in Elastic Charging Engine

Note:

For more information about securing communications between ECE and external applications, see "Securing ECE Communication" in BRM Cloud Native System Administrator's Guide.

To complete the configuration for SSL setup in ECE:

  1. Configure the SSL KeyStores by doing one of the following:

    • Generate the Identity and Trust KeyStores and then move your files, such as identity.p12 and trust.p12, under the oc-cn-ece-helm-chart/ece/keystore directory.

    • Pre-create the Kubernetes Secret for the Identity and Trust KeyStore files and set the secretEnv.extECESecret key in your override-values.yaml file for oc-cn-ece-helm-chart.

      For more information, see "Managing KeyStore Certificates and Wallets" in BRM Cloud Native System Administrator's Guide.

  2. Set these keys in the override-values.yaml file for oc-cn-ece-helm-chart, as shown in the sketch after this list:

    • sslconnectioncertificates.SSLENABLED: Set this to true.

    • sslEnabled: Set this to true in emGatewayConfigurations, httpGatewayConfigurations, and BRMConnectionConfiguration.

    • migration.pricingUpdater.keyStoreLocation: Set this to /home/charging/opt/ECE/oceceserver/config/client.jks.

    • charging.brmWalletServerLocation: Set this to /home/charging/wallet/brmwallet/server/cwallet.sso.

    • charging.brmWalletClientLocation: Set this to /home/charging/wallet/brmwallet/client/cwallet.sso.

    • charging.brmWalletLocation: Set this to /home/charging/wallet/brmwallet.

    • charging.emGatewayConfigurations.emGatewayConfigurationList.emGateway1Config.wallet: Set this to the BRM wallet location.

    • charging.emGatewayConfigurations.emGatewayConfigurationList.emGateway2Config.wallet: Set this to the BRM wallet location.

    • charging.radiusGatewayConfigurations.wallet: Set this to the BRM wallet location.

    • charging.connectionConfigurations.BRMConnectionConfiguration.brmwallet: Set this to the BRM wallet location.
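
      For example, the top-level keys from this list might look like the following in override-values.yaml (a sketch; the nesting of the per-gateway sslEnabled and wallet keys depends on your chart version and is omitted here):

      sslconnectioncertificates:
         SSLENABLED: "true"
      migration:
         pricingUpdater:
            keyStoreLocation: "/home/charging/opt/ECE/oceceserver/config/client.jks"
      charging:
         brmWalletLocation: "/home/charging/wallet/brmwallet"
         brmWalletServerLocation: "/home/charging/wallet/brmwallet/server/cwallet.sso"
         brmWalletClientLocation: "/home/charging/wallet/brmwallet/client/cwallet.sso"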

  3. Deploy your ECE cloud native services by following the instructions in "Deploying BRM Cloud Native Services".

Connecting ECE Cloud Native to an SSL-Enabled Database

The steps for connecting ECE cloud native to an SSL-enabled database depend on where you save your SSL certificates for various components, such as the Kafka DM, WebLogic Server, PDC, and the persistence database. The certificates can be stored in the following locations:
  • The ECE Helm chart. In this case, the ECE Helm chart creates the certificates as Kubernetes Secrets during the deployment process.

  • The ECE Persistent Volumes (PVs)

You specify where the SSL certificates are located by using the createWalletsAsSecrets key in your override-values.yaml file for oc-cn-ece-helm-chart. Set the key to true if your SSL certificates are in the ECE Helm chart, and to false if they are in PVs.

To connect your ECE cloud native services to an SSL-enabled Oracle database:

  1. If your SSL certificates are in the ECE PVs (createWalletsAsSecrets is false), do the following:
    1. Prepare for persistence schema creation.

      1. Go to the oc-cn-ece-helm-chart directory, and create a directory named secrets/db_wallets/ece_ssl_db_wallet/schema1.

      2. Save the ECE SSL database wallet to the schema1 directory.

      3. From the oc-cn-ece-helm-chart directory, grant the necessary permissions to the secrets directory:

        chmod -R 775 secrets
      4. (For multischema systems only) Create a directory named schema2 in the ece_ssl_db_wallet directory and copy the ECE SSL database wallet to the schema2 directory.

    2. Configure the SSL database wallets in the external volume mount.

      1. Go to the external volume mount location (external-pvc).

      2. Create a directory named ece_ssl_db_wallet/schema1.

      3. Save the ECE SSL database wallet to the ece_ssl_db_wallet/schema1 directory.

      4. From the external volume mount location, create a directory named brm_ssl_db_wallet/schema1.

      5. Save the BRM SSL database wallet to the brm_ssl_db_wallet/schema1 directory.

      6. From the external volume mount location, grant the necessary permissions to both new directories:

        chmod -R 775 ece_ssl_db_wallet brm_ssl_db_wallet
      7. (For multischema systems only) Create a schema2 directory inside both the ece_ssl_db_wallet and brm_ssl_db_wallet directories. Then, copy the ECE SSL certificates to the ece_ssl_db_wallet/schema2 directory, and the BRM SSL certificates to the brm_ssl_db_wallet/schema2 directory.

  2. If your SSL certificates are in the ECE Helm chart (createWalletsAsSecrets is true), do the following:
    1. Place your ECE certificate files in the appropriate oc-cn-ece-helm-chart/secrets/db_wallets/ece_ssl_db_wallet/scheman directory, where n is the schema number.

    2. Place your BRM certificate files in the appropriate oc-cn-ece-helm-chart/secrets/db_wallets/brm_ssl_db_wallet/scheman directory, where n is the schema number.

  3. Configure ECE for an SSL-enabled Oracle persistence database.

    Under the charging.connectionConfigurations.OraclePersistenceConnectionConfigurations section, set the following keys:

    • dbSSLEnabled: Set this to true.

    • dbSSLType: Set this to the type of SSL connection required for connecting to the database: oneway, twoway, or none.

    • sslServerCertDN: Set this to the SSL server certificate distinguished name (DN). The default is DC=local,DC=oracle,CN=pindb.

    • trustStoreLocation:
      1. If createWalletsAsSecrets is false, set this to /home/charging/ext/ece_ssl_db_wallet/schema1/cwallet.sso.

      2. If createWalletsAsSecrets is true, set this to certificate_file_name.

    • trustStoreType: Set this to the TrustStore file type: SSO or PKCS12.

  4. Configure customerUpdater for an SSL-enabled Oracle AQ database queue.

    Under the customerUpdater.customerUpdaterList.oracleQueueConnectionConfiguration section, set the following keys:

    • dbSSLEnabled: Set this to true.

    • dbSSLType: Set this to the type of SSL connection required for connecting to the database: oneway, twoway, or none.

    • sslServerCertDN: Set this to the SSL server certificate distinguished name (DN). The default is DC=local,DC=oracle,CN=pindb.

    • trustStoreLocation:
      1. If createWalletsAsSecrets is false, set this to /home/charging/ext/brm_ssl_db_wallet/schema1/cwallet.sso.

      2. If createWalletsAsSecrets is true, set this to certificate_file_name.

    • trustStoreType: Set this to the TrustStore file type: SSO or PKCS12.

    Note:

    For database connectivity, ECE supports only the database service name and not the database SID. Therefore, set the following keys to the database service name:

    • charging.connectionConfigurations.OraclePersistenceConnectionConfigurations.sid

    • customerUpdater.customerUpdaterList.oracleQueueConnectionConfiguration.sid

  5. Configure your Oracle database to connect to the SSL-enabled BRM and ECE databases:

    1. Copy the tnsnames.ora and sqlnet.ora files from your SSL database host to your ECE cloud native instance.

    2. On your ECE cloud native instance, go to the ECE Helm chart directory.

    3. Create the ora_files/ece and ora_files/brm directories.

    4. Copy the ECE database tnsnames.ora and sqlnet.ora files to the oc-cn-ece-helm-chart/ora_files/ece/ directory.

    5. In the oc-cn-ece-helm-chart/ora_files/ece/sqlnet.ora file, set the wallet location to /home/charging/opt/ECE/oceceserver/config/ece_ssl_db_wallet/schema1:

      WALLET_LOCATION =
         (SOURCE =
           (METHOD = FILE)
           (METHOD_DATA =
             (DIRECTORY = /home/charging/opt/ECE/oceceserver/config/ece_ssl_db_wallet/schema1)
             )
         )
    6. Copy the BRM database tnsnames.ora and sqlnet.ora files to the oc-cn-ece-helm-chart/ora_files/brm directory.

    7. In the oc-cn-ece-helm-chart/ora_files/brm/sqlnet.ora file, set the wallet location to /home/charging/opt/ECE/oceceserver/config/brm_ssl_db_wallet/schema1:

      WALLET_LOCATION =
         (SOURCE =
           (METHOD = FILE)
           (METHOD_DATA =
             (DIRECTORY = /home/charging/opt/ECE/oceceserver/config/brm_ssl_db_wallet/schema1)
             )
         )
    8. From the ECE Helm chart directory, grant the permissions for the ora_files directory:

      chmod -R 775 ora_files/
    9. Copy the ora_files directory to the external PV mount location.

  6. If createWalletsAsSecrets is true, do the following in the ECE Cloud Native environment:
    1. Place the SSL Certificates in the following directory structure inside the Helm chart:
      • secrets/jms/brmgateway/

      • secrets/jms/diametergateway/

      • secrets/jms/ecs/

      • secrets/kafka/

      • secrets/pdc/

    2. If the SSL certificates are deployed as external secrets, provide the respective secret names and certificate keys in your values.yaml file or in an override file:
      secretEnv:
         extECESecret: external_secret_name
      customerUpdater:
         customerUpdaterList:
            - oracleQueueConnectionConfiguration:
                 trustStoreLocation: certificate_key
                 extBRMDBSSLWalletSecret: external_secret_name
      charging:
         connectionConfigurations:
            OraclePersistenceConnectionConfigurations:
               - trustStoreLocation: certificate_key
                 extPersistenceDBSSLWalletSecret: external_secret_name
         kafkaConfigurations:
            kafkaConfigurationList:
               - kafkaTrustStoreLocation: certificate_key
                 extKafkaTrustStoreSecret: external_secret_name
      migration:
         pricingUpdater:
            keyStoreLocation: certificate_key
            extPDCKeyStoreSecret: external_secret_name
      JMSConfiguration:
         NotificationQueue:
            - KeyStoreLocation: certificate_key
              extJMSKeyStoreSecret: external_secret_name
         BRMGatewayNotificationQueue:
            - KeyStoreLocation: certificate_key
              extJMSKeyStoreSecret: external_secret_name
         DiameterGatewayNotificationQueue:
            - KeyStoreLocation: certificate_key
              extJMSKeyStoreSecret: external_secret_name

      For more information on this configuration for the HTTP Gateway, see "Adding Elastic Charging Engine Keys".

About Elastic Charging Engine Volume Mounts

Note:

You must use a provisioner that supports ReadWriteMany access so that volumes can be shared between pods.

The ECE container requires Kubernetes volume mounts for third-party libraries. The third-party volume mount shares the third-party libraries required by ECE from the host system with the container file system. For the list of third-party libraries to download, see BRM Compatibility Matrix. Place the library files under the third-party volume mount.

The default configuration comes with a hostPath PersistentVolume. For more information, see "Configure a Pod to Use a PersistentVolume for Storage" in Kubernetes Tasks.

To use a different type of PersistentVolume, modify the oc-cn-ece-helm-chart/templates/ece-pvc.yaml file.
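
For reference, a minimal hostPath PersistentVolume of the kind the default configuration provides might look like this (the metadata name, capacity, and path are illustrative, not the chart's actual template values):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
     name: ece-external-pv
  spec:
     capacity:
        storage: 500Mi
     accessModes:
        - ReadWriteMany
     hostPath:
        path: /data/ece/external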

Loading Custom Diameter AVPs

To load custom Diameter AVPs into your ECE cloud native environment:

  1. Create a diameter directory inside external-pvc.

  2. Move the custom AVP file, such as dictionary_custom.xml, to the diameter directory.

  3. If you need to load a custom AVP after ECE is set up, restart the diametergateway pod by doing the following:

    1. Increment the diametergateway.diametergatewayList.restartCount key by 1.

    2. Run the helm upgrade command to update the release.
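
    For example (a sketch assuming a single diametergateway list entry whose restartCount was previously "0"):

    diametergateway:
       diametergatewayList:
          - restartCount: "1"

    helm upgrade EceReleaseName oc-cn-ece-helm-chart --namespace BrmNameSpace --values OverrideValuesFile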

Generating CDRs for Unrated Events

By default, the httpgateway pod sends all 5G usage requests to the ecs pod for online and offline charging.

You can configure httpgateway to convert some 5G usage requests into call detail record (CDR) files based on the charging type. You can then send the CDR files to roaming partners, a data warehousing system, or legacy billing systems for rating. For more information, see "About Generating CDRs" in ECE Implementing Charging.

You use the following to generate CDRs:

  • httpgateway pod

  • cdrgateway pod

  • cdrFormatter pod

  • CDR database

The cdrgateway and cdrFormatter pods can be scaled together, with one each per schema, or independently of the schemas. For more information, see "Scaling the cdrgateway and cdrFormatter Pods".

For details about the CDR format, see "CHF-CDR Format" in ECE 5G CHF Protocol Implementation Conformance Statement.

To set up ECE cloud native to generate CDRs:

  1. Configure your httpgateway pod to do the following:

    • Generate CDRs (set cdrGenerationEnabled to true).

    • Route offline charging requests to the ecs pod for rating (set rateOfflineCDRinRealtime to true) or to the cdrgateway pod for generating CDRs (set rateOfflineCDRinRealtime to false).

    • Route online charging requests to the ecs pod for rating (set generateCDRsForOnlineRequests to false) or to the cdrgateway pod for generating CDRs (set generateCDRsForOnlineRequests to true).

  2. Configure the cdrgateway pod to connect to the CDR database and do the following:

    • Generate individual CDR records for each request (set individualCdr to true) or aggregate multiple requests into a CDR record based on trigger criteria (set individualCdr to false). For information about the trigger criteria, see "About Trigger Types" in ECE Implementing Charging.

    • Store CDR records in an Oracle NoSQL database (set isNoSQLConnection to true) or in an Oracle database (set isNoSQLConnection to false).

  3. Configure the cdrFormatter pod to do the following:

    • Retrieve batches of CDR records from the CDR database and pass them to a specified cdrFormatter plug-in for processing.

    • Purge processed CDR records that are older than a specified number of days (configured in retainDuration) from the CDR database.

    • Purge orphan CDR records from the CDR database.

      Orphan CDR records are incomplete ones that are older than a specified number of seconds (configured in cdrOrphanRecordCleanupAgeInSec). Orphan CDR records can be created when your ECE system goes down due to maintenance or failure.

  4. Configure the cdrFormatter plug-in to do the following:

    • Write a specified number of CDR records to each CDR file (set maxCdrCount to the maximum number).

    • Create JSON-formatted CDR files and then store them in your file system (set enableDiskPersistence to true) or send them to your Kafka messaging service (set enableKafkaIntegration to true).

To generate CDRs in ECE cloud native, you configure the following entries in your override-values.yaml file. This example configures:

  • httpgateway to route both online and offline charging requests to cdrgateway.

  • cdrgateway to aggregate multiple requests into a CDR record and then store it in an Oracle NoSQL database.

  • cdrFormatter to retrieve CDR records in batches of 2500 from the Oracle NoSQL database and then send them to the default plug-in module. Immediately after CDR records are retrieved, cdrFormatter purges them from the database. It also purges orphan records older than 200 seconds from the database.

  • The cdrFormatter plug-in to create CDR files with a maximum of 20000 CDR records and an .out file name extension. It stores them in your file system in the /home/charging/cdr_input directory.

cdrFormatter:
 cdrFormatterList:
   - schemaNumber: "1"
     replicas: 1
     coherenceMemberName: "cdrformatter1"
     jmxEnabled: true
     jvmGCOpts: "-XX:+UnlockExperimentalVMOptions  -XX:+AlwaysPreTouch -XX:G1RSetRegionEntries=2048 -XX:ParallelGCThreads=10 -XX:+ParallelRefProcEnabled    -XX:MetaspaceSize=100M -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy -XX:-UseGCLogFileRotation -XX:+UseG1GC -XX:NumberOfGCLogFiles=99"
     jvmOpts: "-Xms16g -Xmx20g -Dece.metrics.http.service.enabled=true"
     cdrFormatterConfiguration:
       name: "cdrformatter1"
       clusterName: "BRM"
       primaryInstanceName: "cdrformatter1"
       partition: "1"
       isNoSQLConnection: "true"
       noSQLConnectionName: "noSQLConnection"
       connectionName: "oraclePersistence1brm"
       threadPoolSize: "6"
       retainDuration: "0"
       ripeDuration: "60"
       checkPointInterval: "6"
       maxPersistenceCatchupTime: "0"
       pluginPath: "ece-cdrformatter.jar"       
       pluginType: "oracle.communication.brm.charging.cdr.formatterplugin.internal.SampleCdrFormatterCustomPlugin"       
       pluginName: "cdrFormatterPlugin1"
       noSQLBatchSize: "2500"
       cdrStoreFetchSize: "2500"
        cdrOrphanRecordCleanupAgeInSec: "200"
       cdrOrphanRecordCleanupSleepIntervalInSec: "200"
       enableIncompleteCdrDetection: "false"       
 
cdrgateway:
  cdrgatewayList:
    - coherenceMemberName: "cdrgateway1"
      replicas: 6
      jmxEnabled: true
      jvmGCOpts: "-XX:+UnlockExperimentalVMOptions  -XX:+AlwaysPreTouch -XX:G1RSetRegionEntries=2048 -XX:ParallelGCThreads=10 -XX:+ParallelRefProcEnabled    -XX:MetaspaceSize=100M -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy -XX:-UseGCLogFileRotation -XX:+UseG1GC -XX:NumberOfGCLogFiles=99"
      jvmJMXOpts: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false"
      jvmCoherenceOpts: "-Dpof.config=charging-pof-config.xml -Dcoherence.override=charging-coherence-override-dev.xml  -Dcoherence.security=false -Dsecure.access.name=admin"
      jvmOpts: "-Xms6g -Xmx8g -Dece.metrics.http.service.enabled=true -DcdrServerCorePoolSize=64 -Dserver.sockets.metrics.bind-address=0.0.0.0 -Dece.metrics.http.port=19612"
      restartCount: "0"
      cdrGatewayConfiguration:
        name: "cdrgateway1"
        clusterName: "BRM"
        primaryInstanceName: "cdrgateway1"
        schemaNumber: "1"
        isNoSQLConnection: "true"
        noSQLConnectionName: "noSQLConnection"
        connectionName: "oraclePersistence1"
        cdrPort: "8084"
        cdrHost: "ece-cdrgatewayservice"
        individualCdr: "false"
        cdrServerCorePoolSize: "32"
        cdrServerMaxPoolSize: "256"
        enableIncompleteCdrDetection: "false"
        retransmissionDuplicateDetectionEnabled: "false"
 
httpgateway:
   cdrGenerationEnabled: "true"
   cdrGenerationStandaloneMode: "true"
   rateOfflineCDRinRealtime: "false"   
   generateCDRsForOnlineRequests: "true"
   httpgatewayList:
      - coherenceMemberName: "httpgateway1"
        replicas: 8
        maxreplicas: 8
        jvmGCOpts: "-XX:+AlwaysPreTouch -XX:G1RSetRegionEntries=2048 -XX:ParallelGCThreads=10 -XX:+ParallelRefProcEnabled    -XX:MetaspaceSize=100M -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy -XX:-UseGCLogFileRotation -XX:+UseG1GC -XX:NumberOfGCLogFiles=99"
        jvmOpts: "-Xms10g -Xmx14g -Djava.net.preferIPv4Addresses=true -Dece.metrics.http.service.enabled=true -Dserver.sockets.metrics.bind-address=0.0.0.0 -Dece.metrics.http.port=19612"
        httpGatewayConfiguration:
           name: "httpgateway1"
           processingThreadPoolSize: "200"
           processingQueueSize: "32768"
           kafkaBatchSize: "10"
 
   connectionConfigurations:
         OraclePersistenceConnectionConfigurations:
              retryCount: "1"
              retryInterval: "1"
              maxStmtCacheSize: "100"
              connectionWaitTimeout: "3000"
              timeoutConnectionCheckInterval: "3000"
              inactiveConnectionTimeout: "3000"
              databaseConnectionTimeout: "6000"
              persistenceInitialPoolSize: "4"
              persistenceMinPoolSize: "4"
              persistenceMaxPoolSize: "20"
              reloadInitialPoolSize: "0"
              reloadMinPoolSize: "0"
              reloadMaxPoolSize: "20"
              ratedEventFormatterInitialPoolSize: "6"
              ratedEventFormatterMinPoolSize: "6"
              ratedEventFormatterMaxPoolSize: "24"

charging:
   cdrFormatterPlugins:
     cdrFormatterPluginConfigurationList:
       cdrFormatterPluginConfiguration:
         name: "cdrFormatterPlugin1"
         tempDirectoryPath: "/tmp/tmp"
         doneDirectoryPath: "/home/charging/cdr_input"
         doneFileExtension: ".out"
         enableKafkaIntegration: "false"
         enableDiskPersistence: "true"
         maxCdrCount: "20000"
         staleSessionCauseForRecordClosingString: "PARTIAL_RECORD"
         enableStaleSessionCleanupCustomField: "false"

Scaling the cdrgateway and cdrFormatter Pods

To increase performance and throughput, you can scale the cdrgateway and cdrFormatter pods together, with one each per schema, or scale them independently of the schemas.

Figure 10-1 shows an example of scaled cdrgateway and cdrFormatter pods that have CDR storage in an Oracle Database. This example contains:

  • One cdrgateway multi-replica deployment for all ECE schemas. All cdrgateway replicas have a single CDR Gateway service acting as a front end to httpgateway.

  • One cdrFormatter single-replica deployment for each ECE schema. Each cdrFormatter reads persisted CDRs from its associated ECE schema.

httpgateway forwards CDR requests to cdrgateway replicas in round-robin fashion. In this example, cdrgateway replicas 1-0, 1-1, and 1-2 persist CDRs in schema 1 tables, and replicas 1-3, 1-4, and 1-5 persist CDRs in schema 2 tables.

Figure 10-1 Scaled Architecture with an Oracle Database



Figure 10-2 shows an example of scaled cdrgateway and cdrFormatter pods that have CDR storage in an Oracle NoSQL Database. This example contains:

  • One cdrgateway multi-replica deployment for all ECE schemas. All cdrgateway replicas have a single CDR Gateway service acting as a front end to the httpgateway.

  • One cdrFormatter single-replica deployment for each major key partition in the ECE schema. Each cdrFormatter reads persisted CDRs from its associated partition.

Figure 10-2 Scaled Architecture with a NoSQL Database



Configuring ECE to Support Prepaid Usage Overage

You can configure ECE cloud native to capture any overage amounts incurred by prepaid customers during an active session, which can help you prevent revenue leakage. If the network reports that the number of units used during a session is greater than a customer's available allowance, ECE cloud native charges the customer up to the available allowance. It can then create an overage record with information about the overage amount and send it to the ECE Overage topic. You can create a custom solution to reprocess the overage amount later.

For example, assume a customer has a prepaid balance of 100 minutes, but uses 130 minutes during a session. ECE cloud native would charge the customer for 100 minutes, create an overage record for the remaining 30 minutes of usage, and then write the overage record to the ECE Overage Kafka topic.

When prepaid usage overage support is disabled, ECE cloud native charges the customer for the full amount regardless of the funds in the customer's balance.

To configure ECE cloud native to support prepaid usage overage, do the following:

  • Ensure that ECE cloud native is connected to your Kafka Server

  • Enable ECE cloud native to support prepaid usage overage

  • Create an ECE Overage topic in your Kafka Server

To do so, set the following keys in your override-values.yaml file for oc-cn-ece-helm-chart and then run the helm upgrade command (a sketch follows the list):

  • charging.kafkaConfigurations.kafkaConfigurationList.*: Specify how to connect ECE to your Kafka Server.

  • charging.server.checkReservationOverImpact: Set this to true.

  • charging.kafkaConfigurations.kafkaConfigurationList.overageTopicName: Set this to the name of the Kafka topic where ECE will publish overage records.
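
For example (a sketch; the broker address and topic name are illustrative):

  charging:
     server:
        checkReservationOverImpact: "true"
     kafkaConfigurations:
        kafkaConfigurationList:
           - hostname: "kafka-broker:9092"     # illustrative Kafka host
             overageTopicName: "ECEOverage"    # illustrative topic name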

Note:

If your system does not contain Kafka topics, you can configure ECE to push overage details to a separate log file instead. To do so, in your override-values.yaml file for oc-cn-ece-helm-chart, set the charging.ecs.jvmOpts key to -Deceoveragelogdir=/home/charging/opt/ECE/oceceserver/logs.

Recording Failed ECE Usage Requests

ECE cloud native may occasionally fail to process usage requests. For example, a data usage request could fail because a customer has insufficient funds. You can configure ECE cloud native to publish details about failed usage requests, such as the user ID and request payload, to the ECE failure topic in your Kafka server. Later on, you can reprocess the usage requests or view the failure details for analysis and reporting.

To configure ECE cloud native to record failed ECE usage requests:

  • Ensure that ECE cloud native is connected to your Kafka Server

  • Enable the recording of failed ECE usage requests

  • Create an ECE failure topic in your Kafka Server

To do so, set the following keys in your override-values.yaml file for oc-cn-ece-helm-chart (a sketch follows the list):
  • charging.kafkaConfigurations.kafkaConfigurationList.*: Specify how to connect ECE to your Kafka Server.

  • charging.kafkaConfigurations.kafkaConfigurationList.persistFailedRequestsToKafkaTopic: Set this to true.

  • charging.kafkaConfigurations.kafkaConfigurationList.failureTopicName: Set this to the name of the topic that stores information about failed ECE usage requests.
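
For example (a sketch; the broker address and topic name are illustrative):

  charging:
     kafkaConfigurations:
        kafkaConfigurationList:
           - hostname: "kafka-broker:9092"                # illustrative Kafka host
             persistFailedRequestsToKafkaTopic: "true"
             failureTopicName: "ECEFailure"               # illustrative topic name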

Loading BRM Configuration XML Files

BRM is configured by using the pin_notify and payloadconfig_ece_sync.xml files. To ensure that the BRM pod can access these files for configuring the EAI Java Server (eai_js), they are exposed through the brmconfig PVC within the pricingupdater pod. When new metadata is synchronized with ECE and the payload configuration file is updated, a new file is created in that location, where it can be accessed and used to configure BRM.

For more information, see "Enabling Real-Time Synchronization of BRM and ECE Customer Data Updates" in ECE Implementing Charging.

Setting Up Notification Handling in ECE

You can configure ECE cloud native to send notifications to a client application or an external application during an online charging session. For example, ECE cloud native could send a notification when a customer has breached a credit threshold or when a customer needs to request reauthorization.

You can set up ECE cloud native to send notifications by using either Apache Kafka topics or Oracle WebLogic queues, as described in the following sections.

Creating an Apache Kafka Notification Topic

To create notification topics in Apache Kafka:

  1. Create these Kafka topics either in the Kafka entrypoint.sh script or after the Kafka pod is ready:

    • kafka.topicName: ECENotifications

    • kafka.suspenseTopicName: ECESuspenseQueue

  2. In the ZooKeeper runtime ConfigMap, set the ece-zookeeper-0.ece-zookeeper.ECENameSpace.svc.cluster.local key to the name of the Kafka Cluster.

  3. Set these Kafka and ZooKeeper-related environment variables appropriately:

    • KAFKA_PORT: Set this to the port number in which Apache Kafka is up and running.

    • KAFKA_HOST_NAME: Set this to the host name of the machine on which Apache Kafka is up and running. If your system contains multiple Kafka brokers, set this to a comma-separated list of host names.

    • REPLICATION_FACTOR: Set this to the number of topic replications to create.

    • PARTITIONS: Set this to the total number of Kafka partitions to create in your topics. The recommended number to create is calculated as follows:

      [(Max Diameter Gateways * Max Peers Per Gateway) + (1 for BRM Gateway) + Internal Notifications]
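
      For example, with 2 Diameter Gateway instances, 4 peers per gateway, and 1 partition assumed for internal notifications, this works out to (2 * 4) + 1 + 1 = 10 partitions.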

    • TOPIC_NAME: Set this to ECENotifications. This is the name of the Kafka topic where ECE will publish notifications.

    • SUSPENSE_TOPIC_NAME: Set this to ECESuspenseQueue. This is the name of the Kafka topic where failed notifications are published so that they can be retried later.

    • ZK_CLUSTER: Set this to the name of your ZooKeeper cluster. This should match the value you set in step 2.

    • ZK_CLIENT_PORT: Set this to the port number in which ZooKeeper listens for client connections.

    • ZK_SERVER_PORT: Set this to the port number of the ZooKeeper server.

  4. Ensure that the Kafka and ZooKeeper pods are in a READY state.

  5. Set these keys in your override-values.yaml file for oc-cn-ece-helm-chart (a sketch follows the list):

    • charging.server.kafkaEnabledForNotifications: Set this to true.

    • charging.server.kafkaConfigurations.name: Set this to the name of your ECE cluster.

    • charging.server.kafkaConfigurations.hostname: Set this to the host name of the machine on which Kafka is up and running.

    • charging.server.kafkaConfigurations.topicName: Set this to ECENotifications.

    • charging.server.kafkaConfigurations.suspenseTopicName: Set this to ECESuspenseQueue.
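
     For example (a sketch; the cluster name and broker address are illustrative):

     charging:
        server:
           kafkaEnabledForNotifications: "true"
           kafkaConfigurations:
              name: "ECECluster"                  # illustrative ECE cluster name
              hostname: "kafka-broker:9092"       # illustrative Kafka host
              topicName: "ECENotifications"
              suspenseTopicName: "ECESuspenseQueue"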

  6. Install the ECE cloud native service by entering this command from the helmcharts directory:

    helm install EceReleaseName oc-cn-ece-helm-chart --namespace BrmNameSpace --values OverrideValuesFile

The notification topics are created in Apache Kafka.

Creating an Oracle WebLogic Notification Queue

To create notification queues and topics in Oracle WebLogic:

  1. Ensure the following:

    • Oracle WebLogic is running in your Kubernetes cluster.

    • A separate WebLogic domain for the ECE Notification queues has been created.

      Note:

      Do not create your ECE notification queues in an existing WebLogic domain. For example, do not use the Billing Care, Business Operations Center, PDC, or Billing Care REST API domains.

    • The following third-party libraries are in the 3rdparty_jars directory inside external-pvc:

      • com.oracle.weblogic.beangen.general.api.jar

      • wlthint3client.jar

    • For SSL-enabled WebLogic in a disaster recovery environment, move a common JKS certificate file for all sites to the ece_ssl_keystore directory inside external-pvc.

  2. Create an override-values.yaml file for oc-cn-ece-helm-chart.

  3. Set the following keys in your override-values.yaml file (a sketch follows the list):

    • Set the secretEnv.JMSQUEUEPASSWORD key to the WebLogic user password.

    • If WebLogic SSL is enabled, set the secretEnv.NOTIFYEVENTKEYPASS key to the KeyStore password.

    • Set the job.jmsconfig.runjob key to true.

    • If the job needs to create the ECE JMS module and subdeployment, set the job.jmsconfig.preCreateJmsServerAndModule key to true.

    • Set the charging.server.weblogic.jmsmodule key to ECE.

    • Set the charging.server.weblogic.subdeployment key to ECEQueue.

    • Set the charging.server.kafkaEnabledForNotifications key to false.

    • In the JMSConfiguration section, set the HostName, Port, Protocol, ConnectionURL, and KeyStoreLocation keys to the appropriate values for your system.

    For more information about these keys, see Table 10-1.
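
    For example, these keys might be combined in override-values.yaml as follows (a sketch assembled from the list above; quoting and exact nesting may vary by chart version):

    secretEnv:
       JMSQUEUEPASSWORD: WebLogicPassword      # the WebLogic user password
    job:
       jmsconfig:
          runjob: true
          preCreateJmsServerAndModule: true
    charging:
       server:
          weblogic:
             jmsmodule: "ECE"
             subdeployment: "ECEQueue"
          kafkaEnabledForNotifications: "false"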

  4. Copy the SSL certificate file (client.jks) to the ece_ssl_keystore directory in the external PVC.

  5. Install the ECE cloud native service by entering this command from the helmcharts directory:

    helm install EceReleaseName oc-cn-ece-helm-chart --namespace BrmNameSpace --values OverrideValuesFile

The following are created in the ECE domain of your WebLogic Server:

  • A WebLogic notification topic named NotificationTopic.

  • A WebLogic notification queue named SuspenseQueue.

  • A WebLogic connection factory named NotificationFactory.

Next, configure the connection factory resource so your clients can connect to the ECE notification queues and topics in Oracle WebLogic.

To configure the connection factory resource:

  1. On the WebLogic Server in which the JMS ECE notification queue resides, sign in to WebLogic Server Administration Console.

  2. In the Domain Structure tree, expand Services, expand Messaging, and then click JMS Modules.

    The Summary of JMS Modules page appears.

  3. In the JMS Modules table, click on the name ECE.

    The Settings for ECE page appears.

  4. In the Summary of Resources table, click on the name NotificationFactory.

    The Settings for NotificationFactory page appears.

  5. Click the Configuration tab, and then click the Client tab.

  6. On the Client page, do the following:

    1. In Client ID Policy, select Unrestricted.

    2. In Subscription Sharing Policy, select Sharable.

    3. In Reconnect Policy, select None.

    4. Click Save.

  7. Click the Transactions tab.

  8. On the Transactions page, do the following:

    1. In Transaction Timeout, enter 2147483647, which is the maximum timeout value.

    2. Click Save.

For more information, see Oracle WebLogic Administration Console Online Help.

Configuring ECE for a Multischema BRM Environment

If your BRM database contains multiple schemas, you must configure ECE to connect to each schema.

To configure ECE for a BRM multischema database:

  1. Open your override-values.yaml file for the oc-cn-ece-helm-chart chart.

  2. Specify the password for accessing each schema in the BRM database. To do so, configure these keys for each schema:

    • secretEnv.BRMDATABASEPASSWORD.schema: Set this to the schema number. Enter 1 for the primary schema, 2 for the secondary schema, and so on.

    • secretEnv.BRMDATABASEPASSWORD.PASSWORD: Set this to the schema password.

    This shows example settings for two schemas:

    secretEnv:
       BRMDATABASEPASSWORD:
          - schema: 1
            PASSWORD: Password
          - schema: 2
            PASSWORD: Password
  3. Configure a customerUpdater pod for each schema. To do so, add a schemaNumber list entry for each schema. In each entry:

    • Set the SchemaNumber key to 1 for the primary schema, 2 for the secondary schema, and so on.

    • Set the amtAckQueueName key to the fully qualified name of the acknowledgment queue on which the pin_amt utility listens for Account Migration Manager (AMM)-related acknowledgment events. The value is in the format primarySchema.ECE_AMT_ACK_QUEUE, where primarySchema is the name of the primary schema.
    • Set the hostName and jdbcUrl keys to their corresponding values for each schema.

    This shows example settings for two schemas:

    customerUpdater:
       customerUpdaterList:
          - schemaNumber: "1"
            coherenceMemberName: "customerupdater1"
            replicas: 1
            jmxEnabled: true
            coherencePort: ""
            jvmGCOpts: ""
            jvmJMXOpts: ""
            jvmCoherenceOpts: ""
            jvmOpts: ""
            jmxport: ""
            restartCount: "0"
            oracleQueueConnectionConfiguration:
               name: "customerupdater1"
               gatewayName: "customerupdater1"
               hostName: ""
               port: "1521"
               sid: "pindb"
               userName: "pin"
               jdbcUrl: ""
               queueName: "IFW_SYNC_QUEUE"
               suspenseQueueName: "ECE_SUSPENSE_QUEUE"
               ackQueueName: "ECE_ACK_QUEUE"
               amtAckQueueName: "pin0101.ECE_AMT_ACK_QUEUE"
               batchSize: "1"
               dbTimeout: "900"
               retryCount: "10"
               retryInterval: "60"
               walletLocation: "/home/charging/wallet/ecewallet/"
     
          - schemaNumber: "2"
            coherenceMemberName: "customerupdater2"
            replicas: 1
            jmxEnabled: true
            coherencePort: ""
            jvmGCOpts: ""
            jvmJMXOpts: ""
            jvmCoherenceOpts: ""
            jvmOpts: ""
            jmxport: ""
            oracleQueueConnectionConfiguration:
               name: "customerupdater2"
               gatewayName: "customerupdater2"
               hostName: ""
               port: "1521"
               sid: "pindb"
               userName: "pin"
               jdbcUrl: ""
               queueName: "IFW_SYNC_QUEUE"
               suspenseQueueName: "ECE_SUSPENSE_QUEUE"
               ackQueueName: "ECE_ACK_QUEUE"
               amtAckQueueName: "pin0101.ECE_AMT_ACK_QUEUE"
               batchSize: "1"
               dbTimeout: "900"
               retryCount: "10"
               retryInterval: "60"
               walletLocation: "/home/charging/wallet/ecewallet/"
  4. Configure a ratedEventFormatter pod for processing rated events belonging to each BRM schema. To do so, add a schemaNumber list entry for each schema. In each entry, set the schemaNumber and partition keys to 1 for the primary schema, 2 for the secondary schema, and so on.

    This shows example settings for two schemas:

    ratedEventFormatter:
       ratedEventFormatterList:
          - schemaNumber: "1"
            replicas: 1
            coherenceMemberName: "ratedeventformatter1"
            jmxEnabled: true
            coherencePort:
            jvmGCOpts: ""
            jvmJMXOpts: ""
            jvmCoherenceOpts: ""
            jvmOpts: ""
            jmxport: ""
            restartCount: "0"
            ratedEventFormatterConfiguration:
               name: "ratedeventformatter1"
               primaryInstanceName: "ratedeventformatter1"
               partition: "1"
               noSQLConnectionName: "noSQLConnection"
               connectionName: "oraclePersistence1"
               threadPoolSize: "6"
               retainDuration: "0"
               ripeDuration: "600"
               checkPointInterval: "6"
               maxPersistenceCatchupTime: "60"
               siteName: ""
               pluginPath: "ece-ratedeventformatter.jar"
               pluginType: "oracle.communication.brm.charging.ratedevent.formatterplugin.internal.BrmCdrPluginDirect"
               pluginName: "brmCdrPlugin1"
               noSQLBatchSize: "25"
     
          - schemaNumber: "2"
            replicas: 1
            coherenceMemberName: "ratedeventformatter2"
            jmxEnabled: true
            coherencePort:
            jvmGCOpts: ""
            jvmJMXOpts: ""
            jvmCoherenceOpts: ""
            jvmOpts: ""
            jmxport: ""
            ratedEventFormatterConfiguration:
               name: "ratedeventformatter2"
               primaryInstanceName: "ratedeventformatter2"
               partition: "2"
               noSQLConnectionName: "noSQLConnection"
               connectionName: "oraclePersistence1"
               threadPoolSize: "6"
               retainDuration: "0"
               ripeDuration: "600"
               checkPointInterval: "6"
               maxPersistenceCatchupTime: "60"
               siteName: ""
               pluginPath: "ece-ratedeventformatter.jar"
               pluginType: "oracle.communication.brm.charging.ratedevent.formatterplugin.internal.BrmCdrPluginDirect"
               pluginName: "brmCdrPlugin1"
               noSQLBatchSize: "25"
  5. Save and close your override-values.yaml file for oc-cn-ece-helm-chart.

  6. In the oc-cn-ece-helm-chart/templates/charging-settings.yaml ConfigMap, add poidIdConfiguration in itemAssignmentConfig for each schema.

    This shows example settings for three schemas:

     <itemAssignmentConfig config-class="oracle.communication.brm.charging.appconfiguration.beans.item.ItemAssignmentConfig" itemAssignmentEnabled="true" delayToleranceIntervalInDays="0" poidPersistenceSafeCount="12000">
        <schemaConfigurationGroup config-class="java.util.ArrayList">
           <poidIdConfiguration config-class="oracle.communication.brm.charging.appconfiguration.beans.item.PoidIdConfiguration" schemaName="1" poidQuantity="2000000">
           </poidIdConfiguration>
           <poidIdConfiguration config-class="oracle.communication.brm.charging.appconfiguration.beans.item.PoidIdConfiguration" schemaName="2" poidQuantity="2000000">
           </poidIdConfiguration>
           <poidIdConfiguration config-class="oracle.communication.brm.charging.appconfiguration.beans.item.PoidIdConfiguration" schemaName="3" poidQuantity="2000000">
           </poidIdConfiguration>
        </schemaConfigurationGroup>
     </itemAssignmentConfig>

After you deploy oc-cn-ece-helm-chart in "Deploying BRM Cloud Native Services", the ECE pods will be connected to your BRM database schemas.