Managing Pods and PVCs in BRM Cloud Native
Learn how to manage the pods and PersistentVolumeClaims (PVCs) in your Oracle Communications Billing and Revenue Management (BRM) cloud native environment.
Topics in this document:
- Setting up Autoscaling of BRM Pods
- Automatically Rolling Deployments by Using Annotations
- Restarting BRM Pods
- Setting Minimum and Maximum CPU and Memory Values
- Using Static Volumes
Note:
This documentation uses the override-values.yaml file name for ease of use, but you can name the file whatever you want.
Setting up Autoscaling of BRM Pods
You can use the Kubernetes Horizontal Pod Autoscaler to automatically scale up or down the number of BRM pod replicas in your deployment based on a pod's CPU or memory utilization.
For more information about:
- Kubernetes Horizontal Pod Autoscaler, see "Horizontal Pod Autoscaling" in the Kubernetes documentation
- Kubernetes requests and limits, see "Resource Management for Pods and Containers" in the Kubernetes documentation
In BRM cloud native deployments, the Horizontal Pod Autoscaler monitors and scales these BRM pods:
- batch-controller
- brm-rest-services-manager
- cm
- dm-eai
- dm-kafka
- dm-oracle
- realtime-pipe
- rel-daemon
- rated-event-manager
To set up autoscaling for BRM pods:
- Open your override-values.yaml file for oc-cn-helm-chart.
- Enable the Horizontal Pod Autoscaler by setting the ocbrm.isHPAEnabled key to true.
- Specify how often, in seconds, the Horizontal Pod Autoscaler checks a BRM pod's memory usage and scales the number of replicas. To do so, set the ocbrm.refreshInterval key to the number of seconds between each check. For example, set it to 60 for a one-minute interval.
- For each BRM pod, set these keys to the appropriate values for your system (see the sample configuration after this procedure):
- ocbrm.BRMPod.resources.limits.cpu: Set this to the maximum number of CPU cores the pod can utilize.
- ocbrm.BRMPod.resources.requests.cpu: Set this to the minimum number of CPU cores that must be available on a Kubernetes node to deploy the pod.
The pod remains in a Pending state if the minimum CPU amount is unavailable.
Note:
The node must have enough CPUs available for the CPU requests of all containers of the pod. For example, the cm pod would need to have enough CPUs for the cm container, eai_js container, and perflib container (if enabled).
- ocbrm.BRMPod.resources.limits.memory: Set this to the maximum amount of memory the pod can utilize.
- ocbrm.BRMPod.resources.requests.memory: Set this to the minimum amount of memory that must be available on a Kubernetes node to deploy the pod.
The pod remains in a Pending state if the minimum amount of memory is unavailable.
- ocbrm.BRMPod.hpaValues.minReplica: Set this to the minimum number of pod replicas that can be deployed in a cluster.
If a pod's utilization metrics drop below targetCpu or targetMemory, the Horizontal Pod Autoscaler scales down the number of pod replicas to this minimum count. No changes are made if the number of pod replicas is already at the minimum.
- ocbrm.BRMPod.hpaValues.maxReplica: Set this to the maximum number of pod replicas to deploy when a scale-up is triggered.
If a pod's utilization metrics rise above targetCpu or targetMemory, the Horizontal Pod Autoscaler scales up the number of pods to this maximum count.
- ocbrm.BRMPod.hpaValues.targetCpu: Set this to the percentage of the requested CPU (requests.cpu) at which to scale a pod up or down.
If a pod's CPU utilization exceeds targetCpu, the Horizontal Pod Autoscaler increases the pod replica count up to maxReplica. If a pod's CPU utilization drops below targetCpu, it decreases the pod replica count down to minReplica.
- ocbrm.BRMPod.hpaValues.targetMemory: Set this to the percentage of the requested memory (requests.memory) at which to scale a pod up or down.
If a pod's memory utilization exceeds targetMemory, the Horizontal Pod Autoscaler increases the pod replica count up to maxReplica. If memory utilization drops below targetMemory, it decreases the pod replica count down to minReplica.
- Save and close your override-values.yaml file.
- Run the helm upgrade command to update your Helm release:
helm upgrade BrmReleaseName oc-cn-helm-chart --values OverrideValuesFile --namespace BrmNameSpace
where:
- BrmReleaseName is the release name for oc-cn-helm-chart and is used to track this installation instance.
- OverrideValuesFile is the file name and path to your override-values.yaml file.
- BrmNameSpace is the namespace in which to create BRM Kubernetes objects for the BRM Helm chart.
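For reference, the following is a minimal sketch of these autoscaling keys in an override-values.yaml file. The pod name (cm) and all values shown are illustrative assumptions, not recommendations; substitute the pods and values that are appropriate for your system:
ocbrm:
  isHPAEnabled: true
  refreshInterval: 60
  cm:
    resources:
      requests:
        cpu: 500m
        memory: 1000Mi
      limits:
        cpu: 1000m
        memory: 2000Mi
    hpaValues:
      minReplica: 1
      maxReplica: 4
      targetCpu: 75
      targetMemory: 80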
Automatically Rolling Deployments by Using Annotations
Whenever a ConfigMap entry or a Secret file is modified, you must restart its associated pod. This updates the container's configuration, but the application is notified about the configuration updates only if the pod's deployment specification has changed. Thus, a container could use the new configuration while the application keeps running with its old configuration.
You can configure a pod to automatically notify an application when a container's configuration has changed. To do so, configure a pod to automatically update its deployment specification whenever a ConfigMap or Secret file changes by using the sha256sum function. Add an annotations section similar to this one to the pod's deployment specification:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
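Because the same issue applies when a Secret file changes, you can add a comparable annotation for each Secret template in your chart. The following line is a sketch that assumes your chart has a template named secret.yaml; adjust the template name to match your chart:
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}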
For more information, see "Automatically Roll Deployments" in Helm Chart Development Tips and Tricks.
Restarting BRM Pods
You may occasionally need to restart a BRM pod, such as when an error occurs that you cannot fix or a pod is stuck in a terminating status. You restart a BRM pod by deleting it with kubectl.
To restart a BRM pod:
- Retrieve the names of the BRM pods by entering this command:
kubectl get pods -n NameSpace
where NameSpace is the namespace in which Kubernetes objects for the BRM Helm chart reside.
The following provides sample output:
NAME                         READY   STATUS    RESTARTS   AGE
cm-6f79d95887-lp7qs          1/1     Running   0          6d17h
dm-oracle-5496bf8d94-vjgn7   1/1     Running   0          6d17h
dm-kafka-d5ccf6dbd-l968b     1/1     Running   0          6d17h
- Delete a pod by entering this command:
kubectl delete pod PodName -n NameSpace
where PodName is the name of the pod. For example, to delete and restart the cm pod, you would enter:
kubectl delete pod cm-6f79d95887-lp7qs -n NameSpace
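Because the BRM pods are managed by deployments (note the replica-set suffixes in the sample output above), Kubernetes automatically creates a replacement pod after the deletion. To watch the replacement pod start, you can use the standard kubectl watch option:
kubectl get pods -n NameSpace --watch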
Setting Minimum and Maximum CPU and Memory Values
You can specify the minimum and maximum CPU and memory resources that BRM cloud native containers can use. Setting minimum values ensures that containers can deploy successfully, while setting maximum values prevents containers from consuming excessive resources, which could lead to system crashes.
Note:
For a pod to be scheduled on a node, the node must have enough CPUs available for the CPU requests of all of the pod's containers. For example, in the case of the cm pod, the node would need enough CPUs for the cm container, eai_js container, and perflib container (if enabled).
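To check how much CPU and memory a node has available for scheduling, you can inspect its allocatable resources with a standard kubectl command, where NodeName is a placeholder for one of your node names:
kubectl describe node NodeName
The Allocatable section of the output shows the CPU and memory that the node can offer to pods.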
When tuning container-level resources for Java-based containers, you should also tune the JVM heap memory parameters. You make this adjustment through component-level keys.
To set the minimum and maximum amounts of CPU and memory for containers, include the following keys in your override-values.yaml file for oc-cn-helm-chart, oc-cn-init-db-helm-chart, oc-cn-op-job-helm-chart, or oc-cn-ece-helm-chart:
componentName:
  resources:
    requests:
      cpu: value
      memory: value
    limits:
      cpu: value
      memory: value
where:
- componentName: Specifies the component name in the values.yaml file, such as cm, rel_daemon, and dm_vertex.
- limits.cpu: Specifies the maximum number of CPU cores the container can utilize, such as 1000m.
- limits.memory: Specifies the maximum amount of memory a container can utilize, such as 2000Mi.
- requests.cpu: Specifies the minimum number of CPU cores reserved on a Kubernetes node to deploy a container, such as 50m.
- requests.memory: Specifies the minimum amount of memory reserved on a Kubernetes node to deploy a container, such as 256Mi.
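For example, the following sketch sets requests and limits for the cm component, mirroring the key structure shown above. The values are illustrative assumptions, not tuning recommendations:
cm:
  resources:
    requests:
      cpu: 50m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 2000Mi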
You must perform a Helm install or Helm upgrade after making any changes.
For more information about requests and limits, see "Resource Management for Pods and Containers" in the Kubernetes documentation.
Using Static Volumes
By default, the BRM cloud native pods use dynamic volume provisioning. However, you can modify one or more pods to use static volumes instead to meet your business requirements. To do so, you add createOption keys to the override-values.yaml file for each pod that you want to use static volumes and then redeploy your Helm charts.
To change a pod back to using dynamic volumes, remove the createOption keys from your override-values.yaml file and then redeploy your Helm charts.
To change one or more pods to use static volumes, do the following:
- Open the override-values.yaml file for the appropriate Helm chart: oc-cn-op-job-helm-chart, oc-cn-helm-chart, or oc-cn-ece-helm-chart.
- Under the appropriate pod's volume section, update the createOption keys.
For example, to use a hostPath-based volume, you would update the createOption key as shown below:
volume:
  createOption:
    hostPath:
      path: pathOnNode
      type: Directory
where pathOnNode is the location on the host system of the external PV.
Note:
The batchpipe, rated-event-manager, and rel_daemon pods require a separate volume for each schema in a multischema system. In this case, set the path to pathOnNode/SCHEMA. When you perform a helm install or upgrade, the Helm chart replaces SCHEMA with the schema number: 1 for schema 1, 2 for schema 2, and so on.
- Save and close your override-values.yaml file.
- Redeploy your Helm charts. For more information, see "Deploying BRM Cloud Native Services" in BRM Cloud Native Deployment Guide.
The following shows sample override-values.yaml keys for changing the brm-sdk, batchpipe, and batch-controller pods to use a static hostPath-based volume:
ocbrm:
  brm_sdk:
    volume:
      storage: 50Mi
      createOption:
        hostPath:
          path: /sample/vol
          type: Directory
  batchpipe:
    volume:
      output:
        storage: 100Mi
        createOption:
          hostPath:
            path: /sample/vol/out/SCHEMA
            type: Directory
      reject:
        storage: 100Mi
        createOption:
          hostPath:
            path: /sample/vol/reject/SCHEMA
            type: Directory
  batch-controller:
    volume:
      input:
        storage: 50Mi
        createOption:
          hostPath:
            path: /sample/vol/input
            type: Directory
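If your environment requires you to create the backing PersistentVolume for a static hostPath volume yourself, you can use a standard Kubernetes manifest. The following is a sketch only; the name, capacity, access mode, and path are illustrative assumptions and must match the storage you configured in the createOption keys:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: brm-sdk-pv
spec:
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /sample/vol
    type: Directory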
ocbrm: brm_sdk: volume: storage: 50Mi createOption: hostPath: path: /sample/vol type: Directory batchpipe: volume: output: storage: 100mi createOption: hostPath: path: /sample/vol/out/SCHEMA type: Directory reject: storage: 100mi createOption: hostPath: path: /sample/vol/reject/SCHEMA type: Directory batch-controller: volume: input: storage: 50mi createOption: hostPath: path: /sample/vol/input type: Directory