6 Managing Your Cloud Native Deployment
This chapter describes the tasks you perform to manage your AIA cloud native deployment.
Scaling the AIA Application Cluster
Restarting the AIA Cloud Native Instance
Deleting the AIA Cloud Native Instance
To delete the AIA cloud native instance:
1. Get the details of the AIA and SOA PV paths:
$ kubectl describe pv soainfra-domain-pv
$ kubectl describe pv aia-comms-shared-pv
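The storage location appears in the Source section of the command output. A host_path PV, for example, might show output like this (the path shown is illustrative):
Source:
    Type:  HostPath (bare host directory volume)
    Path:  /export/shared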
2. Uninstall the helm charts:
$ helm uninstall aia-comms-pv-pvc -n namespace
$ helm uninstall aia-comms-deploy-aiapip -n namespace
$ helm uninstall aia-comms-certs-osm -n namespace
$ helm uninstall aia-comms-certs-siebel -n namespace
3. Delete the Kubernetes resources that AIA cloud native creates as part of AIA PV creation:
- Get the details of the resources:
$ kubectl get serviceAccount -n namespace | grep cluster-kubectl
$ kubectl get clusterrole | grep access-pod-cluster-role
$ kubectl get rolebinding -n namespace | grep access-pod-role-binding
- Delete the resources:
$ kubectl delete serviceAccount service-account-name -n namespace
$ kubectl delete clusterrole cluster-role-name
$ kubectl delete rolebinding role-binding-name -n namespace
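For example, if the domain name is soainfra and the namespace is soans (both names are illustrative), the commands would be:
$ kubectl delete serviceAccount soainfra-cluster-kubectl -n soans
$ kubectl delete clusterrole soainfra-access-pod-cluster-role
$ kubectl delete rolebinding soainfra-access-pod-role-binding -n soans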
4. Uninstall the domain and drop the RCU schema. For instructions, see: https://oracle.github.io/fmw-kubernetes/23.1.2/soa-domains/cleanup-domain-setup/.
5. Clean up the persistent volume data. To remove the AIA PV that is generated during AIA deployment, delete the contents of the storage attached to the domain home persistent volume manually, using appropriate privileges.
For example, to delete the persistent volume of type host_path, run:
$ rm -rf /export/shared/*
Monitoring the AIA Cloud Native Domain and Publishing Logs
You can monitor your AIA cloud native deployment using Grafana and OpenSearch, and publish logs to Elasticsearch and Kibana.
You can deploy WebLogic Monitoring Exporter (WME) in one of the following ways:
- Deploying the WME WAR file through the WebLogic Administration Console. For details, see https://oracle.github.io/fmw-kubernetes/23.1.2/soa-domains/adminguide/monitoring-soa-domains/#set-up-monitoring.
- Deploying WME as a sidecar. For details, see https://github.com/oracle/weblogic-monitoring-exporter#use-the-monitoring-exporter-with-weblogic-kubernetes-operator
Note:
KSS-based keystores support deploying WME as a sidecar only.
Refer to the Oracle Fusion Middleware on Kubernetes documentation for information about using Grafana for monitoring your deployment at: https://oracle.github.io/fmw-kubernetes/23.1.2/soa-domains/adminguide/monitoring-soa-domains/#set-up-monitoring.
For information about using Elasticsearch and Kibana, see the WKO documentation at: https://oracle.github.io/weblogic-kubernetes-operator/4.0/samples/elastic-stack/.
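After WME is deployed, you can confirm that metrics are being exposed by querying the exporter endpoint. For the WAR-based deployment, the endpoint is typically /wls-exporter/metrics on each server; the host and port below are placeholders:
$ curl http://managed-server-host:8001/wls-exporter/metrics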
Upgrading Your AIA Cloud Native Deployment
To upgrade your AIA cloud native deployment, perform the procedures described in the Oracle Fusion Middleware on Kubernetes documentation at: https://oracle.github.io/fmw-kubernetes/23.1.2/soa-domains/patch_and_upgrade/.
Troubleshooting Issues
This section describes how to troubleshoot common issues with your AIA cloud native deployment.
Redeploying the AIA PV PVC Helm Chart
You can redeploy the aia-comms-pv-pvc helm chart if the aia-comms-deploy-aiapip helm chart is not installed.
The aia-comms-pv-pvc helm chart creates the following resources:
- Kubernetes PV and PVC
- Kubernetes job aia-comms-create-home-job to populate AIA PV
- Service Account with name domain_name-cluster-kubectl
- ClusterRole of name domain_name-access-pod-cluster-role
- RoleBinding of name domain_name-access-pod-role-binding
The pods of the Kubernetes job aia-comms-create-home-job remain in the Pending state until the required PV and PVC are created.
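To check whether the required PVC has been created and bound, you can run (the namespace is a placeholder):
$ kubectl get pvc -n namespace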
To redeploy the chart:
1. Find the pod name for a given job:
$ kubectl get pods -n namespace | grep aia-comms-create-home-job
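The job pod name includes a suffix generated by Kubernetes, so it differs in each deployment. Illustrative output:
aia-comms-create-home-job-8k2vp   0/1   Pending   0   5m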
2. Get the logs of the pod:
$ kubectl logs job_pod_name -n namespace
3. Identify and resolve issues, if any.
4. Clean up the AIA persistent volume data, if there is any. To get the details of the storage path, run:
$ kubectl describe pv AIA_PV_name
To remove the AIA PV that is generated during AIA deployment, using appropriate privileges, delete the contents of the storage attached to the domain home persistent volume manually. For example, to delete the persistent volume of type host_path, run:
$ rm -rf /export/shared/*
5. Uninstall the helm chart:
$ helm uninstall aia-comms-pv-pvc -n namespace
6. Reinstall the helm chart with updated parameters, if any.
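For example, assuming the chart is installed from a local chart directory and the updated parameters are supplied in a values file (the directory and file names are illustrative):
$ helm install aia-comms-pv-pvc ./aia-comms-pv-pvc -n namespace --values updated-values.yaml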
Redeploying the AIA PIPs Helm Chart
You can redeploy the aia-comms-deploy-aiapip helm chart if the following are already in place:
- aia-comms-pv-pvc helm chart
- AIA PIP credentials
- BRM JCA Adapter deployment
To redeploy the chart:
1. Find the pod name for a given job:
$ kubectl get pods -n namespace | grep aia-comms-deploy-aiapip-job
2. Get the logs of the pod:
$ kubectl logs job_pod_name -n namespace
3. Identify and resolve issues, if any:
- The logs might contain the "ERROR - Deployment found." error message. This error occurs when the aia-comms-deploy-aiapip helm chart is reinstalled over existing PV data.
- The job pod might throw an error before reaching the "Configuring AIAPIPs deployment..." stage. If this happens, do not delete the Oracle SOA Suite domain or redeploy the domain. Resolve the error and then perform steps 4 and 7 only.
- If the "Configuring AIAPIPs deployment..." or "Deploying AIAPIPs..." stage has already been reached, the RCU schemas might have been updated by the time the error occurred. Overwriting the AIA PV and SOA PV data from a backup is therefore not recommended, since it may not solve the issue. Continue with steps 4 to 7 to resolve the issue.
4. Uninstall the helm chart:
$ helm uninstall aia-comms-deploy-aiapip -n namespace
5. Delete the Oracle SOA Suite domain. See "Deleting the AIA Cloud Native Instance". In this case, you do not need to delete the namespaces or the helm charts for weblogic-operator and the load balancer. You also do not need to delete the namespace for the Oracle SOA Suite domain.
6. Redeploy the domain. See "Deploying AIA Cloud Native".
7. Install the aia-comms-deploy-aiapip helm chart.
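For example, assuming the chart directory name matches the release name (illustrative):
$ helm install aia-comms-deploy-aiapip ./aia-comms-deploy-aiapip -n namespace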
Oracle SOA Suite domain fails with a "folder already exists" error
This error occurs when a domain folder already exists before you deploy the Oracle SOA Suite domain. Generally, it occurs during redeployment of the domain, when the domain PV was not cleaned up properly.
To resolve this issue, clean up the Oracle SOA Suite PV properly and redeploy.
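For example, to locate the storage path of the SOA domain PV and then clear its contents (the PV name follows the earlier examples; the path comes from the describe output and is illustrative):
$ kubectl describe pv soainfra-domain-pv
$ rm -rf /export/shared/*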
Clean a previous deployment
To clean a previous deployment, perform the steps described in "Deleting the AIA Cloud Native Instance".
AIA Helm chart jobs are in the "imagePullBackOff" state
This is a Kubernetes error that occurs when the Docker image is not present on the worker node. The current version of the AIA cloud native toolkit does not include imagePullSecret support in its template yaml files. For this error, you do not need to follow the redeployment procedure for the aia-comms-deploy-aiapip helm chart. Once the image is available on a given worker node, pod execution continues. Table 6-1 describes the options for resolving this error.
Table 6-1 Redeployment Options
| Option | Redeployment Steps |
|---|---|
| Manually pull the image on the worker node using docker pull image_name. | Once the image is available on a given worker node, pod execution continues on its own. You do not need to uninstall the helm chart. |
| Add imagePullSecret in the template yaml files of the respective AIA helm chart. | Uninstall the helm chart and install it again after updating the yaml file. |
| Restrict the Kubernetes cluster to deploy only on worker nodes where the image is present, until the installation of AIA cloud native is complete. | Uninstall the helm chart and install it again after the Kubernetes configuration is done. |
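For the second option, a pull secret is typically created in the namespace first and then referenced from the pod spec in the chart's template yaml. A minimal sketch, assuming a secret named regcred and placeholder registry details:
$ kubectl create secret docker-registry regcred --docker-server=registry.example.com --docker-username=user --docker-password=password -n namespace
The pod spec in the template yaml then references the secret:
spec:
  imagePullSecrets:
    - name: regcred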