23 Common Configuration and Management Tasks for an Enterprise Deployment
The configuration tasks include several that are common to all enterprise deployments, such as verifying sizing information and performing backups and recoveries. Patching an enterprise deployment and cross-component wiring are the other common tasks.
This chapter includes the following topics:
- Configuration and Management Tasks for All Enterprise Deployments
  Complete these common configuration tasks that apply to any Oracle Fusion Middleware enterprise deployment. These tasks include checking the sizing requirements for the deployment, using the JDBC persistence store for web services, and taking backups of the deployment.
- Starting and Stopping Components
  You can start and stop the various components (such as WebLogic components, an entire domain, a WebLogic cluster, and so on) after you have deployed them.
- Patching an Enterprise Deployment
  You should update the Oracle Identity Governance Kubernetes cluster with bundle patches whenever the patches are released.
- Performing Backup and Restore Operations
  Ensure that you keep backups outside of the Kubernetes cluster so that they are available even if the cluster has issues.
- Performing Maintenance on a Kubernetes Worker Node
  If you have to shut down a Kubernetes worker node for maintenance or patching, or just to refresh it, you should first gracefully move any running Kubernetes services on that worker node.
- Adjusting the Server Pods Liveness Probe
  By default, the liveness probe is configured to check liveness every 45 seconds. This configuration may cause requests to be routed to back-end pods that are no longer available during outage scenarios.
- Considerations for Cross-Component Wiring
  Cross-Component Wiring (CCW) enables the FMW components to publish and bind to some of the services available in a WLS domain, by using specific APIs.
Configuration and Management Tasks for All Enterprise Deployments
Complete these common configuration tasks that apply to any Oracle Fusion Middleware enterprise deployment. These tasks include checking the sizing requirements for the deployment, using the JDBC persistence store for web services, and taking backups of the deployment.
Core DNS Allocation
Note:
This step is applicable to any Kubernetes system using coredns.
Place two coredns pods on the control plane (if possible) and another two on the worker nodes. The coredns footprint is low, as the following top output shows:
VIRT    RES    SHR   S  %CPU  %MEM  TIME+     COMMAND
146268  41684  29088 S  0.3   0.1   25:44.04  coredns
The memory required with the default settings is:
MB required (default settings) = (Pods + Services) / 1000 + 54
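For example, a quick check of the formula for a hypothetical cluster with 700 pods and 300 services (values assumed purely for illustration):
# Hypothetical sizing: 700 pods and 300 services
echo $(( (700 + 300) / 1000 + 54 ))   # prints 55, the MB required with default settings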
Label the control plane and worker nodes that will host the coredns pods:
$ kubectl label nodes K8ControlPlane1 area=dnsarea
$ kubectl label nodes K8ControlPlane2 area=dnsarea
$ kubectl label nodes K8ControlPlane3 area=dnsarea
$ kubectl label nodes k8worker1 area=dnsarea
$ kubectl label nodes k8worker2 area=dnsarea
$ kubectl label nodes k8worker3 area=dnsarea
Modify the coredns deployment to use topology spread constraints.
Note:
Topology spread constraints are beta starting in Kubernetes v1.18. First, enable the feature gate in kube-apiserver and in kube-scheduler. Then, modify the coredns deployment for an appropriate spread of pods across the worker and the control plane nodes.
The following is a sample of the coredns deployment yaml file:
$ kubectl get deployment coredns -n kube-system -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "7"
creationTimestamp: "2021-01-15T13:15:05Z"
generation: 8
labels:
area: dnsarea
k8s-app: kube-dns
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:k8s-app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:revisionHistoryLimit: {}
f:selector:
f:matchLabels:
.: {}
f:k8s-app: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:k8s-app: {}
f:spec:
f:containers:
k:{"name":"coredns"}:
.: {}
f:args: {}
f:image: {}
f:imagePullPolicy: {}
f:livenessProbe:
.: {}
f:failureThreshold: {}
f:httpGet:
.: {}
f:path: {}
f:port: {}
f:scheme: {}
f:initialDelaySeconds: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":53,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:name: {}
f:protocol: {}
k:{"containerPort":53,"protocol":"UDP"}:
.: {}
f:containerPort: {}
f:name: {}
f:protocol: {}
k:{"containerPort":9153,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:name: {}
f:protocol: {}
f:readinessProbe:
.: {}
f:failureThreshold: {}
f:httpGet:
.: {}
f:path: {}
f:port: {}
f:scheme: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:resources:
.: {}
f:limits:
.: {}
f:memory: {}
f:requests:
.: {}
f:cpu: {}
f:memory: {}
f:securityContext:
.: {}
f:allowPrivilegeEscalation: {}
f:capabilities:
.: {}
f:add: {}
f:drop: {}
f:readOnlyRootFilesystem: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:volumeMounts:
.: {}
k:{"mountPath":"/etc/coredns"}:
.: {}
f:mountPath: {}
f:name: {}
f:readOnly: {}
f:dnsPolicy: {}
f:nodeSelector:
.: {}
f:kubernetes.io/os: {}
f:priorityClassName: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:serviceAccount: {}
f:serviceAccountName: {}
f:terminationGracePeriodSeconds: {}
f:tolerations: {}
f:volumes:
.: {}
k:{"name":"config-volume"}:
.: {}
f:configMap:
.: {}
f:defaultMode: {}
f:items: {}
f:name: {}
f:name: {}
manager: kubeadm
operation: Update
time: "2021-01-15T13:15:05Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:area: {}
f:spec:
f:replicas: {}
f:template:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/restartedAt: {}
f:labels:
f:foo: {}
f:spec:
f:topologySpreadConstraints:
.: {}
k:{"topologyKey":"area","whenUnsatisfiable":"DoNotSchedule"}:
.: {}
f:labelSelector:
.: {}
f:matchLabels:
.: {}
f:foo: {}
f:maxSkew: {}
f:topologyKey: {}
f:whenUnsatisfiable: {}
manager: kubectl
operation: Update
time: "2021-01-28T16:00:21Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
time: "2021-01-28T16:00:39Z"
name: coredns
namespace: kube-system
resourceVersion: "2520507"
selfLink: /apis/apps/v1/namespaces/kube-system/deployments/coredns
uid: 79d24e61-98f4-434f-b682-132625b04c49
spec:
progressDeadlineSeconds: 600
replicas: 4
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kube-dns
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
kubectl.kubernetes.io/restartedAt: "2021-01-28T15:29:48Z"
creationTimestamp: null
labels:
foo: bar
k8s-app: kube-dns
spec:
containers:
- args:
- -conf
- /etc/coredns/Corefile
image: k8s.gcr.io/coredns:1.6.7
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 8181
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/coredns
name: config-volume
readOnly: true
dnsPolicy: Default
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: coredns
serviceAccountName: coredns
terminationGracePeriodSeconds: 30
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
topologySpreadConstraints:
- labelSelector:
matchLabels:
foo: bar
maxSkew: 1
topologyKey: area
whenUnsatisfiable: DoNotSchedule
volumes:
- configMap:
defaultMode: 420
items:
- key: Corefile
path: Corefile
name: coredns
name: config-volume
status:
availableReplicas: 4
conditions:
- lastTransitionTime: "2021-01-21T19:08:12Z"
lastUpdateTime: "2021-01-21T19:08:12Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2021-01-28T15:29:48Z"
lastUpdateTime: "2021-01-28T16:00:39Z"
message: ReplicaSet "coredns-84b49c57fd" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 8
readyReplicas: 4
replicas: 4
updatedReplicas: 4
The coredns topology spread configuration details are:
  labels:
    foo: bar
    k8s-app: kube-dns
  topologySpreadConstraints:
  - labelSelector:
      matchLabels:
        foo: bar
    maxSkew: 1
    topologyKey: area
    whenUnsatisfiable: DoNotSchedule
This configuration guarantees an even distribution across the master and worker nodes. Therefore, if the control plane is restored, the worker pods continue without issues, and vice versa.
The following is an example of the resulting coredns distribution:
kubectl get pods -A -o wide | grep coredns
kube-system coredns-84b49c57fd-4fz4g 1/1 Running 0 166m 10.244.1.20 K8ControlPlane2 <none> <none>
kube-system coredns-84b49c57fd-5mrkw 1/1 Running 0 165m 10.244.4.76 K8Worker2 <none> <none>
kube-system coredns-84b49c57fd-5zm88 1/1 Running 0 165m 10.244.2.17 K8ControlPlane3 <none> <none>
kube-system coredns-84b49c57fd-nqlwb 1/1 Running 0 166m 10.244.4.75 K8Worker2 <none> <none>
Verifying Appropriate Sizing and Configuration for the WLSSchemaDataSource
WLSSchemaDataSource is the common data source that is reserved for use by the FMW components for JMS JDBC stores, JTA JDBC stores, and leasing services. WLSSchemaDataSource is used to avoid contention in critical WLS infrastructure services and to guard against deadlocks.
To reduce the WLSSchemaDataSource connection usage, you can change the JMS JDBC and TLOG JDBC stores connection caching policy from Default to Minimal by using the respective connection caching policy settings. When there is a need to reduce connections in the back-end database system, Oracle recommends that you set the caching policy to Minimal. Avoid using the caching policy None because it causes a potential degradation in performance. For detailed tuning advice about connections that are used by JDBC stores, see Configuring a JDBC Store Connection Caching Policy in Administering the WebLogic Persistent Store.
The default WLSSchemaDataSource connection pool size is 75 (the size is doubled in the case of a GridLink data source). You can tune this size to a higher value depending on the size of the different FMW clusters and the candidates that are configured for migration. For example, consider a typical SOA EDG deployment with the default number of worker threads per store. If more than 25 JDBC stores or TLOG-in-DB instances or both can fail over to the same WebLogic Server, and the connection caching policy is not changed from Default to Minimal, possible connection contention issues could arise. In these cases, increasing the default WLSSchemaDataSource pool size (maximum capacity) becomes necessary, because each JMS store uses a minimum of two connections, and leasing and JTA also compete for the pool.
About JDBC Persistent Stores for Web Services
By default, web services use the WebLogic Server default persistent store for persistence. This store provides a high-performance storage solution for web services. The default web service persistence store is used by the following advanced features:
- Reliable Messaging
- Make Connection
- SecureConversation
- Message buffering
You also have the option to use a JDBC persistence store in your WebLogic Server web service, instead of the default store. For information about web service persistence, see Managing Web Service Persistence.
Enabling Autoscaling
Kubernetes allows pods to be auto-scaled. If there are two pods running and resource usage exceeds a defined threshold, such as memory, a new pod is automatically started.
For example, when you reach 75% of the available memory, a new pod is started on a different worker node.
To achieve autoscaling, you must first define the resource requirements for each pod type that you want to autoscale, as shown in the sketch below. See the relevant product's Installing and Configuring chapter for details.
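The following is a minimal sketch of how resource requests and limits are typically declared for the server pods of a WebLogic cluster in the domain yaml file. The cluster name and the CPU and memory values are illustrative assumptions only; use values appropriate for your deployment:
  - clusterName: oam_cluster
    serverPod:
      resources:
        requests:
          cpu: "1000m"
          memory: "4Gi"
        limits:
          cpu: "2000m"
          memory: "8Gi"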
Deploying the Kubernetes Metrics Server
- To deploy this server, run the following command:
  kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
- Confirm that the metrics server is running by using the following command:
  kubectl get pods -n kube-system
Parent topic: Enabling Autoscaling
Deploying the Kubernetes HorizontalPodAutoscaler Resource
The following example shows how to automatically scale a WebLogic cluster based on memory and CPU utilization.
Assuming that you have an OAM cluster running in the oamns namespace, use the following command to create a HorizontalPodAutoscaler (HPA) resource targeted at the cluster resource (accessdomain-oam-cluster) that autoscales Oracle WebLogic Server instances from a minimum of two cluster members to a maximum of five cluster members. The scale up or down occurs when the average CPU utilization is consistently over 70%.
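The following is a minimal sketch of an HPA definition that matches this description, assuming the WebLogic Kubernetes Operator Cluster resource accessdomain-oam-cluster (apiVersion weblogic.oracle/v1) in the oamns namespace; the HPA name is an illustrative assumption:
  # hpa.yaml - scale the OAM cluster between 2 and 5 members at 70% average CPU
  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: accessdomain-oam-cluster-hpa
    namespace: oamns
  spec:
    scaleTargetRef:
      apiVersion: weblogic.oracle/v1
      kind: Cluster
      name: accessdomain-oam-cluster
    minReplicas: 2
    maxReplicas: 5
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
Apply the definition with kubectl apply -f hpa.yaml.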
Parent topic: Enabling Autoscaling
Performing Backups and Recoveries for an Enterprise Deployment
Oracle recommends that you follow these guidelines to ensure that you back up the necessary directories and configuration data for an Oracle Identity and Access Management enterprise deployment.
Note:
Some of the static and runtime artifacts listed in this section are hosted from Network Attached Storage (NAS). If possible, back up and recover these volumes from the NAS filer directly rather than from the application servers.
For general information about backing up and recovering Oracle Fusion Middleware products, see Administering Oracle Fusion Middleware.
Table 23-1 lists the static artifacts to back up in a typical Oracle Identity and Access Management enterprise deployment.
Table 23-1 Static Artifacts to Back Up in the Oracle Identity and Access Management Enterprise Deployment
| Type | Host | Tier |
|---|---|---|
| Database Oracle home | DBHOST1 and DBHOST2 | Data Tier |
| Oracle Fusion Middleware Oracle home | WEBHOST1 and WEBHOST2 | Web Tier |
| Oracle Fusion Middleware Oracle home | OIMHOST1 and OIMHOST2 (or NAS Filer) | Application Tier |
| Installation-related files | WEBHOST1, WEBHOST2, and shared storage | N/A |
Table 23-2 lists the runtime artifacts to back up in a typical Oracle Identity and Access Management enterprise deployment.
Table 23-2 Run-Time Artifacts to Back Up in the Oracle Identity and Access Management Enterprise Deployment
| Type | Host | Tier |
|---|---|---|
| Administration Server domain home (ASERVER_HOME) | OIMHOST1 (or NAS Filer) | Application Tier |
| Application home (APPLICATION_HOME) | OIMHOST1 (or NAS Filer) | Application Tier |
| Oracle RAC databases | DBHOST1 and DBHOST2 | Data Tier |
| Scripts and Customizations | Per host | Application Tier |
| Deployment Plan home (DEPLOY_PLAN_HOME) | OIMHOST1 (or NAS Filer) | Application Tier |
| OHS Configuration directory | WEBHOST1 and WEBHOST2 | Web Tier |
Starting and Stopping Components
You can start and stop the various components (such as WebLogic components, an entire domain, a WebLogic Cluster, and so on) after you have deployed them.
Use the following procedures to start and stop different components:
Starting and Stopping the Oracle Unified Directory
Oracle Unified Directory (OUD) is stopped and started by using the helm command.
To stop OUD, set the replica count to zero:
helm upgrade -n <OUDNS> --set replicaCount=0 <OUD_PREFIX> /home/opc/workdir/OUD/samples/kubernetes/helm/oud-ds-rs --reuse-values
For example:
helm upgrade -n oudns --set replicaCount=0 edg /workdir/OUD/samples/kubernetes/helm/oud-ds-rs --reuse-values
To start OUD, set the replica count to the number of servers you want to run:
helm upgrade -n <OUDNS> --set replicaCount=<NO_SERVERS> <OUD_PREFIX> /home/opc/workdir/OUD/samples/kubernetes/helm/oud-ds-rs --reuse-values
For example:
helm upgrade -n oudns --set replicaCount=2 edg /workdir/OUD/samples/kubernetes/helm/oud-ds-rs --reuse-values
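To verify that the OUD pods have stopped or started as expected, you can watch the pods in the OUD namespace (the namespace used in the examples above):
kubectl get pods -n oudns -w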
Parent topic: Starting and Stopping Components
Starting and Stopping OAM and OIG
You cannot start and stop the WebLogic components directly; you must use the following procedures:
Note:
Sample shell scripts that perform start and stop operations are included in the downloads. These scripts are located in the following directory: fmw-kubernetes/<PRODUCT>/kubernetes/domain-lifecycle/
- Starting and Stopping an Entire Domain
- Starting and Stopping a WebLogic Cluster
- Starting and Stopping the Managed Server and Administration Server
Parent topic: Starting and Stopping Components
Starting and Stopping an Entire Domain
To start an entire domain, use the following command:
startDomain.sh -d <DOMAIN_NAME> -n <NAMESPACE>
To stop an entire domain, use the following command:
stopDomain.sh -d <DOMAIN_NAME> -n <NAMESPACE>
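For example, for the governancedomain domain in the oigns namespace used elsewhere in this chapter:
./stopDomain.sh -d governancedomain -n oigns
./startDomain.sh -d governancedomain -n oigns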
Parent topic: Starting and Stopping OAM and OIG
Starting and Stopping a WebLogic Cluster
To start a WebLogic cluster, use the following command:
startCluster.sh -d <DOMAIN_NAME> -n <NAMESPACE> -c <CLUSTER_NAME>
To stop a WebLogic cluster, use the following command:
stopCluster.sh -d <DOMAIN_NAME> -n <NAMESPACE> -c <CLUSTER_NAME>
To perform a rolling restart of a WebLogic cluster, use the following command:
./rollCluster.sh -d <DOMAIN_NAME> -n <NAMESPACE> -c <CLUSTER_NAME>
Parent topic: Starting and Stopping OAM and OIG
Starting and Stopping the Managed Server and Administration Server
To start a Managed Server or the Administration Server, use the following command:
startServer.sh -d <DOMAIN_NAME> -n <NAMESPACE> -s <SERVER_NAME> -k <REPLICAS>
To stop a Managed Server or the Administration Server, use the following command:
stopServer.sh -d <DOMAIN_NAME> -n <NAMESPACE> -s <SERVER_NAME> -k <REPLICAS>
To restart a Managed Server or the Administration Server, use the following command:
restartServer.sh -d <DOMAIN_NAME> -n <NAMESPACE> -s <SERVER_NAME>
Parent topic: Starting and Stopping OAM and OIG
Patching an Enterprise Deployment
You should update the Oracle Identity Governance Kubernetes cluster with bundle patches whenever the patches are released.
Applying Bundle Patches to Helm-Based Deployments
To apply a bundle patch to a Helm-based deployment, upgrade the release with the new image tag:
helm upgrade --reuse-values --set image.tag=<NEW_IMAGE> --namespace <NAMESPACE> --wait <DEPLOYMENT> <CHART>
For example:
helm upgrade \
--reuse-values \
--set image.tag=3.3.0 \
--namespace opns \
--wait \
weblogic-operator \
/workdir/samples/charts/weblogic-operator/
The following example updates an OUD deployment:
helm upgrade --reuse-values --set image.tag=12.2.1.4.0-8-ol7-211013.1053 --namespace oudns edg ../workdir/OUD/samples/kubernetes/helm/oud-ds-rs
This command updates the tag rather than the full image name.
Note:
These commands do not automatically restart the pods. You can restart the pods on a rolling basis by using the following command:
kubectl get pod <podname> -n <namespace> -o yaml | kubectl replace --force -f -
Ensure that each pod has restarted before moving on to the next. You can check the status of a pod by using the following command:
kubectl describe pod <podname> -n <namespace>
Parent topic: Patching an Enterprise Deployment
Applying Bundle Patches to a WebLogic Domain
To apply a bundle patch to a WebLogic domain, update the domain with the new image by using one of the following commands:
- Run the kubectl edit domain command.
- Run the kubectl patch domain command.
This section includes the following topics:
- Restarting the Helper Pod
- Using the kubectl edit domain Command
- Using the kubectl patch domain Command
Parent topic: Patching an Enterprise Deployment
Restarting the Helper Pod
If you have a running helper pod, delete the pod and restart it by using the image to which you are patching, as shown in the sketch below.
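The following is a minimal sketch, assuming a helper pod named helper in the oigns namespace and the registry image and pull secret shown in the patching log later in this chapter; the image tag is a placeholder:
kubectl delete pod helper -n oigns
kubectl run helper -n oigns \
  --image=container-registry.oracle.com/middleware/oig_cpu:<new_image_tag> \
  --overrides='{"spec":{"imagePullSecrets":[{"name":"orclcred"}]}}' \
  -- sleep infinity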
Parent topic: Applying Bundle Patches to a WebLogic Domain
Using the kubectl edit domain Command
To patch the domain by using the kubectl edit domain command, edit the domain resource and update the image value to the new image tag, as shown in the sketch below.
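A minimal sketch, assuming the governancedomain domain in the oigns namespace used in the kubectl patch domain example that follows:
kubectl edit domain governancedomain -n oigns
# In the editor, change the image entry under spec to the new tag, for example:
#   spec:
#     image: container-registry.oracle.com/middleware/oig_cpu:<new_image_tag>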
Parent topic: Applying Bundle Patches to a WebLogic Domain
Using the kubectl patch domain Command
To update the domain by using the kubectl patch domain command, run the following:
$ kubectl patch domain <domain> -n <namespace> --type merge -p '{"spec":{"image":"newimage:tag"}}'
$ kubectl patch domain governancedomain -n oigns --type merge -p '{"spec":{"image":"oracle/oig:12.2.1.4.0-8-ol7-210525.2125"}}'
domain.weblogic.oracle/oimcluster patched
Parent topic: Applying Bundle Patches to a WebLogic Domain
Patching Oracle Identity Governance
The patch domain script performs the following actions:
- Checks if a helper pod exists in the given namespace. If yes, deletes the helper pod.
- Brings up a new helper pod with the new image.
- Stops the Administration Server and the SOA and OIM servers by using serverStartPolicy set as NEVER in the domain definition yaml file.
- Waits for all servers to be stopped (default timeout 2000s).
- Introspects the DB properties, including the credentials, from the job configmap.
- Performs DB schema changes from the helper pod.
- Starts the Administration Server and the SOA and OIM servers by setting serverStartPolicy to IF_NEEDED and the image to the new image tag.
- Waits for all servers to be ready (default timeout 2000s).
The script exits with a non-zero status if a configurable timeout is reached before the target pod count is reached, depending upon the domain configuration. It also exits with a non-zero status if there is any failure while patching the DB schema and domain.
Note:
The script execution causes a downtime while patching the OIG deployment and database schemas.
Parent topic: Patching an Enterprise Deployment
Prerequisites Before Patching
- Review the Manage Domains documentation.
- Have a running OIG deployment in your cluster.
- Have a database that is up and running.
Parent topic: Patching Oracle Identity Governance
Running the Patch Domain Script
To run the patch domain script, specify the inputs required by the script.
$ cd $WORKDIR/samples/domain-lifecycle
$ ./patch_oig_domain.sh -h
$ ./patch_oig_domain.sh -i <target_image_tag> -n <OIGNS>
$ cd /workdir/OIG/samples/domain-lifecycle
$ ./patch_oig_domain.sh -h
$ ./patch_oig_domain.sh -i 12.2.1.4.0-8-ol7-210721.0748 -n oigns
[INFO] Found domain name: governancedomain
[INFO] Image Registry: container-registry.oracle.com/middleware/oig_cpu
[INFO] Domain governancedomain is currently running with image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7-220120.1359
current no of pods under governancedomain are 3
[INFO] The pod helper already exists in namespace oigns.
[INFO] Deleting pod helper
pod "helper" deleted
[INFO] Fetched Image Pull Secret: orclcred
[INFO] Creating new helper pod with image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7-220223.2107
pod/helper created
Checking helper
Running
[INFO] Stopping Admin, SOA and OIM servers in domain governancedomain. This may take some time, monitor log /scratch/oig_post_patch/log/oim_patch_log-2022-09-06_09-43-15/stop_servers.log for details
[INFO] All servers are now stopped successfully. Proceeding with DB Schema changes
[INFO] Patching OIM schemas...
[INFO] DB schema update successful. Check log /scratch/oig_post_patch/log/oim_patch_log-2022-09-06_09-43-15/patch_oim_wls.log for details
[INFO] Starting Admin, SOA and OIM servers with new image container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7-220223.2107
[INFO] Waiting for weblogic pods to be ready..This may take several minutes, do not close the window. Check log /scratch/oig_post_patch/log/oim_patch_log-2022-09-06_09-43-15/monitor_weblogic_pods.log for progress
[SUCCESS] All servers under governancedomain are now in ready state with new image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7-220223.2107
Logs are available at $WORKDIR/samples/domain-lifecycle by default. You can also provide a custom log location to the script.
If the patch domain script fails, see Domain Patching Failure.
Parent topic: Patching Oracle Identity Governance
Patching Oracle Identity Role Intelligence
To patch Oracle Identity Role Intelligence, perform the following high-level steps:
- Restart the Oracle Identity Role Intelligence (OIRI) CLI with the new image.
- Restart the Data Ingester CLI with the new image.
- Use the new CLI to upgrade the OIRI deployment to the new image.
Before you start the steps below, ensure that you have the latest container image in your container registry or available locally on each worker node.
- Delete the CLI by using the following command:
  kubectl delete -f oiri-cli.yaml
- Edit the oiri-cli.yaml file and change the image: value to have the new image tag.
- Recreate the CLI by using the following command:
  kubectl create -f oiri-cli.yaml
  You can confirm that the CLI has been created by using the following command:
  kubectl get pods -n <OIRINS>
- Delete the DING CLI by using the following command:
  kubectl delete -f ding-cli.yaml
- Edit the ding-cli.yaml file and change the image: value to have the new image tag.
- Recreate the DING CLI by using the following command:
  kubectl create -f ding-cli.yaml
  You can confirm that the DING CLI has been created by using the following command:
  kubectl get pods -n <DINGNS>
Then, update the values and DING configuration to use the new images, and upgrade the OIRI deployment:
kubectl exec -n <OIRINS> -ti oiri-cli /updateValuesYaml.sh \
  --oiriapiimage oiri:<NEW_IMAGE_VER> \
  --oiriuiimage oiri-ui:<NEW_IMAGE_VER> \
  --dingimage oiri-ding:<NEW_IMAGE_VER>
./updateConfig.sh \
  --dingimage oiri-ding:<NEW_IMAGE_VER>
helm upgrade oiri /helm/oiri -f /app/k8s/values.yaml -n <OIRINS>
Parent topic: Patching an Enterprise Deployment
Patching Oracle Advanced Authentication
Perform the following steps to patch Oracle Advanced Authentication:
Parent topic: Patching an Enterprise Deployment
Applying One-Off/Interim Patches
If you need to apply one-off patches, you have to create your own image with those patches applied. This section provides instructions for building an OIG image with the WebLogic Image Tool.
- Prerequisites for Building an OIG Image
- Downloading and Setting Up the WebLogic Image Tool
- Downloading the Packages/Installers and Patches
- Downloading the Required Build Files
- Creating the Image
- Generating the Sample Dockerfile
Parent topic: Patching an Enterprise Deployment
Prerequisites for Building an OIG Image
The following prerequisites are necessary before building OIG images with the WebLogic Image Tool:
- A working installation of Docker 18.03.1 or later.
- Bash version 4.0 or later, to enable the command complete feature.
- The JAVA_HOME environment variable set to the location of your JDK. For example: /u01/oracle/products/jdk.
Parent topic: Applying One-Off/Interim Patches
Downloading and Setting Up the WebLogic Image Tool
Download the latest version of the WebLogic Image Tool from the release page and complete the following steps:
Parent topic: Applying One-Off/Interim Patches
Downloading the Packages/Installers and Patches
Download the required installers from the Oracle Software Delivery Cloud and save them in a directory of your choice. For example: <work directory>/stage:
- Oracle Identity and Access Management 12.2.1.4.0
- Oracle Fusion Middleware 12c Infrastructure 12.2.1.4.0
- Oracle SOA Suite 12.2.1.4.0
- Oracle Service Bus 12.2.1.4.0
- Oracle JDK
Note:
If the image is required to have patches included, download the patches from My Oracle Support and copy them to <work directory>/stage.
Parent topic: Applying One-Off/Interim Patches
Downloading the Required Build Files
The OIG image requires additional files for creating the OIG domain and starting the WebLogic Servers.
Parent topic: Applying One-Off/Interim Patches
Creating the Image
Navigate to the imagetool/bin directory and run the following commands. In the examples below, replace <work directory>/stage with the directory where the appropriate files reside.
Parent topic: Applying One-Off/Interim Patches
Generating the Sample Dockerfile
If you want to review a sample Dockerfile created with the imagetool, use the imagetool command with the --dryRun option:
./imagetool.sh @<work directory>/build/buildArgs --dryRun
Parent topic: Applying One-Off/Interim Patches
Performing Backup and Restore Operations
Ensure that you keep backups outside of the Kubernetes cluster so that they are available even if the cluster has issues.
A Kubernetes deployment consists of four parts: Kubernetes objects, container images, persistent volumes, and the database.
Kubernetes Objects
The simplest way to back up and restore the Kubernetes objects is by using the Kubernetes Snapshot tool. This tool unloads all the Kubernetes objects in a namespace into a series of files, which can then be loaded onto another cluster if needed.
For instructions to use the Kubernetes Snapshot tool, see Backing Up and Restoring the Kubernetes Objects.
Parent topic: Performing Backup and Restore Operations
Container Images
If the container images are stored in a container registry, any cluster you use can easily access and use the required images. However, if you want to restore the images to a new Kubernetes worker node, ensure that the necessary Kubernetes images are available on that worker node.
Parent topic: Performing Backup and Restore Operations
Persistent Volumes
Persistent volumes are essentially directories on a disk, and they are typically stored on NFS storage. To take a backup of a persistent volume, first mount the directory on a host that has access to the storage, and then use your preferred backup tool to back it up. There are several ways to back up data, such as using hardware snapshots, tar, and rsync. See the relevant documentation for the backup tool that you choose.
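For example, a minimal sketch using tar, assuming the persistent volume is already mounted at /mnt/iampvs on a host with access to the NFS storage (paths are illustrative):
# Archive the mounted persistent volume to a backup location outside the cluster
tar -czf /backup/iampvs-$(date +%F).tar.gz -C /mnt iampvs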
Parent topic: Performing Backup and Restore Operations
Database
The Oracle database can either be protected by using Oracle Data Guard (see Creating a Backup/Restore Job), or backed up to disk, tape, or a backup appliance. Oracle recommends the use of RMAN for database backups. For information about the RMAN commands, see the Backup and Recovery Reference documentation for Oracle Database release 21c.
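A minimal RMAN sketch, assuming backups are written to the database's configured disk destination; channels, destinations, and retention policies should be adapted to your environment:
rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> DELETE NOPROMPT OBSOLETE;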
Parent topic: Performing Backup and Restore Operations
Performing Maintenance on a Kubernetes Worker Node
If you have to shut down a Kubernetes worker node for maintenance/patching or just to refresh it, you should first gracefully move any running Kubernetes services on that worker node.
To drain the node (gracefully move its workloads to other nodes and mark it as unschedulable), use the following command:
kubectl drain <node name>
After the maintenance is complete, allow the node to accept workloads again by using the following command:
kubectl uncordon <node name>
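For example, a sketch for the worker node k8worker1 labeled earlier in this chapter; the extra flags are commonly required when DaemonSet pods or emptyDir volumes are present:
kubectl drain k8worker1 --ignore-daemonsets --delete-emptydir-data
# ... perform maintenance or patching on the node ...
kubectl uncordon k8worker1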
For more information about draining, see Safely Drain a Node.
Adjusting the Server Pods Liveness Probe
To configure a more aggressive probe, edit the domain and change the serverPod.livenessProbe values to the following:
livenessProbe:
failureThreshold: 1
initialDelaySeconds: 30
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 3
After the change, the domain.yaml file has an entry similar to the following:
  - clusterName: oam_cluster
    serverPod:
      livenessProbe:
        failureThreshold: 1
        initialDelaySeconds: 30
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 3
Considerations for Cross-Component Wiring
Cross-Component Wiring (CCW) enables the FMW components to publish and bind to some of the services available in a WLS domain, by using specific APIs.
CCW performs a bind of the wiring information only during the Configuration Wizard session or when manually forced by the WLS domain administrator. When you add a WebLogic Server to a cluster (in a scale-out or scale-up operation in a static or dynamic cluster), the new server publishes its services, but the clients that use those services are not automatically updated and bound to the new service provider. The update does not happen because the existing servers that are already bound to a CCW table do not automatically learn about the new member that joins the cluster. The same applies to ESS and WSMPM when they provide their services to SOA: both publish their service to the service table dynamically, but the SOA servers do not know about these updates unless a bind is forced again.
Note:
For more information, see:
- Wiring Components to Work Together in Administering Oracle Fusion Middleware
- Oracle-Developed Modules for Oracle HTTP Server in Administering Oracle HTTP Server
Cross-Component Wiring for WSMPM and ESS
The cross-component wiring t3 information is used by WSMPM and ESS to obtain the list of servers to be used in a JNDI invocation URL.
The CCW t3 information limits the impact of the lack of dynamic updates. When the invocation is done, the JNDI URL is used to obtain the RMI stubs with the list of members in the cluster. The JNDI URL does not need to contain the entire list of servers. The RMI stubs contain the list of all the servers in the cluster at any given time, and are used to load balance requests across all of them. Therefore, without a bind, the servers that are added to the cluster are used even if not present in the bind URL. The only drawback is that at least one of the original servers provided in the first CCW bind must be up to keep the system working when the cluster expands or shrinks. To avoid this issue, you can use the cluster name syntax in the service table instead of using the static list of members.
cluster:t3://cluster_name
When you use cluster:t3://cluster_name, the CCW invocation fetches the complete list of members in the cluster at any given time, thus avoiding any dependency on the initial servers and accounting for every member that is alive in the cluster at that time.
Parent topic: Considerations for Cross-Component Wiring
Using the cluster_name Syntax with WSMPM
This procedure makes WSMPM use a t3 syntax that accounts for servers being added to or removed from the WSMPM cluster without having to update the CCW information again.
The CCW t3 information is configured to use the cluster syntax by default. You only need to verify that the cluster syntax is used and edit, if required.
- Sign in to Fusion Middleware Control by using the administrator's account. For example: weblogic_iam.
- From the WebLogic Domain drop-down menu, select Cross component Wiring - Service Tables.
- Select the OWSM Policy Manager urn:oracle:fmw.owsm-pm:t3 row.
- Verify that the cluster syntax is used. If not, click Edit and update the t3 and t3s values with the cluster name syntax.
- Click OK.
- From the WebLogic Domain drop-down menu, select Cross component Wiring - Components.
- Select OWSM Agent.
- In the Client Configuration section, select the owsm-pm-connection-t3 row and click Bind.
- Click OK.
Note:
The wiring table is updated with each cluster scale out or scale up, but it does not replace the cluster syntax until a manual rebind is used. Hence, it withstands all updates (additions and removals) in the lifecycle of the cluster.
Parent topic: Considerations for Cross-Component Wiring