7.1.9 Verifying the OIG WLST Deployment
Verifying the Domain, Pods and Services
Verify that the domain, server pods, and services are created and are in the READY state with a status of 1/1 by running the following command:

kubectl get all,domains -n <domain_namespace>

For example:

kubectl get all,domains -n oigns
The output will look similar to the following:

NAME READY STATUS RESTARTS AGE
pod/governancedomain-adminserver 1/1 Running 0 19m30s
pod/governancedomain-create-fmw-infra-sample-domain-job-8cww8 0/1 Completed 0 47m
pod/governancedomain-oim-server1 1/1 Running 0 16m25s
pod/governancedomain-soa-server1 1/1 Running 0 16m
pod/helper 1/1 Running 0 3h50m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/governancedomain-adminserver ClusterIP None <none> 7001/TCP 28m
service/governancedomain-cluster-oim-cluster ClusterIP 10.106.198.40 <none> 14002/TCP,14000/TCP 25m
service/governancedomain-cluster-soa-cluster ClusterIP 10.102.218.11 <none> 7003/TCP 25m
service/governancedomain-oim-server1 ClusterIP None <none> 14002/TCP,14000/TCP 16m24s
service/governancedomain-oim-server2 ClusterIP 10.97.32.112 <none> 14002/TCP,14000/TCP 25m
service/governancedomain-oim-server3 ClusterIP 10.100.233.109 <none> 14002/TCP,14000/TCP 25m
service/governancedomain-oim-server4 ClusterIP 10.96.154.17 <none> 14002/TCP,14000/TCP 25m
service/governancedomain-oim-server5 ClusterIP 10.103.222.213 <none> 14002/TCP,14000/TCP 25m
service/governancedomain-soa-server1 ClusterIP None <none> 7003/TCP 25m
service/governancedomain-soa-server2 ClusterIP 10.104.43.118 <none> 7003/TCP 25m
service/governancedomain-soa-server3 ClusterIP 10.110.180.120 <none> 7003/TCP 25m
service/governancedomain-soa-server4 ClusterIP 10.99.161.73 <none> 7003/TCP 25m
service/governancedomain-soa-server5 ClusterIP 10.97.67.196 <none> 7003/TCP 25m
NAME COMPLETIONS DURATION AGE
job.batch/governancedomain-create-fmw-infra-sample-domain-job 1/1 3m6s 125m
NAME AGE
domain.weblogic.oracle/governancedomain 24m
NAME AGE
cluster.weblogic.oracle/governancedomain-oim-cluster 23m
cluster.weblogic.oracle/governancedomain-soa-cluster 23m
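The same readiness check can be scripted for use in a pipeline. The sketch below is illustrative only (the check_ready helper is not part of the Oracle scripts) and assumes the kubectl column layout shown above; it is demonstrated against a captured sample, and on a live cluster you would instead pipe in the output of kubectl get pods -n oigns --no-headers:

```shell
# Illustrative readiness check: fail if any Running pod is not fully ready.
# Column 2 is READY (e.g. 1/1), column 3 is STATUS.
check_ready() {
  awk '$3 == "Running" && $2 != "1/1" { bad++; print "NOT READY: " $1 }
       END { exit bad ? 1 : 0 }'
}

# Demonstrated on a captured sample; live: kubectl get pods -n oigns --no-headers
check_ready <<'EOF' && echo "all Running pods are 1/1"
governancedomain-adminserver                                1/1   Running     0   19m
governancedomain-create-fmw-infra-sample-domain-job-8cww8   0/1   Completed   0   47m
governancedomain-oim-server1                                1/1   Running     0   16m
governancedomain-soa-server1                                1/1   Running     0   16m
EOF
```

Completed Job pods (READY 0/1) are deliberately ignored, since only Running pods are expected to reach 1/1.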
The default domain created by the script has the following characteristics:
- An Administration Server named AdminServer, listening on port 7001.
- A configured OIG cluster named oim_cluster of size 5.
- A configured SOA cluster named soa_cluster of size 5.
- One started OIG managed server, named oim_server1, listening on port 14000.
- One started SOA managed server, named soa_server1, listening on port 7003.
- Log files that are located in <persistent_volume>/logs/<domainUID>.
Verifying the Domain
Run the following command to describe the domain:

kubectl describe domain <domain_uid> -n <domain_namespace>

For example:

kubectl describe domain governancedomain -n oigns
The output will look similar to the following:

Name: governancedomain
Namespace: oigns
Labels: weblogic.domainUID=governancedomain
Annotations: <none>
API Version: weblogic.oracle/v9
Kind: Domain
Metadata:
Creation Timestamp: <DATE>
Generation: 1
Managed Fields:
API Version: weblogic.oracle/v9
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:labels:
.:
f:weblogic.domainUID:
f:spec:
.:
f:adminServer:
.:
f:adminChannelPortForwardingEnabled:
f:serverPod:
.:
f:env:
f:serverStartPolicy:
f:clusters:
f:dataHome:
f:domainHome:
f:domainHomeSourceType:
f:failureRetryIntervalSeconds:
f:failureRetryLimitMinutes:
f:httpAccessLogInLogHome:
f:image:
f:imagePullPolicy:
f:imagePullSecrets:
f:includeServerOutInPodLog:
f:logHome:
f:logHomeEnabled:
f:logHomeLayout:
f:maxClusterConcurrentShutdown:
f:maxClusterConcurrentStartup:
f:maxClusterUnavailable:
f:replicas:
f:serverPod:
.:
f:env:
f:volumeMounts:
f:volumes:
f:serverStartPolicy:
f:webLogicCredentialsSecret:
.:
f:name:
Manager: kubectl-client-side-apply
Operation: Update
Time: <DATE>
API Version: weblogic.oracle/v9
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:clusters:
f:conditions:
f:observedGeneration:
f:servers:
f:startTime:
Manager: Kubernetes Java Client
Operation: Update
Subresource: status
Time: <DATE>
Resource Version: 1247307
UID: 4933be73-df97-493f-a20c-bf1e24f6b3f2
Spec:
Admin Server:
Admin Channel Port Forwarding Enabled: true
Server Pod:
Env:
Name: USER_MEM_ARGS
Value: -Djava.security.egd=file:/dev/./urandom -Xms512m -Xmx1024m
Server Start Policy: IfNeeded
Clusters:
Name: governancedomain-oim-cluster
Name: governancedomain-soa-cluster
Data Home:
Domain Home: /u01/oracle/user_projects/domains/governancedomain
Domain Home Source Type: PersistentVolume
Failure Retry Interval Seconds: 120
Failure Retry Limit Minutes: 1440
Http Access Log In Log Home: true
Image: container-registry.oracle.com/middleware/oig_cpu:14.1.2.1.0-jdk17-ol8-<YYMMDD>
Image Pull Policy: IfNotPresent
Image Pull Secrets:
Name: orclcred
Include Server Out In Pod Log: true
Log Home: /u01/oracle/user_projects/domains/logs/governancedomain
Log Home Enabled: true
Log Home Layout: ByServers
Max Cluster Concurrent Shutdown: 1
Max Cluster Concurrent Startup: 0
Max Cluster Unavailable: 1
Replicas: 1
Server Pod:
Env:
Name: JAVA_OPTIONS
Value: -Dweblogic.StdoutDebugEnabled=false
Name: USER_MEM_ARGS
Value: -Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m
Volume Mounts:
Mount Path: /u01/oracle/user_projects/domains
Name: weblogic-domain-storage-volume
Volumes:
Name: weblogic-domain-storage-volume
Persistent Volume Claim:
Claim Name: governancedomain-domain-pvc
Server Start Policy: IfNeeded
Web Logic Credentials Secret:
Name: oig-domain-credentials
Status:
Clusters:
Cluster Name: oim_cluster
Conditions:
Last Transition Time: <DATE>
Status: True
Type: Available
Last Transition Time: <DATE>
Status: True
Type: Completed
Label Selector: weblogic.domainUID=governancedomain,weblogic.clusterName=oim_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Observed Generation: 2
Ready Replicas: 1
Replicas: 1
Replicas Goal: 1
Cluster Name: soa_cluster
Conditions:
Last Transition Time: <DATE>
Status: True
Type: Available
Last Transition Time: <DATE>
Status: True
Type: Completed
Label Selector: weblogic.domainUID=governancedomain,weblogic.clusterName=soa_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Observed Generation: 1
Ready Replicas: 1
Replicas: 1
Replicas Goal: 1
Conditions:
Last Transition Time: <DATE>
Status: True
Type: Available
Last Transition Time: <DATE>
Status: True
Type: Completed
Observed Generation: 1
Servers:
Health:
Activation Time: <DATE>
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: worker-node2
Pod Phase: Running
Pod Ready: True
Server Name: AdminServer
State: RUNNING
State Goal: RUNNING
Cluster Name: oim_cluster
Health:
Activation Time: <DATE>
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: worker-node1
Pod Phase: Running
Pod Ready: True
Server Name: oim_server1
State: RUNNING
State Goal: RUNNING
Cluster Name: oim_cluster
Server Name: oim_server2
State: SHUTDOWN
State Goal: SHUTDOWN
Cluster Name: oim_cluster
Server Name: oim_server3
State: SHUTDOWN
State Goal: SHUTDOWN
Cluster Name: oim_cluster
Server Name: oim_server4
State: SHUTDOWN
State Goal: SHUTDOWN
Cluster Name: oim_cluster
Server Name: oim_server5
State: SHUTDOWN
State Goal: SHUTDOWN
Cluster Name: soa_cluster
Health:
Activation Time: <DATE>
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: worker-node1
Pod Phase: Running
Pod Ready: True
Server Name: soa_server1
State: RUNNING
State Goal: RUNNING
Cluster Name: soa_cluster
Server Name: soa_server2
State: SHUTDOWN
State Goal: SHUTDOWN
Cluster Name: soa_cluster
Server Name: soa_server3
State: SHUTDOWN
State Goal: SHUTDOWN
Cluster Name: soa_cluster
Server Name: soa_server4
State: SHUTDOWN
State Goal: SHUTDOWN
Cluster Name: soa_cluster
Server Name: soa_server5
State: SHUTDOWN
State Goal: SHUTDOWN
Start Time: <DATE>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Created 35m weblogic.operator Domain governancedomain was created.
Normal Changed 34m (x1127 over 35m) weblogic.operator Domain governancedomain was changed.
Warning Failed 34m (x227 over 35m) weblogic.operator Domain governancedomain failed due to 'Domain validation error': Cluster resource 'governancedomain-oim-cluster' not found in namespace 'oigns'
Cluster resource 'governancedomain-soa-cluster' not found in namespace 'oigns'. Update the domain resource to correct the validation error.
Warning Unavailable 17m weblogic.operator Domain governancedomain is unavailable: an insufficient number of its servers that are expected to be running are ready.
Warning Incomplete 17m weblogic.operator Domain governancedomain is incomplete for one or more of the following reasons: there are failures detected, there are pending server shutdowns, or not all servers expected to be running are ready and at their target image, auxiliary images, restart version, and introspect version.
Normal Completed 13m (x2 over 26m) weblogic.operator Domain governancedomain is complete because all of the following are true: there is no failure detected, there are no pending server shutdowns, and all servers expected to be running are ready and at their target image, auxiliary images, restart version, and introspect version.
Normal Available 13m (x2 over 26m) weblogic.operator Domain governancedomain is available: a sufficient number of its servers have reached the ready state.
In the Status section of the output, the available servers and clusters are listed.
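Rather than scanning the full describe output by eye, the domain-level Conditions can be checked in a script. The sketch below is illustrative (the domain_ok helper is hypothetical, not an Oracle-provided script) and is demonstrated on a captured snippet of the output above; on a live cluster you would pipe in the output of kubectl describe domain governancedomain -n oigns:

```shell
# Illustrative check: confirm the describe output reports both an
# Available and a Completed condition whose preceding Status line is True.
domain_ok() {
  awk '/Status: *True/ { t=1; next }
       /Type: *Available/ { if (t) avail=1; t=0; next }
       /Type: *Completed/ { if (t) done=1; t=0; next }
       { t=0 }
       END { exit (avail && done) ? 0 : 1 }'
}

# Demonstrated on a captured snippet; live:
#   kubectl describe domain governancedomain -n oigns | domain_ok
domain_ok <<'EOF' && echo "domain is Available and Completed"
  Conditions:
    Last Transition Time:  <DATE>
    Status:                True
    Type:                  Available
    Last Transition Time:  <DATE>
    Status:                True
    Type:                  Completed
EOF
```

Note that cluster-level conditions in the output would also match this pattern, so for a strict domain-level check you would first isolate the top-level Conditions block.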
Verifying the Pods
Run the following command to view the pods and the nodes they are running on:

kubectl get pods -n <domain_namespace> -o wide

For example:

kubectl get pods -n oigns -o wide
The output will look similar to the following:

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
governancedomain-adminserver 1/1 Running 0 24m 10.244.1.42 worker-node2 <none> <none>
governancedomain-create-fmw-infra-sample-domain-job-8cww8 0/1 Completed 0 52m 10.244.1.40 worker-node2 <none> <none>
governancedomain-oim-server1 1/1 Running 0 52m 10.244.1.44 worker-node2 <none> <none>
governancedomain-soa-server1 1/1 Running 0 21m 10.244.1.43 worker-node2 <none> <none>
helper 1/1 Running 0 3h55m 10.244.1.39 worker-node2 <none> <none>
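The -o wide columns also make it easy to summarize how the pods are distributed across worker nodes, which can help when diagnosing scheduling or resource pressure. An illustrative helper (the pods_per_node name is hypothetical), demonstrated on a captured sample; live, you would pipe in kubectl get pods -n oigns -o wide --no-headers:

```shell
# Count pods per node from 'kubectl get pods -o wide' output (NODE is column 7).
pods_per_node() {
  awk '{ count[$7]++ } END { for (n in count) print count[n], "pod(s) on", n }'
}

# Demonstrated on a captured sample; node order in the summary is not fixed.
pods_per_node <<'EOF'
governancedomain-adminserver   1/1   Running   0   24m   10.244.1.42   worker-node2   <none>   <none>
governancedomain-oim-server1   1/1   Running   0   52m   10.244.1.44   worker-node2   <none>   <none>
governancedomain-soa-server1   1/1   Running   0   21m   10.244.1.43   worker-node1   <none>   <none>
EOF
```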
Configuring the Ingress
If the domain deploys successfully and all of the above checks pass, you are ready to configure the Ingress. See Configuring Ingress.