# Monitor a Domain and Publish Logs
After the Oracle SOA Suite domain is set up, you can:
## Monitor the Oracle SOA Suite instance using Prometheus and Grafana
Using the WebLogic Monitoring Exporter, you can scrape runtime metrics from a running Oracle SOA Suite instance and monitor them using Prometheus and Grafana.
To set up monitoring for an Oracle SOA Suite instance, follow these steps. For more details on the WebLogic Monitoring Exporter, see here.
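For orientation, the WebLogic Monitoring Exporter is driven by a YAML configuration that selects which runtime MBean attributes to expose as Prometheus metrics. The snippet below is a minimal illustrative sketch, not the exact configuration used by the setup steps; the attribute names shown are standard WebLogic runtime MBean attributes, and the prefixes are arbitrary:

```
# Illustrative exporter configuration: expose a few server and JVM metrics.
metricsNameSnakeCase: true
queries:
- key: name
  keyName: location
  prefix: wls_server_
  applicationRuntimes:
    key: name
    keyName: app
    componentRuntimes:
      type: WebAppComponentRuntime
      prefix: wls_webapp_
      key: name
      values: [deploymentState, contextRoot, openSessionsCurrentCount]
- JVMRuntime:
    prefix: wls_jvm_
    key: name
    values: [heapFreeCurrent, heapSizeCurrent, heapSizeMax]
```

Prometheus then scrapes the exporter's metrics endpoint on each server, and Grafana dashboards are built over the resulting time series.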
## Publish WebLogic Server logs into Elasticsearch
WebLogic Server logs can be published to Elasticsearch using Fluentd. See Fluentd configuration steps.
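As a sketch of what such a Fluentd setup involves: a tail source over the server log files and an Elasticsearch output. This assumes the domain home path used elsewhere on this page and the fluent-plugin-elasticsearch output plugin; the actual configuration steps linked above may differ:

```
# Tail WebLogic server log files under the domain home.
<source>
  @type tail
  path /u01/oracle/user_projects/domains/soainfra/servers/*/logs/*.log
  pos_file /tmp/weblogic-server-logs.pos
  tag soa.server.logs
  read_from_head true
  <parse>
    @type none
  </parse>
</source>

# Forward the records to Elasticsearch in logstash-style indices.
<match soa.server.logs>
  @type elasticsearch
  host elasticsearch.default.svc.cluster.local
  port 9200
  logstash_format true
</match>
```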
## Publish SOA server diagnostics logs into Elasticsearch
This section shows you how to publish diagnostics logs to Elasticsearch and view them in Kibana. For publishing operator logs, see this sample.
If you have not already set up Elasticsearch and Kibana for logs collection, refer to this document and complete the setup.
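Optionally, before proceeding, you can confirm that Elasticsearch is reachable inside the cluster. A quick check, assuming the Elasticsearch service runs in the `default` namespace (the same host the sample logstash configuration below points to) and using `busybox` as an arbitrary throwaway client image:

```
kubectl get svc -n default
kubectl run -it --rm es-check --image=busybox --restart=Never -- wget -qO- http://elasticsearch.default.svc.cluster.local:9200
```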
Diagnostics and other logs can be pushed to the Elasticsearch server using a logstash pod. The logstash pod must have access to the shared domain home or the log location. For the Oracle SOA Suite domain, the persistent volume of the domain home can be used in the logstash pod. To create the logstash pod, follow these steps:
- Get the persistent volume claim details of the domain home of the Oracle SOA Suite domain. The following command lists the persistent volume claims in the namespace `soans`. In the example below, the persistent volume claim is `soainfra-domain-pvc`:

  ```
  kubectl get pvc -n soans
  ```
  Sample output:

  ```
  NAME                  STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
  soainfra-domain-pvc   Bound    soainfra-domain-pv   10Gi       RWX            soainfra-domain-storage-class   xxd
  ```
- Create the logstash configuration file (`logstash.conf`). Below is a sample logstash configuration to push diagnostic logs of all servers available at `DOMAIN_HOME/servers/<server_name>/logs/<server_name>-diagnostic.log`:

  ```
  input {
    file {
      path => "/u01/oracle/user_projects/domains/soainfra/servers/**/logs/*-diagnostic.log"
      start_position => "beginning"
    }
  }

  filter {
    grok {
      match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:servername}> <%{DATA:timer}> <<%{DATA:kernel}>> <> <%{DATA:uuid}> <%{NUMBER:timestamp}> <%{DATA:misc}> <%{DATA:log_number}> <%{DATA:log_message}>" ]
    }
  }

  output {
    elasticsearch {
      hosts => ["elasticsearch.default.svc.cluster.local:9200"]
    }
  }
  ```
- Copy the `logstash.conf` into `/u01/oracle/user_projects/domains` so that it can be used for the logstash deployment, using the Administration Server pod (for example, the `soainfra-adminserver` pod in namespace `soans`):

  ```
  kubectl cp logstash.conf soans/soainfra-adminserver:/u01/oracle/user_projects/domains --namespace soans
  ```
- Create a deployment YAML (`logstash.yaml`) for the logstash pod using the domain home persistent volume claim. Make sure to point the logstash configuration file to the correct location (for example, copy `logstash.conf` to `/u01/oracle/user_projects/domains/logstash.conf`) and to reference the correct domain home persistent volume claim. Below is a sample logstash deployment YAML:

  ```
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: logstash-soa
    namespace: soans
  spec:
    selector:
      matchLabels:
        app: logstash-soa
    template: # create pods using pod definition in this template
      metadata:
        labels:
          app: logstash-soa
      spec:
        volumes:
        - name: soainfra-domain-storage-volume
          persistentVolumeClaim:
            claimName: soainfra-domain-pvc
        - name: shared-logs
          emptyDir: {}
        containers:
        - name: logstash
          image: logstash:6.6.0
          command: ["/bin/sh"]
          args: ["/usr/share/logstash/bin/logstash", "-f", "/u01/oracle/user_projects/domains/logstash.conf"]
          imagePullPolicy: IfNotPresent
          volumeMounts:
          - mountPath: /u01/oracle/user_projects
            name: soainfra-domain-storage-volume
          - name: shared-logs
            mountPath: /shared-logs
          ports:
          - containerPort: 5044
            name: logstash
  ```
- Deploy logstash to start publishing logs to Elasticsearch:

  ```
  kubectl create -f logstash.yaml
  ```
- Now, you can view the diagnostics logs using Kibana with the index pattern `logstash-*`. To confirm that logs are actually flowing, see the verification sketch after this list.
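To verify end to end, you can check the logstash pod status and logs, and confirm that `logstash-*` indices are being created in Elasticsearch. A minimal sketch; the deployment name, label, and namespace come from the sample `logstash.yaml` above, and `busybox` is an arbitrary client image:

```
kubectl get pods -n soans -l app=logstash-soa
kubectl logs -n soans deployment/logstash-soa
kubectl run -it --rm es-check --image=busybox --restart=Never -- wget -qO- 'http://elasticsearch.default.svc.cluster.local:9200/_cat/indices?v'
```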