7 Logging
You can use Oracle Cloud Infrastructure (OCI) or external tools to configure persistent logging for Oracle Blockchain Platform Enterprise Edition.
- Persistent Logging with OCI
- Persistent Logging with External Tools
- Set the Log Level for Operator Pods
Overview
Oracle Blockchain Platform Enterprise Edition is based on Kubernetes, where logs are stored locally on each pod. To prevent logs from being lost when a pod is deleted, you must set up persistent logging, where logs are stored in a central location. There are two methods you can use for persistent logging. You can use an external logging tool such as Fluentd and Elastic Stack. Alternatively, if you are running on Oracle Kubernetes Engine, you can use the centralized logging solution supported by Oracle Cloud Infrastructure (OCI).
Persistent Logging with OCI
To store logs centrally using OCI, you define log groups and configure agents to parse the logs. The logs are stored in the Object Storage service. Before you configure persistent logging with OCI, your deployment must meet the following requirements.
- A dynamic group in the Kubernetes compartment. To create a dynamic group, click Identity & Security in the navigation menu. Under Identity, click Domains and then click Create dynamic group. Add the following rule to your dynamic group in the Matching rules section, substituting the Oracle Cloud ID (OCID) of your compartment.
  ```
  instance.compartment.id = '<compartment_ocid>'
  ```
  For example:
  ```
  instance.compartment.id = 'ocid1.compartment.oc1..aaaaaaaa4ws3242343243244nyb423432rwqsxigt2sia'
  ```
- A policy that allows the dynamic group to interact with the logging service. To create a policy, click Identity & Security in the navigation menu. Under Identity, click Policies and then click Create Policy.
  ```
  Allow dynamic-group <my-group> to use log-content in compartment <target_compartment_name>
  ```
  For example:
  ```
  Allow dynamic-group okelogging to use log-content in compartment BlockchainTeam
  ```
- Click the menu icon in the upper left corner, search for log, and then select Logs.
- Create a log group. Under Logging, select Log Groups and then click Create Log Group.
- Create a custom log. Under Logging, select Logs and then click Create custom log to open the Create custom log wizard. Select the log group that you created previously.
- On the second page of the Create custom log wizard, create an agent configuration for the custom log, specifying the Kubernetes compartment and the dynamic group.
- In the Configure log inputs section of the Agent configuration page, configure the log input for the agent to use the following file path, which is the default for application containers. Select Log path from the Input type list. Enter the following file path for File paths. This path includes all container logs, including system and service containers.
  ```
  /var/log/pods/*/*/*.log
  ```
- Wait until logs are ingested. Typically, logs are ingested in 3-5 minutes.
- Select Logs, navigate to the custom log, and then click Explore Log. You can analyze, parse, and filter the logs.
- You can also use OCI to store the logs in the Object Storage service.
- Create a connector. Under Logging, select Connectors and then click Create Connector. Select Logging as the Source and Object Storage as the Target.
- Configure the source and target as needed.
- Under the Enable logs section, set Enable Log to Enabled for the connector that you created. The Create Log panel is displayed, with a default value for log retention time.
- Wait until logs are ingested. Typically, logs are ingested in 3-5 minutes. You can then see read and write operations in the connector logs. Logs are now being written to the Object Storage service.
For more information, see Monitor Kubernetes and OKE clusters with OCI Logging Analytics.
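The three-level wildcard in the agent's file path corresponds to the directory layout that Kubernetes uses under /var/log/pods. As a local sanity check, you can simulate that layout and confirm what the glob matches; the namespace, pod name, and UID below are made-up examples of the real pattern.

```shell
# Simulate the /var/log/pods layout that the agent glob matches.
# Directory names below are hypothetical examples of the real pattern:
#   <namespace>_<pod-name>_<pod-uid>/<container-name>/<restart-count>.log
root=$(mktemp -d)
mkdir -p "$root/var/log/pods/default_mypod_0000-1111/mycontainer"
echo 'sample log line' > "$root/var/log/pods/default_mypod_0000-1111/mycontainer/0.log"

# The same wildcard pattern used in the agent configuration:
ls "$root"/var/log/pods/*/*/*.log
```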
Persistent Logging with External Tools
- Create a Kubernetes namespace called fluentd.
- Use the following command to create a role-based access control resource.
  ```
  kubectl create -f fluentd-rbac.yaml -n fluentd
  ```
  Use the following fluentd-rbac.yaml file with the command.
  ```yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: fluentd
    namespace: fluentd
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: fluentd
    namespace: fluentd
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    - namespaces
    verbs:
    - get
    - list
    - watch
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: fluentd
  roleRef:
    kind: ClusterRole
    name: fluentd
    apiGroup: rbac.authorization.k8s.io
  subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: fluentd
  ```
- Use the following command to create a ConfigMap object for Fluentd or Elastic Stack.
  ```
  kubectl create -f fluentd-configmap_removefilter_ascii.yaml -n fluentd
  ```
  Use the following fluentd-configmap_removefilter_ascii.yaml file with the command. In the file, remove the number sign (#) to uncomment only one of the following lines.
  - Uncomment @include file-fluent.conf if you are writing to a file in the /tmp/obp.log path.
  - Uncomment @include elastic-fluent.conf if you are writing to Elasticsearch.
  ```yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: fluentd-config
    namespace: fluentd
  data:
    fluent.conf: |-
      ################################################################
      # This source gets all logs from local docker host
      #@include pods-kind-fluent.conf
      #@include pods-fluent.conf
      @include pods-nofilter.conf
      @include file-fluent.conf
      #@include elastic-fluent.conf
    pods-nofilter.conf: |-
      <source>
        @type tail
        path /var/log/containers/*.log
        format /^(?<time>.+) (?<stream>stdout|stderr) (?<logtag>.)? (?<log>.*)$/
        pos_file /var/log/fluentd-containers.log.pos
        tag kubernetes.*
        read_from_head true
      </source>
      <filter kubernetes.**>
        @type kubernetes_metadata
      </filter>
    file-fluent.conf: |-
      <match kubernetes.var.log.containers.**fluentd**.log>
        @type null
      </match>
      <match kubernetes.var.log.containers.**kube-system**.log>
        @type null
      </match>
      <match kubernetes.**>
        @type file
        path /tmp/obp.log
      </match>
    elastic-fluent.conf: |-
      <match kubernetes.var.log.containers.**fluentd**.log>
        @type null
      </match>
      <match kubernetes.var.log.containers.**kube-system**.log>
        @type null
      </match>
      <match kubernetes.**>
        @type elasticsearch
        host "#{ENV['FLUENT_ELASTICSEARCH_HOST'] || 'elasticsearch.elastic-kibana'}"
        port "#{ENV['FLUENT_ELASTICSEARCH_PORT'] || '9200'}"
        index_name fluentd-k8s-3
        type_name fluentd
        include_timestamp true
      </match>
  ```
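The tail expression in pods-nofilter.conf splits each container log line into time, stream, logtag, and log fields. A rough illustration of that split with standard shell tools; the sample line is invented, and real lines come from /var/log/containers.

```shell
# A sample container log line in the "time stream logtag log" shape that
# the Fluentd tail regex captures (the line itself is made up)
line='2024-05-01T10:00:00.000000000Z stdout F peer node started'

stream=$(echo "$line" | awk '{print $2}')   # corresponds to <stream>
logtag=$(echo "$line" | awk '{print $3}')   # corresponds to <logtag>
log=$(echo "$line" | cut -d' ' -f4-)        # corresponds to <log>
echo "$stream / $logtag / $log"             # prints: stdout / F / peer node started
```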
- Use the following command to create a DaemonSet object for Fluentd. This command creates a Fluentd pod on each node.
  ```
  kubectl create -f fluentd.yaml -n fluentd
  ```
  Use the following fluentd.yaml file with the command.
  ```yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: fluentd
    namespace: fluentd
    labels:
      k8s-app: fluentd-logging
      version: v1
  spec:
    selector:
      matchLabels:
        k8s-app: fluentd-logging
        version: v1
    template:
      metadata:
        labels:
          k8s-app: fluentd-logging
          version: v1
      spec:
        serviceAccount: fluentd
        serviceAccountName: fluentd
        tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        containers:
        - name: fluentd1
          imagePullPolicy: "Always"
          image: fluent/fluentd-kubernetes-daemonset:v1.16.2-debian-elasticsearch7-1.1
          env:
          - name: FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.elastic-kibana"
          - name: FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
          - name: fluentd-config
            mountPath: /fluentd/etc
          - name: logs
            mountPath: /tmp
          - name: varlog
            mountPath: /var/log
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
            readOnly: true
        terminationGracePeriodSeconds: 30
        volumes:
        - name: fluentd-config
          configMap:
            name: fluentd-config
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: logs
          hostPath:
            path: /tmp
  ```
  The Oracle Blockchain Platform logs are available in the /tmp directory of the Fluentd pod or the Kubernetes node.
- To send the logs to Elastic Stack, create a Kubernetes namespace called elastic-kibana.
- Use the following command to create a deployment for Elastic Stack and to expose it as a service.
  ```
  kubectl create -f elastic.yaml -n elastic-kibana
  ```
  Use the following elastic.yaml file with the command.
  ```yaml
  apiVersion: v1
  kind: Namespace
  metadata:
    name: elastic-kibana
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: elasticsearch
    namespace: elastic-kibana
    labels:
      app: elasticsearch
  spec:
    selector:
      matchLabels:
        app: elasticsearch
    replicas: 1
    template:
      metadata:
        labels:
          app: elasticsearch
      spec:
        initContainers:
        - name: vm-max-fix
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        containers:
        - name: elasticsearch
          image: elasticsearch:7.9.1
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 9200
          env:
          - name: node.name
            value: "elasticsearch"
          - name: cluster.initial_master_nodes
            value: "elasticsearch"
          - name: bootstrap.memory_lock
            value: "false"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: elasticsearch
    namespace: elastic-kibana
    labels:
      app: elasticsearch
  spec:
    type: ClusterIP
    selector:
      app: elasticsearch
    ports:
    - protocol: TCP
      name: http
      port: 9200
      targetPort: 9200
  ```
- You can then use the following commands to examine the log data in the Elasticsearch index.
  ```
  curl -X GET "localhost:9200/_cat/indices/fluentd-k8s-*?v=true&s=index&pretty"
  curl -X GET "localhost:9200/fluentd-k8s-3/_search?pretty=true"
  ```
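The first curl command returns a table of matching indices. A sketch of filtering that tabular output for the index names; the sample output below is fabricated for illustration, and real output comes from your Elasticsearch instance.

```shell
f=$(mktemp)
# Hypothetical output of the _cat/indices call (columns: health status index ...)
cat > "$f" <<'EOF'
health status index         uuid   pri rep
yellow open   fluentd-k8s-3 x1y2z3 1   1
EOF

# Extract the fluentd-k8s-* index names from the third column
awk '$3 ~ /^fluentd-k8s-/ {print $3}' "$f"
```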
- You can also use Fluentd to store the logs on a local block volume on each node.
  - Create a block volume for each node, attach the volume, and create a directory called /u01.
  - Format the attached block volume for the ext4 file system.
  - Mount the /u01 directory on the device path.
  - Change the Fluentd deployment file (fluentd.yaml) so that the logs volume is /u01, not /tmp, as shown in the following snippet.
    ```yaml
    - name: logs
      hostPath:
        path: /u01
    ```
  - Run the following command to apply the Fluentd deployment.
    ```
    kubectl apply -f fluentd.yaml -n fluentd
    ```
  The logs are now visible in the /u01 directory on each node.
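The edit to fluentd.yaml can also be scripted. A minimal sketch using sed, assuming the logs volume is the only place the hostPath /tmp appears in the file; review the result before applying it to your deployment.

```shell
f=$(mktemp)
# Illustrative fragment of the logs volume from fluentd.yaml
cat > "$f" <<'EOF'
      - name: logs
        hostPath:
          path: /tmp
EOF

# Rewrite the hostPath from /tmp to /u01
sed -i 's|path: /tmp|path: /u01|' "$f"
cat "$f"
```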
Set the Log Level for Operator Pods
You can set the log level for the hlf-operator and obp-operator pods. The steps to set the log level differ depending on whether Oracle Blockchain Platform is installed. If Oracle Blockchain Platform Enterprise Edition is not yet installed, complete the following steps.
- Open the corresponding deployment.yaml file for editing. The file for the hlf-operator pod is in the following location:
  ```
  distribution_package_location/distribution-package/operators/helmcharts/hlf-operator/templates/deployment.yaml
  ```
  The file for the obp-operator pod is in the following location:
  ```
  distribution_package_location/distribution-package/operators/helmcharts/obp-operator/templates/deployment.yaml
  ```
- Add the following line to the file. As shown in the comment, you can set the log level to debug, info, or error. In the following example the log level is set to info.
  ```
  --zap-log-level=info # debug, info, error
  ```
  After you add the line, the args section looks like the following example.
  ```
  containers:
  - args:
    - --enable-leader-election
    - --zap-log-level=info # debug, info, error
  ```
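If you prefer not to edit the file by hand, the argument can be appended with a one-line sed. A sketch against an illustrative fragment of deployment.yaml, assuming --enable-leader-election appears exactly once; verify the result before deploying.

```shell
f=$(mktemp)
# Illustrative args fragment from the operator deployment.yaml
cat > "$f" <<'EOF'
      containers:
      - args:
        - --enable-leader-election
EOF

# Append the log-level argument after --enable-leader-election
sed -i 's|- --enable-leader-election|&\n        - --zap-log-level=info # debug, info, error|' "$f"
cat "$f"
```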
If Oracle Blockchain Platform is already installed, complete the following step.
- Use the following commands to edit the deployment definitions for the hlf-operator and obp-operator pods. Add or update the argument that configures the log level for the manager container under the pod template specification.
  ```
  kubectl edit deployment -n obp-cp obp-operator
  kubectl edit deployment -n obp-cp hlf-operator-controller-manager
  ```
  The args section looks like the following example.
  ```
  containers:
  - args:
    - --enable-leader-election
    - --zap-log-level=info # debug, info, error
  ```