7 Logging

You can use Oracle Cloud Infrastructure (OCI) or external tools to configure persistent logging for Oracle Blockchain Platform Enterprise Edition.

Overview

Oracle Blockchain Platform Enterprise Edition is based on Kubernetes, where logs are stored locally on each pod. To prevent logs from being lost when a pod is deleted, you must set up persistent logging, which stores logs in a central location. There are two methods for persistent logging: you can use an external logging tool such as Fluentd with Elastic Stack, or, if you are running on Oracle Kubernetes Engine, you can use the centralized logging solution supported by Oracle Cloud Infrastructure (OCI).

Persistent Logging with OCI

To store logs centrally using OCI, you define log groups and configure agents to parse the logs. You can optionally archive the logs to the Object Storage service. Before you configure persistent logging with OCI, your deployment must meet the following requirements.
  • A dynamic group in the Kubernetes compartment. To create a dynamic group, click Identity & Security in the navigation menu. Under Identity, click Domains and then click Create dynamic group. Add the following to your dynamic group in the Matching rules section, substituting the Oracle Cloud ID for your compartment.
    instance.compartment.id = '<compartment_ocid>'
    For example:
    instance.compartment.id = 'ocid1.compartment.oc1..aaaaaaaa4ws3242343243244nyb423432rwqsxigt2sia'
  • A policy that allows the dynamic group to interact with the logging service. To create a policy, click Identity & Security in the navigation menu. Under Identity, click Policies and then click Create Policy.
    Allow dynamic-group <my-group> to use log-content in compartment <target_compartment_name>
    For example:
    Allow dynamic-group okelogging to use log-content in compartment BlockchainTeam
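If you prefer to script these prerequisites, the dynamic group and policy can also be created with the OCI CLI. The following is a minimal sketch; the group name, policy name, descriptions, and compartment OCIDs are placeholders that you substitute for your tenancy.

```shell
# Create the dynamic group with the matching rule for the Kubernetes compartment.
# All names and OCIDs below are placeholders.
oci iam dynamic-group create \
  --name okelogging \
  --description "Instances in the Kubernetes compartment" \
  --matching-rule "instance.compartment.id = '<compartment_ocid>'"

# Create the policy that allows the dynamic group to use the logging service.
oci iam policy create \
  --compartment-id <target_compartment_ocid> \
  --name okelogging-policy \
  --description "Allow dynamic group to write log content" \
  --statements '["Allow dynamic-group okelogging to use log-content in compartment <target_compartment_name>"]'
```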
After you have satisfied the prerequisites, complete the following steps to store logs centrally using OCI.
  1. Click the menu icon in the upper left corner, search for log, and then select Logs.
  2. Create a log group. Under Logging, select Log Groups and then click Create Log Group.
  3. Create a custom log. Under Logging, select Logs and then click Create custom log to open the Create custom log wizard. Select the log group that you created previously.
  4. On the second page of the Create custom log wizard, create an agent configuration for the custom log, specifying the Kubernetes compartment and the dynamic group.
  5. In the Configure log inputs section of the Agent configuration page, select Log path from the Input type list, and then enter the following file path for File paths. This path is the default for application containers and includes all container logs, including system and service containers.
    /var/log/pods/*/*/*.log
  6. Wait until logs are ingested. Typically, logs are ingested in 3-5 minutes.
  7. Select Logs and then navigate to the custom log and click Explore Log. You can analyze, parse, and filter the logs.
  8. You can also use OCI to store the logs in the Object Storage service.
    1. Create a connector. Under Logging, select Connectors and then click Create Connector. Select Logging as the Source and Object Storage as the Target.
    2. Configure the source and target as needed.
    3. Under the Enable logs section, set Enable Log to Enabled for the connector that you created. The Create Log panel is displayed, with a default value for log retention time.
    4. Wait until logs are ingested. Typically, logs are ingested in 3-5 minutes. You can then see read and write operations in the connector logs. Logs are now being written to the Object Storage service.

For more information, see Monitor Kubernetes and OKE clusters with OCI Logging Analytics.

Persistent Logging with External Tools

You can store logs centrally using Fluentd and Elastic Stack. The following steps have been tested with Fluentd v1.16.2 and Elasticsearch 7.9.1. Use these versions or later when you complete these steps.
  1. Create a Kubernetes namespace called fluentd.
  2. Use the following command to create a role-based access control resource.
    kubectl create -f fluentd-rbac.yaml -n fluentd
    Use the following fluentd-rbac.yaml file with the command.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: fluentd
      namespace: fluentd
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: fluentd
      namespace: fluentd
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - namespaces
      verbs:
      - get
      - list
      - watch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: fluentd
    roleRef:
      kind: ClusterRole
      name: fluentd
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: ServiceAccount
      name: fluentd
      namespace: fluentd
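After the resources are created, you can optionally confirm that the service account received the intended read access. This is a quick check, assuming kubectl is configured against your cluster:

```shell
# Each command should print "yes" if the ClusterRoleBinding took effect.
kubectl auth can-i list pods --as=system:serviceaccount:fluentd:fluentd --all-namespaces
kubectl auth can-i watch namespaces --as=system:serviceaccount:fluentd:fluentd
```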
  3. Use the following command to create a ConfigMap object for Fluentd or Elastic Stack.
    kubectl create -f fluentd-configmap_removefilter_ascii.yaml -n fluentd
    Use the following fluentd-configmap_removefilter_ascii.yaml file with the command.
    In the following file, remove the number sign (#) to uncomment only one of the following lines.
    • Uncomment @include file-fluent.conf if you are writing to a file in the /tmp/obp.log path.
    • Uncomment @include elastic-fluent.conf if you are writing to Elasticsearch.
    The following file shows an example of writing to the /tmp/obp.log path.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: fluentd-config
      namespace: fluentd
    data:
      fluent.conf: |-
        ################################################################
        # This source gets all logs from local docker host
        #@include pods-kind-fluent.conf
        #@include pods-fluent.conf
        @include pods-nofilter.conf
        @include file-fluent.conf
        #@include elastic-fluent.conf
      pods-nofilter.conf: |-
        <source>
          @type tail
          path /var/log/containers/*.log
          format /^(?<time>.+) (?<stream>stdout|stderr) (?<logtag>.)? (?<log>.*)$/
          pos_file /var/log/fluentd-containers.log.pos
          tag kubernetes.*
          read_from_head true
        </source>
        <filter kubernetes.**>
         @type kubernetes_metadata
        </filter>
      file-fluent.conf: |-
         <match kubernetes.var.log.containers.**fluentd**.log>
          @type null
         </match>
         <match kubernetes.var.log.containers.**kube-system**.log>
           @type null
         </match>
         <match kubernetes.**>
           @type file
           path /tmp/obp.log
         </match>
      elastic-fluent.conf: |-
        <match kubernetes.var.log.containers.**fluentd**.log>
          @type null
        </match>
        <match kubernetes.var.log.containers.**kube-system**.log>
          @type null
        </match>
        <match kubernetes.**>
          @type elasticsearch
          host "#{ENV['FLUENT_ELASTICSEARCH_HOST'] || 'elasticsearch.elastic-kibana'}"
          port "#{ENV['FLUENT_ELASTICSEARCH_PORT'] || '9200'}"
          index_name fluentd-k8s-3
          type_name fluentd
          include_timestamp true
        </match>
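The format expression in pods-nofilter.conf parses CRI-style container log lines into time, stream, log tag, and message fields. As a quick sanity check outside Fluentd, an equivalent POSIX extended regular expression can be applied to a sample line with sed. The log line below is made up for illustration.

```shell
# A hypothetical CRI-format container log line: timestamp, stream, tag, message.
line='2024-05-01T10:15:30.123456789Z stdout F peer started successfully'

# Same structure as the Fluentd tail format, as a POSIX extended regex.
re='^([^ ]+) (stdout|stderr) (.) (.*)$'

# Extract the individual capture groups.
time=$(printf '%s\n' "$line" | sed -E "s/$re/\1/")
stream=$(printf '%s\n' "$line" | sed -E "s/$re/\2/")
log=$(printf '%s\n' "$line" | sed -E "s/$re/\4/")

echo "time=$time stream=$stream log=$log"
```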
  4. Use the following command to create a DaemonSet object for Fluentd. This command creates a Fluentd pod on each node.
    kubectl create -f fluentd.yaml -n fluentd
    Use the following fluentd.yaml file with the command.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      namespace: fluentd
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      selector:
        matchLabels:
          k8s-app: fluentd-logging
          version: v1
      template:
        metadata:
          labels:
            k8s-app: fluentd-logging
            version: v1
        spec:
          serviceAccount: fluentd
          serviceAccountName: fluentd
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          - key: node-role.kubernetes.io/control-plane
            effect: NoSchedule 
          containers:
          - name: fluentd1
            imagePullPolicy: "Always"
            image: fluent/fluentd-kubernetes-daemonset:v1.16.2-debian-elasticsearch7-1.1
            env:
              - name:  FLUENT_ELASTICSEARCH_HOST
                value: "elasticsearch.elastic-kibana"
              - name:  FLUENT_ELASTICSEARCH_PORT
                value: "9200"
            resources:
              limits:
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 200Mi
            volumeMounts:
            - name: fluentd-config
              mountPath: /fluentd/etc
            - name: logs
              mountPath: /tmp
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
          terminationGracePeriodSeconds: 30
          volumes:
          - name: fluentd-config
            configMap:
              name: fluentd-config
          - name: varlog
            hostPath:
              path: /var/log
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
          - name: logs
            hostPath:
              path: /tmp
    The Oracle Blockchain Platform logs are available in the /tmp directory of the Fluentd pod or the Kubernetes node.
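Before continuing, it can be useful to confirm that a Fluentd pod is running on each node and that log files are being written. A quick check, with pod names that will differ in your cluster:

```shell
# Verify that a Fluentd pod is scheduled on each node.
kubectl get pods -n fluentd -l k8s-app=fluentd-logging -o wide

# List the collected log files from inside one of the pods.
# Replace fluentd-xxxxx with an actual pod name from the previous command.
kubectl exec -n fluentd fluentd-xxxxx -- ls /tmp
```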
  5. To send the logs to Elastic Stack, create a Kubernetes namespace called elastic-kibana.
  6. Use the following command to create a deployment for Elastic Stack and to expose it as a service.
    kubectl create -f elastic.yaml -n elastic-kibana
    Use the following elastic.yaml file with the command.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: elastic-kibana
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: elasticsearch
      namespace: elastic-kibana
      labels:
        app: elasticsearch
    spec:
      selector:
        matchLabels:
          app: elasticsearch
      replicas: 1
      template:
        metadata:
          labels:
            app: elasticsearch
        spec:
          initContainers:
          - name: vm-max-fix
            image: busybox
            command: ["sysctl", "-w", "vm.max_map_count=262144"]
            securityContext:
              privileged: true
          containers:
          - name: elasticsearch
            image: elasticsearch:7.9.1
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9200
            env:
            - name: node.name
              value: "elasticsearch"
            - name: cluster.initial_master_nodes
              value: "elasticsearch"
            - name: bootstrap.memory_lock
              value: "false"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: elasticsearch
      namespace: elastic-kibana
      labels:
        app: elasticsearch
    spec:
      type: ClusterIP
      selector:
        app: elasticsearch
      ports:
        - protocol: TCP
          name: http
          port: 9200
          targetPort: 9200
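Because the elasticsearch service is of type ClusterIP, it is not reachable from outside the cluster by default. One way to query it from your workstation is to forward the service port locally; this sketch assumes kubectl is configured against your cluster, and the command runs in a separate terminal.

```shell
# Forward local port 9200 to the elasticsearch service in the cluster.
kubectl port-forward -n elastic-kibana svc/elasticsearch 9200:9200
```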
  7. You can then use the following commands to examine the log data in the Elasticsearch index.
    curl -X GET "localhost:9200/_cat/indices/fluentd-k8s-*?v=true&s=index&pretty"
    curl -X GET "localhost:9200/fluentd-k8s-3/_search?pretty=true"
  8. You can also use Fluentd to store the logs on a block volume that is attached locally to each node.
    1. Create a block volume for each node, attach the volume, and create a directory called /u01.
    2. Format the attached block volume with the ext4 file system.
    3. Mount the block volume device on the /u01 directory.
    4. Change the Fluentd deployment file (fluentd.yaml) so that the logs volume is /u01, not /tmp, as shown in the following snippet.
          - name: logs
            hostPath:
              path: /u01
    5. Run the following command to apply the Fluentd deployment.
      kubectl apply -f fluentd.yaml -n fluentd
    6. Verify that the logs are visible in the /u01 directory on each node.
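As a sketch, sub-steps 1 through 3 might look like the following on a node. The device path /dev/sdb is an example only; the actual path varies by environment.

```shell
# All paths are examples; confirm the device name with lsblk before formatting.
sudo mkfs.ext4 /dev/sdb          # format the attached block volume
sudo mkdir -p /u01               # create the mount point
sudo mount /dev/sdb /u01         # mount the volume
# Optionally persist the mount across reboots:
echo '/dev/sdb /u01 ext4 defaults 0 2' | sudo tee -a /etc/fstab
```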

Set the Log Level for Operator Pods

You can set the log level for the hlf-operator and obp-operator pods. The steps to set the log level are different depending on whether Oracle Blockchain Platform is installed. If Oracle Blockchain Platform Enterprise Edition is not yet installed, complete the following steps.

  1. Open the corresponding deployment.yaml file for editing. The file for the hlf-operator pod is in the following location:
    distribution_package_location/distribution-package/operators/helmcharts/hlf-operator/templates/deployment.yaml
    The file for the obp-operator pod is in the following location:
    distribution_package_location/distribution-package/operators/helmcharts/obp-operator/templates/deployment.yaml
  2. Add the following line to the file. As shown in the comment, you can set the log level to debug, info, or error. In the following example the log level is set to info.
    --zap-log-level=info # debug, info, error
After you edit the file, that section of the file might look similar to the following text:
containers:
  - args:
      - --enable-leader-election
      - --zap-log-level=info # debug, info, error

If Oracle Blockchain Platform is already installed, complete the following step.

  • Use the following commands to edit the deployment definitions for the hlf-operator and obp-operator pods. Add or update the argument that configures the log level for the manager container under the pod template specification.
    kubectl edit deployment -n obp-cp obp-operator
    kubectl edit deployment -n obp-cp hlf-operator-controller-manager
After you update the container arguments, the deployment definition might look similar to the following text:
containers:
  - args:
      - --enable-leader-election
      - --zap-log-level=info # debug, info, error
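If you prefer a non-interactive change, the same argument can be appended with a JSON patch. This is a sketch that assumes the manager container is the first container in the pod template; verify the container index in your deployment before applying, and repeat the patch for hlf-operator-controller-manager as needed.

```shell
# Append --zap-log-level=debug to the first container's args (index assumed).
kubectl patch deployment obp-operator -n obp-cp --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--zap-log-level=debug"}]'
```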