13.3 Verifying the Pods

  1. Run the following command to check that the logstash pod has been created correctly:
    kubectl get pods -n <namespace>
    For example:
    kubectl get pods -n oigns
    The output should look similar to the following:
    NAME                                            READY   STATUS      RESTARTS   AGE
    governancedomain-adminserver                    1/1     Running     0          90m
    governancedomain-oim-server1                    1/1     Running     0          88m
    governancedomain-soa-server1                    1/1     Running     0          88m
    oig-logstash-77fbbc66f8-lsvcw                   1/1     Running     0          3m25s
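
    If the namespace contains many pods, you can optionally filter the output to show only the logstash pod. The grep pattern below assumes the pod name starts with oig-logstash, as in the sample output above:
    kubectl get pods -n oigns | grep oig-logstash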
    
    Wait a couple of minutes to make sure the logstash pod has not had any failures or restarts. If the pod fails, you can view the pod logs using:
    kubectl logs -f oig-logstash-<pod> -n oigns
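
    If the logs do not reveal the cause, you can also check the pod events, for example:
    kubectl describe pod oig-logstash-<pod> -n oigns
    The Events section at the end of the output shows failures such as image pull errors or problems mounting the ConfigMap.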
    
    Most errors are caused by misconfiguration of logstash_cm.yaml or logstash.yaml, usually because an incorrect value was set or the certificate was not pasted with the correct indentation. An optional check for such mistakes is shown after the delete commands below.
    If the pod has errors, delete the pod and ConfigMap as follows:
    kubectl delete -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash.yaml
    kubectl delete -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash_cm.yaml
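
    As an optional check that is not part of the original steps, you can validate the corrected files client-side before recreating anything. This catches syntax errors and many indentation mistakes, but not incorrect values, and assumes kubectl 1.18 or later for the --dry-run=client option:
    kubectl apply --dry-run=client -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash_cm.yaml
    kubectl apply --dry-run=client -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash.yaml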
    
    Once you have resolved the issue in the YAML files, run the commands outlined earlier to recreate the ConfigMap and logstash pod.
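
    For reference, a minimal sketch of the recreation, assuming the earlier steps apply the ConfigMap first and then the logstash deployment using the same files (refer to the earlier section for the exact commands):
    kubectl apply -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash_cm.yaml
    kubectl apply -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash.yaml
    kubectl get pods -n oigns
    Confirm that the new oig-logstash pod reaches the Running state with no restarts.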