14.3 Verifying the Pods

  1. Run the following command to check that the logstash pod has been created correctly:
    kubectl get pods -n <namespace>
    For example:
    kubectl get pods -n oamns
    
    The output should look similar to the following:
     NAME                           READY   STATUS    RESTARTS   AGE
     accessdomain-adminserver       1/1     Running   0          18h
     accessdomain-oam-policy-mgr1   1/1     Running   0          18h
     accessdomain-oam-server1       1/1     Running   1          18h
     oam-logstash-bbbdf5876-85nkd   1/1     Running   0          4m23s
    
     Wait a couple of minutes and make sure the logstash pod has not had any failures or restarts. If the pod fails, you can view the pod logs using:
    kubectl logs -f oam-logstash-<pod> -n oamns
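     If you want to scan for restarts across all pods in the namespace rather than reading the table by eye, a small helper like the following can filter the output of kubectl get pods. This is a sketch, not part of the product tooling; the function name flag_restarts is illustrative.

```shell
# Print any pod whose RESTARTS count (4th column of "kubectl get pods"
# output) is greater than zero, skipping the header row.
flag_restarts() {
  awk 'NR > 1 && $4 > 0 { print $1 " restarted " $4 " time(s)" }'
}

# Usage against a live cluster (namespace oamns, as in the example above):
#   kubectl get pods -n oamns | flag_restarts
```

     A pod that restarts once shortly after creation may recover on its own, but repeated restarts usually indicate a configuration problem.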
     Most errors occur due to misconfiguration of logstash_cm.yaml or logstash.yaml, usually because an incorrect value was set or the certificate was not pasted with the correct indentation.
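     Since inconsistent certificate indentation is a common cause of failure, you can sanity-check the pasted block before applying the file. The following is a sketch, not part of the product tooling: it assumes the certificate in logstash_cm.yaml is delimited by the standard BEGIN/END CERTIFICATE markers, and the function name check_cert_indent is illustrative.

```shell
# Report any line inside the BEGIN/END CERTIFICATE block whose leading
# indentation differs from the first line of the block.
check_cert_indent() {
  awk '
    /BEGIN CERTIFICATE/ { inblock = 1 }
    inblock {
      match($0, /^ */)                     # RLENGTH = leading-space count
      if (!seen) { indent = RLENGTH; seen = 1 }
      else if (RLENGTH != indent) { print "Inconsistent indent at line " NR; bad = 1 }
    }
    /END CERTIFICATE/ { inblock = 0 }
    END { exit bad }
  ' "$1"
}

# Usage:
#   check_cert_indent $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash_cm.yaml
```

     The function prints nothing and exits 0 when the indentation is consistent, which makes it easy to use in a pre-apply check.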
    If the pod has errors, delete the pod and ConfigMap as follows:
    kubectl delete -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash.yaml
    
    kubectl delete -f $WORKDIR/kubernetes/elasticsearch-and-kibana/logstash_cm.yaml
    

     Once you have resolved the issue in the YAML files, run the commands outlined earlier to recreate the ConfigMap and logstash pod.