12 Installing the Monitoring and Visualization Software
This chapter includes the following topics:
- Installing the Logging and Visualization Software
Elasticsearch enables you to aggregate logs from various products on your system. You can analyze the logs and use the Kibana console to visualize the data in the form of charts and graphs. Elasticsearch recommends the best practice of using API keys for connecting to a centralized Elasticsearch deployment.
- Installing the Monitoring Software
Prometheus and Grafana help monitor your environment. The instructions given in this chapter are for a simple deployment by using the kube-prometheus installer.
Parent topic: Configuring the Enterprise Deployment
Installing the Logging and Visualization Software
This section includes the following topics:
- Kubernetes Services
The Kubernetes services are created as part of the Monitoring and Visualization deployment process.
- Variables Used in this Section
This section provides instructions to create a number of files. These sample files contain variables that you need to substitute with values applicable to your deployment.
- Prerequisites
The latest releases of Elasticsearch use the Elasticsearch Operator. The Elasticsearch Operator deploys an Elasticsearch cluster by using Kubernetes stateful sets. Stateful sets create dynamic persistent volumes.
- Installing Elasticsearch (ELK) Stack and Kibana
The instructions help you deploy a simple ELK cluster. This is sufficient for testing. In production environments, you should obtain appropriate licenses from the vendor.
Parent topic: Installing the Monitoring and Visualization Software
Kubernetes Services
The Kubernetes services are created as part of the Monitoring and Visualization deployment process.
Table 12-1 Kubernetes Services
Service Name | Type | Service Port | Mapped Port |
---|---|---|---|
| ClusterIP | 9600 | |
| NodePort | 31800 | 6501 |
| NodePort | 31920 | 9200 |
Note:
The mapped port is randomly assigned at install time. The values provided in this table are examples only.
Parent topic: Installing the Logging and Visualization Software
Variables Used in this Section
This section provides instructions to create a number of files. These sample files contain variables that you need to substitute with values applicable to your deployment.
Variables are formatted as <VARIABLE_NAME>. The following table provides the values you should set for each of these variables.
Table 12-2 List of Variables
Variable | Sample Value | Description |
---|---|---|
<ELKNS> | | The name of the Elasticsearch namespace. |
<ELK_OPER_VER> | | The version of the Elasticsearch Operator. |
<ELK_VER> | | The version of Elasticsearch/Kibana you want to install. |
<ELK_USER> | | The name of the user for Logstash to access Elasticsearch. |
<ELK_PASSWORD> | | The password for ELK_USER. |
<ELK_K8> | | The Kubernetes port used to access Elasticsearch externally. |
<ELK_KIBANA_K8> | | The Kubernetes port used to access Kibana externally. |
<DH_USER> | | The user name for Docker Hub. |
<DH_PWD> | | The password for Docker Hub. |
<PV_SERVER> | | The name of the NFS server. Note: This name should be resolvable inside the Kubernetes cluster. |
<ELK_SHARE> | | The NFS mount point for the ELK persistent volumes. |
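The variable substitution described above can be scripted. The following sketch shows one way to do it with sed; the file paths and the substituted values elkns and 8.11.0 are hypothetical examples, not values mandated by this guide:

```shell
# Create a small template containing placeholder variables.
# The file path and substituted values below are illustrative only.
cat > /tmp/sample.yaml <<'EOF'
namespace: <ELKNS>
version: <ELK_VER>
EOF

# Replace each <VARIABLE_NAME> placeholder with a deployment-specific value.
sed -e 's/<ELKNS>/elkns/' -e 's/<ELK_VER>/8.11.0/' \
    /tmp/sample.yaml > /tmp/sample-resolved.yaml

cat /tmp/sample-resolved.yaml
```

The same pattern extends to every variable in Table 12-2 by adding further `-e` expressions.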
Parent topic: Installing the Logging and Visualization Software
Prerequisites
The latest releases of Elasticsearch use the Elasticsearch Operator. The Elasticsearch Operator deploys an Elasticsearch cluster by using Kubernetes stateful sets. Stateful sets create dynamic persistent volumes.
Before installing Elasticsearch, ensure that you have a default Kubernetes storage class defined for your environment and that it allows dynamic storage. Each vendor has its own storage provider, but it may not be configured to provide dynamic storage allocation. You can check by using the following command:
kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
oci (default) kubernetes.io/is-default-class Delete Immediate false 6d21h
If you do not have a storage provider that allows you to dynamically create storage, you can use an external NFS storage provider such as NFS subdir external provisioner.
For more information on storage classes, see Storage Classes.
For more information on NFS subdir external provisioner, see Kubernetes NFS Subdir External Provisioner.
For completeness, the following steps show how to install Elasticsearch and Kibana by using NFS subdir:
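If you use the NFS subdir external provisioner, it is installed through its Helm chart. A minimal values file might look like the following sketch; the parameter names (nfs.server, nfs.path, storageClass.defaultClass) follow the chart's documented values, but verify them against the chart version you deploy:

```yaml
# Hedged sketch: Helm values for the nfs-subdir-external-provisioner chart.
nfs:
  server: <PV_SERVER>   # NFS server name, resolvable inside the cluster
  path: <ELK_SHARE>     # export created for the ELK persistent volumes
storageClass:
  defaultClass: true    # make this the default class so dynamic claims bind to it
```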
Parent topic: Installing the Logging and Visualization Software
Creating a Filesystem for the ELK Data
Before you can deploy the NFS client, you need to create a mount point/export on your NFS storage for storing your ELK data. This mount point is used by the NFS subdir external provider.
Parent topic: Prerequisites
Installing Elasticsearch (ELK) Stack and Kibana
For information about setting up production security, see Configure Security for the Elastic Stack.
This section includes the following topics:
- Setting Up a Product Specific Work Directory
Before you begin the installation, you should have already downloaded and staged the ELK container image or should be using the Oracle Container Registry and the code repository.
- Creating a Kubernetes Namespace
- Creating a Kubernetes Secret for Docker Hub Images
- Installing the Elasticsearch Operator
- Creating an Elasticsearch Cluster
- Creating a Kibana Cluster
- Creating the Kubernetes Services
- Granting Access to Logstash
- Accessing the Kibana Console
- Creating a Kibana Index
Parent topic: Installing the Logging and Visualization Software
Setting Up a Product Specific Work Directory
Before you begin the installation, you should have already downloaded and staged the ELK container image or should be using the Oracle Container Registry and the code repository.
See Identifying and Obtaining Software Distributions for an Enterprise Deployment. This section describes the procedure to copy the downloaded sample deployment scripts to a temporary working directory for ELK.
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Creating a Kubernetes Namespace
The Kubernetes namespace is used to store the Elasticsearch stack.
Use the following command to create the namespace for ELK:
kubectl create namespace <ELKNS>
kubectl create namespace elkns
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Creating a Kubernetes Secret for Docker Hub Images
This secret allows Kubernetes to pull an image from
hub.docker.com
which contains the Elasticsearch images.
hub.docker.com
.
Use the following command to create a Kubernetes secret for
hub.docker.com
:
kubectl create secret docker-registry dockercred --docker-server="https://index.docker.io/v1/" --docker-username="<DH_USER>" --docker-password="<DH_PWD>" --namespace=<ELKNS>
kubectl create secret docker-registry dockercred --docker-server="https://index.docker.io/v1/" --docker-username="username" --docker-password="mypassword" --namespace=elkns
If you are using your own container registry, create the secret by using the following command instead:
kubectl create secret -n <ELKNS> docker-registry <REGISTRY_SECRET_NAME> --docker-server=<REGISTRY_ADDRESS> --docker-username=<REG_USER> --docker-password=<REG_PWD>
kubectl create secret -n elkns docker-registry regcred --docker-server=iad.ocir.io/mytenancy --docker-username=mytenancy/oracleidentitycloudservice/myemail@email.com --docker-password=<password>
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Installing the Elasticsearch Operator
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Creating an Elasticsearch Cluster
To create an Elasticsearch cluster using the Elasticsearch Operator, perform the following steps:
- Creating a Configuration File
- Creating the Elasticsearch Cluster
- Copying the Elasticsearch Certificate
- Elasticsearch Access Details
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Creating a Configuration File
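A minimal configuration file for an Elasticsearch cluster managed by the ECK operator might look like the following sketch. The file name, the cluster name elasticsearch, and the nodeSet name default are assumptions, although they are consistent with the pod name elasticsearch-es-default-0 used later in this chapter:

```yaml
# Hedged sketch of <WORKDIR>/ELK/elasticsearch.yaml for the ECK operator.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: <ELKNS>
spec:
  version: <ELK_VER>
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false   # avoids raising vm.max_map_count on worker nodes
```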
Parent topic: Creating an Elasticsearch Cluster
Creating the Elasticsearch Cluster
Parent topic: Creating an Elasticsearch Cluster
Copying the Elasticsearch Certificate
Logstash requires access to the Elasticsearch CA (Certificate Authority) certificate to connect to the Elasticsearch server. A copy of the certificate is placed into config maps, which are loaded into each namespace from where Logstash runs.
In a production environment, it is recommended that you use production certificates. However, if you have allowed Elasticsearch to create its own self-signed certificates, you should copy this certificate to your work directory for easy access later.
Copy the self-signed certificates to your work directory by using the following command:
kubectl cp <ELKNS>/elasticsearch-es-default-0:/usr/share/elasticsearch/config/http-certs/..data/ca.crt <WORKDIR>/ELK/elk.crt
kubectl cp elkns/elasticsearch-es-default-0:/usr/share/elasticsearch/config/http-certs/..data/ca.crt /workdir/ELK/elk.crt
Parent topic: Creating an Elasticsearch Cluster
Elasticsearch Access Details
After the cluster starts, you will need the following information to interact with it:
Parent topic: Creating an Elasticsearch Cluster
Credentials
Access to the Elasticsearch cluster is through the user elastic. You can obtain the password for the user elastic by using the following command:
kubectl get secret elasticsearch-es-elastic-user -n <ELKNS> -o go-template='{{.data.elastic | base64decode}}'
kubectl get secret elasticsearch-es-elastic-user -n elkns -o go-template='{{.data.elastic | base64decode}}'
Parent topic: Elasticsearch Access Details
URL
The URL for sending logs is:
https://elasticsearch-es-http.<ELKNS>.svc.cluster.local:9200/
https://elasticsearch-es-http.elkns.svc.cluster.local:9200/
Parent topic: Elasticsearch Access Details
Creating a Kibana Cluster
To create a Kibana cluster, perform the following steps:
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Creating a Configuration File
Create a file called <WORKDIR>/ELK/kibana.yaml with the following contents:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: <ELKNS>
spec:
  version: <ELK_VER>
  count: 1
  elasticsearchRef:
    name: elasticsearch
Note:
If you are using your own container registry, you should add the following lines to the spec: section above:
  image: iad.ocir.io/mytenancy/idm/kibana/kibana:<ELK_VER>
  podTemplate:
    spec:
      imagePullSecrets:
      - name: regcred
Parent topic: Creating a Kibana Cluster
Creating the Kubernetes Services
If you are using an Ingress controller, it is possible to expose these services through Ingress. However, if you want to send the Ingress log files to Elasticsearch, it makes sense not to make them dependent on each other.
You should create two NodePort Services to access Elasticsearch and Kibana.
- A NodePort service for external Elasticsearch interactions. For example, to send logs from sources outside the cluster and to make API calls to the ELK cluster.
- A NodePort service to access the Kibana console.
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Creating a NodePort Service for Kibana
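A service definition along the following lines could be used. The service name is a hypothetical choice, and the selector label kibana.k8s.elastic.co/name is the one the ECK operator applies to Kibana pods, so verify it against your deployment:

```yaml
# Hedged sketch of a NodePort service exposing the Kibana console.
kind: Service
apiVersion: v1
metadata:
  name: kibana-nodeport      # hypothetical name
  namespace: <ELKNS>
spec:
  type: NodePort
  selector:
    kibana.k8s.elastic.co/name: kibana
  ports:
  - port: 5601               # Kibana's default listen port
    targetPort: 5601
    nodePort: <ELK_KIBANA_K8>
```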
Parent topic: Creating the Kubernetes Services
Creating a NodePort Service for Elasticsearch
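Similarly, a sketch of a service exposing Elasticsearch on <ELK_K8>. The service name is a hypothetical choice, and the selector label elasticsearch.k8s.elastic.co/cluster-name is the one the ECK operator applies to Elasticsearch pods, so verify it against your deployment:

```yaml
# Hedged sketch of a NodePort service exposing Elasticsearch externally.
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-nodeport   # hypothetical name
  namespace: <ELKNS>
spec:
  type: NodePort
  selector:
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch
  ports:
  - port: 9200                   # Elasticsearch REST port
    targetPort: 9200
    nodePort: <ELK_K8>
```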
Parent topic: Creating the Kubernetes Services
Granting Access to Logstash
In a production deployment, you would have enabled Elasticsearch security. Security has two parts - SSL communication between Logstash and Elasticsearch, and an API key or a user name and password combination to gain access.
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Creating a Role and a User for Logstash
Note:
Elasticsearch recommends the use of API keys instead of user names and passwords. For more information, see https://www.elastic.co.
Oracle recommends that you create a dedicated role and user for this purpose.
The commands in this section require that the credentials of the user elastic be encoded. To encode the user elastic, use the following command:
echo -n elastic:<ELASTIC PASSWORD> | base64
To obtain the password for the elastic user, see Credentials.
Parent topic: Granting Access to Logstash
Creating an API Key for Logstash
Elasticsearch recommends that you use an API key to access Elasticsearch. To create an API key by using the command line:
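The request body sent to Elasticsearch's _security/api_key endpoint takes a key name and optional role descriptors. The key name, role name, and index pattern in this sketch are illustrative assumptions, not values from this guide:

```json
{
  "name": "logstash-api-key",
  "role_descriptors": {
    "logstash_writer": {
      "cluster": ["monitor", "manage_index_templates"],
      "indices": [
        {
          "names": ["logstash-*"],
          "privileges": ["create_index", "write", "create"]
        }
      ]
    }
  }
}
```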
Parent topic: Granting Access to Logstash
Accessing the Kibana Console
Access the Kibana console by using the following URL:
http://k8workers.example.com:30094/app/kibana
Log in with the user elastic. You can obtain the password by using the following command:
kubectl get secret elasticsearch-es-elastic-user -n <ELKNS> -o go-template='{{.data.elastic | base64decode}}'
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Creating a Kibana Index
Parent topic: Installing Elasticsearch (ELK) Stack and Kibana
Installing the Monitoring Software
This section provides instructions for a simple deployment by using the kube-prometheus installer. See kube-prometheus. For more information and documentation on the Prometheus product, see Prometheus.
Before starting the installation process, ensure that you use the version of Prometheus that is supported with your Kubernetes release. See the Prometheus/Kubernetes compatibility matrix.
This section includes the following topics:
- Kubernetes Services
- Variables Used in this Section
This section provides instructions to create a number of files. These sample files contain variables that you need to substitute with values applicable to your deployment.
- Installing Prometheus and Grafana
Each of the product chapters shows how to send monitoring data to Prometheus and Grafana. This section explains how to install the Prometheus and Grafana software.
- About Grafana Dashboards
Parent topic: Installing the Monitoring and Visualization Software
Kubernetes Services
The Kubernetes services created as part of the installation are:
Service Name | Type | Service Port | Mapped Port |
---|---|---|---|
| NodePort | 32101 | 9090 |
| NodePort | 32100 | 3000 |
| NodePort | 32102 | 9093 |
Parent topic: Installing the Monitoring Software
Variables Used in this Section
This section provides instructions to create a number of files. These sample files contain variables which you need to substitute with values applicable to your deployment.
Variables are formatted as <VARIABLE_NAME>. The following table provides the values you should set for each of these variables.
Table 12-3 List of Variables
Variable | Sample Value | Description |
---|---|---|
<PROMNS> | | The name of the Kubernetes namespace to use for the deployment. |
<IMAGE_VER> | | The version of Prometheus to install. |
<PROM_GRAF_K8> | | The Kubernetes port used to access Grafana externally. |
<PROM_K8> | | The Kubernetes port used to access Prometheus externally. |
<PROM_ALERT_K8> | | The Kubernetes port used to access Alert Manager externally. |
Parent topic: Installing the Monitoring Software
Installing Prometheus and Grafana
Each of the product chapters shows how to send monitoring data to Prometheus and Grafana. This section explains how to install the Prometheus and Grafana software.
The installation process consists of the following steps:
- Setting Up a Product Specific Work Directory
- Downloading the Prometheus Installer
- Creating a Kubernetes Namespace
- Creating a Helm Override File
- Deploying Prometheus and Grafana
- Validating the Installation
Parent topic: Installing the Monitoring Software
Setting Up a Product Specific Work Directory
This section describes the procedure to copy the downloaded sample deployment scripts to a temporary working directory for Prometheus.
Note:
The same set of sample files is used by several products in this guide. To avoid having to download them each time, the files are staged in a non-product specific working directory.
Parent topic: Installing Prometheus and Grafana
Downloading the Prometheus Installer
Parent topic: Installing Prometheus and Grafana
Creating a Kubernetes Namespace
To create a namespace, run the following command:
kubectl create namespace monitoring
The output appears as follows:
namespace/monitoring created
Parent topic: Installing Prometheus and Grafana
Creating a Helm Override File
Create a Helm override file called <WORKDIR>/override_prom.yaml to determine how the deployment is created. This file will have the following contents:
alertmanager:
  service:
    nodePort: <PROM_ALERT_K8>
    type: NodePort
prometheus:
  image:
    tag: <IMAGE_VER>
  service:
    nodePort: <PROM_K8>
    type: NodePort
grafana:
  image:
    tag: <IMAGE_VER>
  service:
    nodePort: <PROM_GRAF_K8>
    type: NodePort
  adminPassword: <PROM_ADMIN_PWD>
This example uses NodePort services because Prometheus is capable of monitoring Ingress, and you do not want issues associated with Ingress to prevent access to Prometheus. Therefore, NodePort is used to keep the monitoring stack standalone.
Note:
If you are using a container registry other than docker.io, add the following entries to the top of the file:
global:
  imageRegistry: <REPOSITORY>
For example:
global:
  imageRegistry: iad.ocir.io/mytenancy/idm
Parent topic: Installing Prometheus and Grafana
Deploying Prometheus and Grafana
Parent topic: Installing Prometheus and Grafana
Validating the Installation
kubectl get all -n monitoring
NAME READY STATUS RESTARTS AGE
pod/alertmanager-kube-prometheus-kube-prome-alertmanager-0 2/2 Running 0 15h
pod/kube-prometheus-grafana-95944596-kcd9k 3/3 Running 0 15h
pod/kube-prometheus-kube-prome-operator-84c5bc5876-klvrs 1/1 Running 0 15h
pod/kube-prometheus-kube-state-metrics-5f9b85478f-qtwnz 1/1 Running 0 15h
pod/kube-prometheus-prometheus-node-exporter-9h86g 1/1 Running 0 15h
pod/kube-prometheus-prometheus-node-exporter-gbkgb 1/1 Running 0 15h
pod/kube-prometheus-prometheus-node-exporter-l99sb 1/1 Running 0 15h
pod/kube-prometheus-prometheus-node-exporter-r7d77 1/1 Running 0 15h
pod/kube-prometheus-prometheus-node-exporter-rnq42 1/1 Running 0 15h
pod/prometheus-kube-prometheus-kube-prome-prometheus-0 2/2 Running 0 15h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 15h
service/kube-prometheus-grafana NodePort 10.97.137.130 <none> 80:30900/TCP 15h
service/kube-prometheus-kube-prome-alertmanager NodePort 10.97.153.100 <none> 9093:30903/TCP 15h
service/kube-prometheus-kube-prome-operator ClusterIP 10.108.174.205 <none> 443/TCP 15h
service/kube-prometheus-kube-prome-prometheus NodePort 10.110.156.35 <none> 9090:30901/TCP 15h
service/kube-prometheus-kube-state-metrics ClusterIP 10.96.233.108 <none> 8080/TCP 15h
service/kube-prometheus-prometheus-node-exporter ClusterIP 10.107.188.115 <none> 9100/TCP 15h
service/prometheus-operated ClusterIP None <none> 9090/TCP 15h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-prometheus-prometheus-node-exporter 5 5 5 5 5 <none> 15h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-prometheus-grafana 1/1 1 1 15h
deployment.apps/kube-prometheus-kube-prome-operator 1/1 1 1 15h
deployment.apps/kube-prometheus-kube-state-metrics 1/1 1 1 15h
NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-prometheus-grafana-95944596 1 1 1 15h
replicaset.apps/kube-prometheus-kube-prome-operator-84c5bc5876 1 1 1 15h
replicaset.apps/kube-prometheus-kube-state-metrics-5f9b85478f 1 1 1 15h
NAME READY AGE
statefulset.apps/alertmanager-kube-prometheus-kube-prome-alertmanager 1/1 15h
statefulset.apps/prometheus-kube-prometheus-kube-prome-prometheus 1/1 15h
Parent topic: Installing Prometheus and Grafana
About Grafana Dashboards
Grafana dashboards are used to visualize information from your targets. There are different types of dashboards for different products. You should install a dashboard to monitor your Kubernetes environment.
The following dashboards are relevant to an Oracle Identity Management deployment:
Table 12-4 Dashboards Relevant to an Oracle Identity Management Deployment
Dashboard | Location | Description |
---|---|---|
Kubernetes | | Used to monitor the Kubernetes cluster. |
Nginx | https://grafana.com/grafana/dashboards/9614-nginx-ingress-controller/ | Used to monitor the Ingress controller. |
WebLogic | | Included in the Oracle download from GitHub. Used to monitor the WebLogic domain. |
Apache | | Several Apache dashboards are available. This is an example. |
Oracle Database | | A sample database dashboard. |
Installing a Grafana Dashboard
- Download the Kubernetes Dashboard JSON file from the Grafana website. For example: https://grafana.com/grafana/dashboards/10856.
- Access the Grafana dashboard with the http://<K8_WORKER1>:30900 URL and log in with admin/<PROM_ADMIN_PWD>. Change your password if prompted.
- Click the search box at the top of the screen and select Import New Dashboard.
- Either drag the JSON file you downloaded in Step 1 to the Upload JSON File box or click the box and browse to the file. Click Import.
- When prompted, select the Prometheus data source. For example: Prometheus.
- Click Import. The dashboard is displayed in the Dashboards panel.
Parent topic: About Grafana Dashboards