3.3 Key Components Used By an OAM Deployment
An Oracle Access Management (OAM) deployment uses Kubernetes components such as pods and services.
Container Image
A container image is an immutable, static file that includes executable code. When deployed into Kubernetes, it is the container image that is used to create a pod. The image contains the system libraries, system tools, and Oracle binaries required to run in Kubernetes. The image shares the OS kernel of its host machine.
A container image is compiled from file system layers built onto a parent or base image. These layers promote the reuse of various components. So, there is no need to create everything from scratch for every project.
A pod is based on a container image, which is read-only. Each pod has its own instance of the container image.
A container image contains all the software and libraries required to run the product. It does not require the entire operating system. Many container images do not include standard operating system utilities such as the vi editor or ping.
When you upgrade a pod, you are actually instructing the pod to use a different container image. For example, if the container image for Oracle Access Management is based on the July Critical Patch Update (CPU), then to upgrade the pod to use the October CPU image, you have to tell the pod to use the October CPU image and restart the pod. Further information on upgrading can be found in Patching and Upgrading.
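For example, if the OAM pods were managed by a standard Kubernetes deployment, the image change could be made with kubectl. This is a minimal sketch only; the deployment name, registry, and image tag below are hypothetical, and an OAM domain managed by an operator has its own image-update procedure, described in Patching and Upgrading:

    # Point the deployment at the October CPU image (all names and tags are examples).
    kubectl set image deployment/oam-server \
        oam-server=container-registry.example.com/oam:october-cpu -n oamns

    # Kubernetes then restarts the pods one by one using the new image.
    kubectl rollout status deployment/oam-server -n oamns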
Oracle containers are built using a specific user and group ID. Oracle supplies all of its container images using the user ID 1000 and group ID 0. To enable writing to file systems or persistent volumes, grant write access to this user ID.
If your organization already uses this user or group ID, you should reconfigure the image to use different IDs. That procedure is outside the scope of this document.
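For illustration, one way to grant this access is to set the ownership on the underlying file system (for example, chown -R 1000:0 on the NFS export) or to set a pod-level security context. The following fragment is a sketch of the latter, assuming a pod specification that you control and storage that supports the fsGroup setting:

    # Fragment of a pod specification (illustrative only).
    securityContext:
      runAsUser: 1000    # Oracle images run as user ID 1000
      runAsGroup: 0      # ...and group ID 0
      fsGroup: 0         # makes mounted volumes writable by group 0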
Pods
A pod is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers. A pod's contents are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific logical host that contains one or more application containers which are relatively tightly coupled.
In an Oracle Access Management (OAM) deployment, each OAM server runs in a different pod.
If a worker node becomes unresponsive, the pods running on it are not deleted automatically; they remain in a Terminating or Unknown state. Such a pod is removed from the apiserver only when one of the following occurs:
- You or the Node Controller deletes the node object.
- The kubelet on the unresponsive node starts responding, terminates the pod, and removes the entry from the apiserver.
- You force delete the pod.
Oracle recommends the first or the second approach as a best practice. If a node is confirmed to be dead (for example, it is permanently disconnected from the network or powered down), delete the node object. If the node suffers from a network partition, try to resolve the issue or wait for the partition to heal. When the partition heals, the kubelet completes the deletion of the pod and frees up its name in the apiserver.
Typically, the system completes the deletion when the pod is no longer running on a node, or when an administrator has deleted the node. You can override this by force deleting the pod.
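For illustration, the kubectl commands for the first and third approaches might look like the following; the node and pod names are hypothetical:

    # Approach 1: delete the node object after confirming that the node is dead.
    kubectl delete node worker-node-2

    # Approach 3: force delete the pod (use only as a last resort).
    kubectl delete pod oam-server1 -n oamns --grace-period=0 --force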
Pod Scheduling
By default, Kubernetes schedules a pod to run on any worker node that has sufficient capacity to run it. In some situations, you may want scheduling to occur only on a subset of the available worker nodes. You can achieve this type of scheduling by using Kubernetes labels.
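For example, you might label the worker nodes that are allowed to run OAM and then restrict the pods to those nodes with a node selector. The label key and value below are arbitrary examples:

    # Label the worker nodes that should run OAM pods.
    kubectl label node worker-node-1 oracle-apps=oam

    # Fragment of a pod specification that restricts scheduling to those nodes.
    spec:
      nodeSelector:
        oracle-apps: oam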
Persistent Volumes
When a pod is created, it is based on a container image supplied by Oracle for the product you are deploying. A runtime environment is created from that image, and the environment is refreshed from the container image every time the pod is restarted. This means that any changes you make inside the runtime environment are lost whenever the container restarts.
A persistent volume is an area of disk, usually provided by NFS, that is available to the pod but is not part of the image itself. This means that the data you want to keep, for example the OAM domain configuration, is still available after you restart a pod; that is, the data is persistent.
You can make a persistent volume (PV) available to your pods in one of two ways:
- Mount the PV to the pod directly, so that wherever the pod starts in the cluster the PV is available to it. The upside of this approach is that a pod can be started anywhere without extra configuration. The downside is that there is a single NFS volume mounted to the pod; if the NFS volume becomes corrupted, you have to either restore from a backup or fail over to a disaster recovery site.
- Mount the PV to the worker node and have the pod interact with it as if it were a local file system. The advantage of this approach is that you can have different NFS volumes mounted to different worker nodes, providing built-in redundancy. The disadvantages of this approach are:
- Increased management overhead.
- Pods have to be restricted to nodes that mount a specific copy of the file system. For example, all odd numbered pods use odd numbered worker nodes mounted to file system 1, and all even numbered pods use even numbered worker nodes mounted to file system 2.
- File systems have to be mounted on every worker node on which a pod may be started. This is not an issue in a small cluster, but it can become one in a large cluster.
- Worker nodes become linked to the application. When a worker node undergoes maintenance, you need to ensure that file systems and appropriate labels are restored.
If maximum redundancy and availability are your goal, you should adopt this second approach.
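A minimal sketch of the first approach, an NFS-backed persistent volume and a claim that pods can mount, is shown below. The names, server, export path, capacity, and storage class are placeholders:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: oam-domain-pv                  # example name
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany                    # required when several pods share the volume
      persistentVolumeReclaimPolicy: Retain
      storageClassName: oam-domain-storage-class
      nfs:
        server: nfs-server.example.com
        path: /export/oampv
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: oam-domain-pvc                 # example name
      namespace: oamns
    spec:
      storageClassName: oam-domain-storage-class
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi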
Kubernetes Services
Kubernetes services expose the processes running in the pods regardless of the number of pods that are running. For example, the OAM servers, each running in a different pod, have a service associated with them. This service redirects your requests to the individual pods in the cluster.
Kubernetes services can be internal or external to the cluster. Internal services are of the type ClusterIP, and external services are of the type NodePort.
Some deployments use a proxy in front of the service. This proxy is typically provided by an 'Ingress' load balancer such as NGINX. Ingress allows a level of abstraction over the underlying Kubernetes services.
Using Ingress exposed as a NodePort Service achieves a similar result to using individual NodePort Services, except that Ingress allows consolidated management of those services.
This guide describes how to use Ingress with the NGINX Ingress Controller.
Kubernetes NodePort Services use a limited port range (by default, 30000 to 32767). Therefore, when a Kubernetes service is created, there is a port mapping. For instance, if a pod uses port 7001, then a Kubernetes/Ingress service may use 30701 as its port, mapping port 30701 to port 7001 internally. Note that if you are using individual NodePort Services, the corresponding Kubernetes service port is reserved on every worker node in the cluster.
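For example, a NodePort Service that maps port 30701 to an OAM server listening on port 7001 might look like the following sketch; the service name and selector label are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: oam-nodeport            # example name
      namespace: oamns
    spec:
      type: NodePort
      selector:
        app: oam-server             # example label on the OAM pods
      ports:
        - port: 7001                # service port inside the cluster
          targetPort: 7001          # port on which the container listens
          nodePort: 30701           # reserved on every worker node in the cluster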
Kubernetes/Ingress services are known to each worker node, regardless of the worker node on which the containers are running. Therefore, a load balancer is often placed in front of the worker nodes to simplify routing and worker node scalability.
To interact with a service, you refer to it using the format worker_node_hostname:service_port. To avoid tying clients to the host name of a particular worker node, you can place one of the following in front of the services:
- Load balancer
- Direct proxy calls
- DNS CNAMEs
Ingress Controller
There are two ways of interacting with your Kubernetes services. You can create an externally facing service for each Kubernetes object you want to access. This type of service is known as the Kubernetes NodePort Service. Alternatively, you can use an ingress service inside the Kubernetes cluster to redirect requests internally.
Ingress is a proxy server that sits inside the Kubernetes cluster. Unlike NodePort Services, which reserve a port per service on every worker node in the cluster, an Ingress service lets you reserve single ports for all HTTP/HTTPS traffic. An Ingress service has the concept of virtual hosts and can terminate SSL, if required. There are various implementations of Ingress; this guide describes the installation and configuration of NGINX. The installation is similar for other Ingress services, but the command syntax may differ, so if you use a different Ingress, see the appropriate vendor documentation for the equivalent commands. Ingress can proxy HTTP, HTTPS, LDAP, and LDAPS protocols. Ingress is not mandatory.
You can expose the Ingress controller in one of the following ways:
- Load Balancer: A load balancer provides an external IP address to which you can connect to interact with the Kubernetes services.
- NodePort: In this mode, Ingress acts as a simple load balancer between the Kubernetes services. The difference between using an Ingress NodePort Service and individual NodePort Services is that the Ingress controller reserves one port for each service type it offers, for example, one for all HTTP communications, another for all LDAP communications, and so on. Individual NodePort Services reserve one port for each service and type used in an application.
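As an illustration, an Ingress resource that routes HTTP traffic for a virtual host to an OAM service might look like the following sketch; the host name, service name, and port are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: oam-ingress                         # example name
      namespace: oamns
    spec:
      ingressClassName: nginx
      rules:
        - host: login.example.com               # example virtual host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: oam-server-service    # example Kubernetes service
                    port:
                      number: 7001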
Domain Name System
Every service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client pod's DNS search list includes the pod's own namespace and the cluster's default domain.
Kubernetes creates DNS records for the following objects:
- Services: A or AAAA records, with the name format my-svc.namespace.svc.cluster-example.com
- Pods: A or AAAA records, with the name format podname.namespace.pod.cluster-example.com
Kubernetes uses a built-in DNS server called CoreDNS for internal name resolution.
External name resolution (names used outside of the cluster, for example, loadbalancer.example.com) may not be possible inside the Kubernetes cluster. If you encounter this issue, you can use one of the following options:
- Option 1 - Add a secondary DNS server to CoreDNS for the company domain.
- Option 2 - Add individual host entries to CoreDNS for the external hosts.
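For example, Option 1 can be implemented by adding a forwarding block for the company domain to the CoreDNS Corefile, which is held in the coredns ConfigMap in the kube-system namespace. The domain and DNS server address below are placeholders:

    # Additional server block appended to the CoreDNS Corefile (example values).
    example.com:53 {
        errors
        cache 30
        forward . 10.10.0.5    # IP address of the corporate DNS server
    }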
Namespaces
Namespaces enable you to organize clusters into virtual sub-clusters which are helpful when different teams or projects share a Kubernetes cluster. You can add any number of namespaces within a cluster, each logically separated from others but with the ability to communicate with each other.
In this guide, the OAM deployment uses the namespace oamns.
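The namespace can be created with a single kubectl command:

    kubectl create namespace oamns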