Nodes
Introduces the node types in a Kubernetes cluster.
Kubernetes node architecture is described in detail in the upstream Kubernetes documentation.
Control Plane Nodes
Describes Kubernetes control plane nodes.
The control plane node is responsible for cluster management and for exposing the API that's used to configure and manage resources within the Kubernetes cluster. Kubernetes control plane node components can be run within Kubernetes itself, as a set of containers within dedicated pods. These components can be replicated for High Availability (HA) of the control plane nodes.
The following components are required for a control plane node:
- API Server (kube-apiserver): The Kubernetes REST API is exposed by the API Server. This component processes and validates operations and then updates information in the Cluster State Store to trigger operations on the worker nodes. The API is also the gateway to the cluster.
- Cluster State Store (etcd): Configuration data relating to the cluster state is stored in the Cluster State Store, which can roll out changes to the coordinating components such as the Controller Manager and the Scheduler. It's important to have a backup plan in place for the data stored in the Cluster State Store (see the snapshot example after this list).
- Cluster Controller Manager (kube-controller-manager): This manager is used to perform many cluster-level functions and overall application management, based on input from the Cluster State Store and the API Server.
- Scheduler (kube-scheduler): The Scheduler automatically decides where to run containers by monitoring availability of resources, quality of service, and affinity specifications.
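As a quick sanity check, you can list the control plane components, which typically run as pods in the kube-system namespace, and take an etcd snapshot as part of a Cluster State Store backup plan. This is a minimal sketch: the namespace, certificate paths, and snapshot location shown are common kubeadm defaults and placeholders, and may differ in your environment.

    # List the control plane components running as pods
    # (component names and labels vary by Kubernetes version).
    kubectl get pods -n kube-system -o wide

    # Save a snapshot of the Cluster State Store (etcd).
    # The certificate paths are kubeadm defaults; adjust as needed.
    ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key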
The control plane node can be configured as a worker node within the cluster. Therefore, the control plane node also runs the standard node services: the kubelet service, the container runtime, and the kube-proxy service. Note that it's possible to taint a node to prevent workloads from running on an inappropriate node. The kubeadm utility automatically taints the control plane node so that no other workloads or containers can run on it. This ensures that the control plane node is never placed under unnecessary load and simplifies backup and restore of the control plane node.
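As an illustration of how that taint works, the commands below inspect a node's taints and, for a single-node cluster where workloads must run on the control plane node, remove the kubeadm taint. This is a sketch: the node name is a placeholder, and the exact taint key depends on the Kubernetes version (older releases use node-role.kubernetes.io/master).

    # Show the taints applied to a node.
    kubectl describe node control-plane-node | grep -i taints

    # Single-node clusters only: remove the control plane taint so that
    # ordinary workloads can be scheduled (the trailing "-" removes it).
    kubectl taint nodes control-plane-node node-role.kubernetes.io/control-plane:NoSchedule-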
If the control plane node becomes unavailable for a period, the ability to change the cluster state is suspended but the worker nodes continue to run container applications without interruption.
For single-node clusters, when the control plane node is offline the API is unavailable, so the environment is unable to respond to node failures, and no operations that affect the overall cluster state, such as creating new resources or editing or moving existing resources, can be performed.
An HA cluster with several control plane nodes can handle more requests for control plane functionality, and the control plane replica nodes help improve cluster uptime.
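One way to confirm that the API and its backing components are reachable and healthy is to query the API server's health endpoints, which are available in recent Kubernetes releases:

    # Query the API server health endpoints.
    kubectl get --raw='/livez'
    kubectl get --raw='/readyz?verbose'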
Control Plane Replica Nodes
Describes Kubernetes control plane replica nodes.
Control plane replica nodes are responsible for duplicating the functionality and data contained on control plane nodes within a Kubernetes cluster configured for HA. To improve uptime and resilience, you can host control plane replica nodes in different zones and configure them behind a load balancer for the Kubernetes cluster.
Replica nodes are designed to mirror the control plane node configuration and the current cluster state in real time. If the control plane nodes become unavailable, the Kubernetes cluster can fail over to the replica nodes automatically. If a control plane node fails, the API remains available, so the cluster can continue to respond automatically to other node failures and to service requests to create new resources or edit existing ones within the cluster.
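With kubeadm, an HA cluster of this kind is typically created by initializing the first control plane node against a load-balanced API endpoint and then joining replicas as additional control plane nodes. This is a sketch: the endpoint, token, hash, and certificate key below are placeholders.

    # On the first control plane node: initialize the cluster against a
    # load-balanced endpoint shared by all control plane nodes.
    kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs

    # On each control plane replica node: join as an additional control plane.
    kubeadm join lb.example.com:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --certificate-key <key>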
Worker Nodes
Describes Kubernetes worker nodes.
Worker nodes within the Kubernetes cluster are used to run containerized applications and handle networking to route traffic between applications within and outside of the cluster. The worker nodes perform any actions triggered by the Kubernetes API, which runs on the control plane node.
All nodes within a Kubernetes cluster must run the following services:
- Kubelet Service (kubelet): The agent that controls communication between each worker node and the API Server running on the control plane node. This agent is also responsible for managing pod tasks, such as mounting volumes, starting containers, and reporting status.
- Container Runtime: An environment where containers can be run. In this release, the container runtimes are either runC or Kata Containers. For more information about the container runtimes, see Creating Kata Containers.
- Kube Proxy Service (kube-proxy): A service that translates service definitions to networking rules. These rules handle port forwarding and IP redirects to ensure that network traffic from outside the pod network can be transparently proxied to the pods in a service.
In all cases, these services are run from systemd as daemons.
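To verify these services on a node, you can check them through systemd directly and cross-check from the API. The unit names shown are typical but may vary between distributions and container runtimes:

    # Check the node services managed by systemd.
    systemctl status kubelet
    systemctl status crio          # or containerd, depending on the runtime

    # Recent kubelet logs, useful when a node reports NotReady.
    journalctl -u kubelet --since "10 minutes ago"

    # From the control plane: list nodes with kubelet version and runtime.
    kubectl get nodes -o wide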