Public and Private Clusters
On Compute Cloud@Customer, decide what kind of network access the cluster requires before you create it: a public cluster or a private cluster. You can't create both public and private clusters in the same VCN.
The key difference between a public cluster and a private cluster is whether you configure public or private subnets for the Kubernetes API endpoint and the worker load balancer.
The subnets for the worker nodes and control plane nodes are always private.
For the worker nodes and control plane nodes, you can configure route rules that allow access only within the VCN, or route rules that also allow access outside the VCN through a NAT gateway. This documentation names those route tables "vcn_private" and "nat_private," respectively. You can choose either of these private subnet configurations for your worker nodes and control plane nodes whether the cluster is private or public.
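As a minimal sketch of the two configurations, the following OCI CLI commands create both route tables. The OCIDs and display names are placeholders, and the "nat_private" rule assumes a NAT gateway already exists in the VCN.

```
# "vcn_private": an empty rule list allows access only within the VCN
oci network route-table create \
  --compartment-id ocid1.compartment.oc1..exampleID \
  --vcn-id ocid1.vcn.oc1..exampleID \
  --display-name vcn_private \
  --route-rules '[]'

# "nat_private": route all outbound traffic through the NAT gateway
oci network route-table create \
  --compartment-id ocid1.compartment.oc1..exampleID \
  --vcn-id ocid1.vcn.oc1..exampleID \
  --display-name nat_private \
  --route-rules '[{"destination": "0.0.0.0/0",
                   "destinationType": "CIDR_BLOCK",
                   "networkEntityId": "ocid1.natgateway.oc1..exampleID"}]'
```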
Public Clusters
A public cluster requires the following network resources:
- A public subnet for the Kubernetes API endpoint. See the instructions for creating a public "control-plane-endpoint" subnet in Creating an OKE Control Plane Subnet (Flannel Overlay) and Creating a Control Plane Subnet (VCN-Native Pod).
- A public subnet for the worker load balancer. See the instructions for creating a public "service-lb" subnet in Creating a Worker Load Balancer Subnet (Flannel Overlay) and Creating a Worker Load Balancer Subnet (VCN-Native Pod).
- An internet gateway to connect resources on a public subnet to the internet using public IP addresses.
- A NAT gateway for outbound internet access. A NAT gateway connects resources on a private subnet to the internet without exposing private IP addresses. Both gateways are shown in the sketch after this list.
- At least three free public IP addresses. Free public IP addresses are required for the NAT gateway, the control plane load balancer, and the worker load balancer.
The worker load balancer requires a free public IP address to expose applications, and might require more free public IP addresses depending on the applications running on the pods.
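As a minimal sketch of the two gateways described above, the following OCI CLI commands create an internet gateway and a NAT gateway in the OKE VCN. The OCIDs and display names are placeholders.

```
# Internet gateway: connects public-subnet resources to the internet
oci network internet-gateway create \
  --compartment-id ocid1.compartment.oc1..exampleID \
  --vcn-id ocid1.vcn.oc1..exampleID \
  --display-name oke-igw \
  --is-enabled true

# NAT gateway: outbound-only internet access for private-subnet resources
oci network nat-gateway create \
  --compartment-id ocid1.compartment.oc1..exampleID \
  --vcn-id ocid1.vcn.oc1..exampleID \
  --display-name oke-natgw
```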
Private Clusters
If you create multiple OKE VCNs, the CIDR of each VCN must be unique. The CIDR of a private cluster's VCN can't overlap with any other VCN CIDR or with any on-premises CIDR: the IP address ranges must be exclusive to each VCN.
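For example, a hypothetical plan that satisfies these rules might assign 10.0.0.0/16 to the first OKE VCN and 10.1.0.0/16 to a second OKE VCN, avoiding an on-premises network that uses 172.16.0.0/16 (all values are placeholders):

```
# CIDR plan (placeholders):
#   on-premises network: 172.16.0.0/16
#   first OKE VCN:       10.0.0.0/16
#   second OKE VCN:      10.1.0.0/16  <- no overlap with the ranges above
oci network vcn create \
  --compartment-id ocid1.compartment.oc1..exampleID \
  --cidr-block 10.1.0.0/16 \
  --display-name oke-vcn-2
```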
A private cluster has the following network resources:
- A private subnet for the Kubernetes API endpoint. See the instructions for creating a private "control-plane-endpoint" subnet in Creating an OKE Control Plane Subnet (Flannel Overlay) and Creating a Control Plane Subnet (VCN-Native Pod).
- A private subnet for the worker load balancer. See the instructions for creating a private "service-lb" subnet in Creating a Worker Load Balancer Subnet (Flannel Overlay) and Creating a Worker Load Balancer Subnet (VCN-Native Pod).
- A route table with no route rules. This route table allows access only within the VCN.
- (Optional) A Local Peering Gateway (LPG). Use an LPG to allow access to the cluster from an instance running in a different VCN. Create an LPG on the OKE VCN, create an LPG on a second VCN on the Compute Cloud@Customer, and then use the LPG connect command to peer the two LPGs. Peered VCNs can be in different tenancies, but their CIDRs can't overlap. See Connecting VCNs through a Local Peering Gateway. A sketch of this setup appears after this list.
Create a route rule to steer VCN subnet traffic to and from the LPGs, and security rules to allow or deny specific types of traffic. See Creating a VCN (Flannel Overlay) or Creating a VCN (VCN-Native Pod) for the route table to add to the OKE VCN and a similar route table to add to the second VCN. Add the same route rule on the second VCN, specifying the OKE VCN CIDR as the destination.
Install the OCI SDK and kubectl on the instance on the second VCN and connect to the private cluster. See Creating a Kubernetes Configuration File.
- (Optional) A Dynamic Routing Gateway (DRG). Use a DRG to enable access from the on-premises network. A DRG allows traffic between the OKE VCN and the on-premises network's IP address space. Create the DRG in the OKE VCN compartment, and then attach the OKE VCN to that DRG. See Connecting to the On-Premises Network through a Dynamic Routing Gateway (DRG). A sketch of this setup also follows this list.
Create a route rule to steer traffic to the on-premises data center network's IP address space. See Creating a VCN (Flannel Overlay) or Creating a VCN (VCN-Native Pod) for the route table to add to the OKE VCN.
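The LPG peering described above might look like the following OCI CLI sketch. All OCIDs and display names are placeholders, the route rule assumes the second VCN uses CIDR 10.1.0.0/16, and note that updating a route table replaces its entire rule list.

```
# Create an LPG on the OKE VCN and another on the second VCN
oci network local-peering-gateway create \
  --compartment-id ocid1.compartment.oc1..exampleID \
  --vcn-id ocid1.vcn.oc1..okevcnID \
  --display-name oke-lpg
oci network local-peering-gateway create \
  --compartment-id ocid1.compartment.oc1..exampleID \
  --vcn-id ocid1.vcn.oc1..secondvcnID \
  --display-name second-lpg

# Peer the two LPGs (the LPG connect command)
oci network local-peering-gateway connect \
  --local-peering-gateway-id ocid1.localpeeringgateway.oc1..okelpgID \
  --peer-id ocid1.localpeeringgateway.oc1..secondlpgID

# On the OKE VCN, steer traffic bound for the second VCN to the LPG;
# add the mirror-image rule on the second VCN with the OKE VCN CIDR
oci network route-table update --force \
  --rt-id ocid1.routetable.oc1..okertID \
  --route-rules '[{"destination": "10.1.0.0/16",
                   "destinationType": "CIDR_BLOCK",
                   "networkEntityId": "ocid1.localpeeringgateway.oc1..okelpgID"}]'
```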
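Likewise, a minimal sketch of the DRG setup, assuming the on-premises network uses 172.16.0.0/16 (all OCIDs and display names are placeholders):

```
# Create a DRG in the OKE VCN compartment and attach the OKE VCN to it
oci network drg create \
  --compartment-id ocid1.compartment.oc1..exampleID \
  --display-name oke-drg
oci network drg-attachment create \
  --drg-id ocid1.drg.oc1..exampledrgID \
  --vcn-id ocid1.vcn.oc1..okevcnID

# Steer traffic bound for the on-premises network to the DRG
oci network route-table update --force \
  --rt-id ocid1.routetable.oc1..okertID \
  --route-rules '[{"destination": "172.16.0.0/16",
                   "destinationType": "CIDR_BLOCK",
                   "networkEntityId": "ocid1.drg.oc1..exampledrgID"}]'
```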