Creating an OKE Worker Node Pool
Learn how to create OKE worker node pools on Compute Cloud@Customer for a workload cluster.
Nodes are Compute Cloud@Customer compute instances. When you create a worker node pool, you specify the number of nodes to create and other parameters that define instances.
You can't customize the OKE cloud-init scripts.
To configure proxy settings, use the CLI or API to set the proxy in node metadata. If the cluster is using VCN-Native Pod Networking, add 169.254.169.254 to the noproxy setting.
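For example, on the CLI the proxy is set with the --node-metadata option of oci ce node-pool create, as described in the CLI procedure on this page. The following is a sketch with placeholder values; you can also pass the JSON as a file argument, as shown in the CLI procedure:
--node-metadata '{"crio-proxy": "http://<your_proxy>.<your_domain_name>:<your_port>", "crio-noproxy": "localhost,127.0.0.1,<your_domain_name>,ocir.io,<Kubernetes_cidr>,<pods_cidr>"}'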
-
On the Compute Cloud@Customer Console dashboard, click Containers / View Kubernetes Clusters (OKE).
If the cluster to which you want to attach a node pool is not listed, select a different compartment from the compartment menu at the top of the page.
-
Click the name of the cluster to which you want to add a node pool.
-
On the cluster details page, under Resources, click Node Pools.
-
On the Node Pools list, click Add Node Pool.
-
In the Add Node Pool dialog box, provide the following information:
-
Name: The name of the new node pool. Avoid entering confidential information.
-
Compartment: The compartment in which to create the new node pool.
-
Node Pool Options: In the Node Count field, enter the number of nodes you want in this node pool. The default is 0. The maximum number is 128 per cluster, which can be distributed across multiple node pools.
-
Network Security Group: If you check the box to enable network security groups, click Add Network Security Group and select an NSG from the drop-down list. You might need to change the compartment to find the NSG you want. The primary VNIC from the worker subnet will be attached to this NSG.
-
Placement configuration
-
Worker Node Subnet: Select a subnet with a configuration like that of the "worker" subnet described in Creating a Worker Subnet (Flannel Overlay) or Creating a Worker Subnet (VCN-Native Pod). For a public cluster, use the NAT private version of the "worker" subnet; for a private cluster, use the VCN-only private version. Select only one subnet. The subnet must have rules that allow communication with the control plane endpoint, must use a private route table, and must have a security list like the worker-seclist security list described in Creating a Worker Subnet (Flannel Overlay) or Creating a Worker Subnet (VCN-Native Pod).
-
Fault Domain: Select a fault domain, or select Automatically select the best fault domain, which is the default option.
-
-
Source Image: Select an image.
-
Select the Platform Image Source Type.
-
Select an image from the list.
The image list has columns Operating System, OS Version, and Kubernetes Version. You can use the drop-down menu arrow to the right of the OS Version or Kubernetes Version to select a different version.
If the image that you want to use is not listed, use the CLI procedure and specify the OCID of the image. To get the OCID of the image you want, use the ce node-pool get command for a node pool where you used this image before.
Note
The image that you specify must not have a Kubernetes version that's newer than the Kubernetes version that you specified when you created the cluster. The Kubernetes Version for the cluster is in a column of the cluster list table.
-
-
Shape: The shape of the worker nodes. The shape is VM.PCAStandard.E5.Flex; it can't be changed.
Specify the number of OCPUs you want. You can optionally specify the total amount of memory you want. The default value for gigabytes of memory is 16 times the number you specify for OCPUs. Click inside each value field to see the minimum and maximum allowed values.
-
Boot Volume: (Optional) Check the box to specify a custom boot volume size.
Boot volume size (GB): The default boot volume size for the selected image is shown. To specify a larger size, enter a value from 50 to 16384 in gigabytes (50 GB to 16 TB) or use the increment and decrement arrows.
If you specify a custom boot volume size, you need to extend the partition to take advantage of the larger size. Oracle Linux platform images include the oci-utils package. Use the oci-growfs command from that package to extend the root partition and then grow the file system. See oci-growfs.
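For example, after the node is running you can connect to it over SSH and run oci-growfs. This is a sketch; the path shown is the usual location provided by the oci-utils package, so verify it on your image:
$ sudo /usr/libexec/oci-growfs -n     # preview the partition change without applying it
$ sudo /usr/libexec/oci-growfs -y     # extend the root partition and grow the file system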
-
Pod Communication (VCN-Native Pod Networking clusters only)
Pod Communication Subnet: Select a subnet that has configuration like the "pod" subnet described in Creating a Pod Subnet (VCN-Native Pod).
Number of Pods per node: The maximum number of pods that you want to run on a single worker node in a node pool. The default value is 31. You can enter a number from 1 to 110. The number of VNICs allowed by the shape you specify (see "Shape" above) limits this maximum pods number. See Node Shapes and Number of Pods. To conserve the pod subnet's address space, reduce the maximum number of pods you want to run on a single worker node. This reduces the number of IP addresses that are pre-allocated in the pod subnet.
If you check the box to Use Security Rules in Network Security Group (NSG), select the Add Network Security Group button and select an NSG from the drop-down list. You might need to change the compartment to find the NSG you want. Secondary VNICs from the pod subnet will be attached to this NSG.
-
Cordon and Drain: (Optional) Use the arrows to decrease or increase the number of minutes of eviction grace duration. You cannot deselect "Force terminate after grace period." Nodes are deleted after their pods are evicted or at the end of the eviction grace duration, even if not all pods are evicted.
For descriptions of cordon and drain and eviction grace duration, click the CLI tab on this page, and see Node and node pool deletion settings.
-
SSH Key: The public SSH key for the worker nodes. Either upload the public key file or copy and paste the content of the file.
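If you don't already have a key pair, you can generate one with ssh-keygen; the file name here is only an example:
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/oke_worker_key
Then upload or paste the contents of the ~/.ssh/oke_worker_key.pub public key file.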
-
Kubernetes Labels: Click the Add Kubernetes Label button and enter a key name and value. You can use these labels to target pods for scheduling on specific nodes or groups of nodes. See the description and example in the CLI procedure.
-
Node Pool Tags: Defined or free-form tags for the node pool resource.
Note
Don't specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated and only applied to nodes (instances), not to the node pool resource.
-
Node Tags: Defined or free-form tags that are applied to every node in the node pool.
Important
Don't specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated.
-
-
Click Add Node Pool.
The details page for the node pool is displayed. Under Resources, click Work Requests to see the progress of the node pool creation and see nodes being added to the Nodes list. The work request status is Accepted until the cluster is in either Active state or Failed state.
To identify these nodes in a list of instances, note that the names of these nodes are in the format oke-ID, where ID is the first 32 characters after the system name (the pca_name or ccc_name) in the node pool OCID. Search for the instances in the list whose names contain the ID string from this node pool OCID. These nodes also have the cluster OCID in the OraclePCA-OKE/cluster_id tag.
What's Next:
-
Configure any registries or repositories that the worker nodes need. Ensure you have access to a self-managed public or intranet container registry to use with the OKE service and your application images.
-
Create a service to expose containerized applications outside the Compute Cloud@Customer. See Exposing Containerized Applications.
-
Create persistent storage for applications to use. See Adding Storage for Containerized Applications.
To change the properties of existing nodes, you could instead create a new node pool with the new settings and move the work to the new nodes.
-
Use the oci ce node-pool create command and required parameters to create a new node pool.
oci ce node-pool create --cluster-id <cluster_OCID> --compartment-id <compartment_OCID> --name <pool_name> --node-shape <node_shape_name> [OPTIONS]
-
Get the information you need to run the command.
-
The OCID of the compartment where you want to create the node pool:
oci iam compartment list
-
The OCID of the cluster for this node pool:
oci ce cluster list
-
The name of the node pool. Avoid entering confidential information.
-
The placement configuration for the nodes, including the worker subnet OCID and fault domain. See the "Placement configuration" description in the Console procedure. Use the following command to show the content and format of this option:
$ oci ce node-pool create --generate-param-json-input placement-configs
Use the following command to list fault domains:
oci iam fault-domain list
Don't specify more than one fault domain or more than one subnet in the placement configuration. To allow the system to select the best fault domains, don't specify any fault domain.
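For example, a placement configuration that specifies only the availability domain and the worker subnet, letting the system choose fault domains, might look like the following sketch (the subnet OCID is a placeholder; use the --generate-param-json-input command above for the full schema):
--placement-configs '[{"availabilityDomain": "AD-1", "subnetId": "ocid1.subnet.unique_ID"}]'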
-
(VCN-Native Pod Networking clusters only) The OCID of the pod subnet. See Creating a Pod Subnet (VCN-Native Pod). See also the description in Pod Communication in the preceding Console procedure. Use the --pod-subnet-ids option. Although the --pod-subnet-ids option value is an array, you can specify only one pod subnet OCID.
The maximum number of pods that you want to run on a single worker node in a node pool. Use the --max-pods-per-node option. The default value is 31. You can enter a number from 1 to 110. The number of VNICs allowed by the shape you specify (see "The name of the shape" below) limits this maximum pods number. See Node Shapes and Number of Pods. To conserve the pod subnet's address space, reduce the maximum number of pods you want to run on a single worker node. This reduces the number of IP addresses that are pre-allocated in the pod subnet.
(Optional) The OCID of the Network Security Group to use for the pods in this node pool. Use the --pod-nsg-ids option. You can specify up to five NSGs.
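For example, the following option arguments select one pod subnet, cap the pods per node, and attach one pod NSG (a sketch with placeholder OCIDs):
--pod-subnet-ids '["ocid1.subnet.unique_ID"]' --max-pods-per-node 31 --pod-nsg-ids '["ocid1.networksecuritygroup.unique_ID"]'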
-
The OCID of the image to use for the nodes in this node pool.
Use the following command to get the OCID of the image that you want to use:
$ oci compute image list --compartment-id compartment_OCID
If the image that you want to use isn't listed, you can get the OCID of the image from the output of the ce node-pool get command for a node pool where you used this image before.
Note
The image that you specify must have -OKE- in its display-name and must not have a Kubernetes version that's newer than the Kubernetes version that you specified when you created the cluster.
The Kubernetes version for the cluster is shown in cluster list output. The Kubernetes version for the image is shown in the display-name property in image list output. The Kubernetes version of the following image is 1.29.9.
"display-name": "uln-pca-Oracle-Linux8-OKE-1.29.9-20250325.oci"
Don't specify the --kubernetes-version option in the node-pool create command.
You can specify a custom boot volume size in gigabytes. The default boot volume size is 50 GB. To specify a custom boot volume size, use the --node-source-details option to specify both the boot volume size and the image. You can't specify both --node-image-id and --node-source-details. Use the following command to show the content and format of the node source details option:
$ oci ce node-pool create --generate-param-json-input node-source-details
If you specify a custom boot volume size, you need to extend the partition to take advantage of the larger size. Oracle Linux platform images include the oci-utils package. Use the oci-growfs command from that package to extend the root partition and then grow the file system. See oci-growfs.
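For example, a --node-source-details argument that selects an image and a 100 GB boot volume might look like the following sketch (confirm the exact field names with the --generate-param-json-input command above, and replace the image OCID):
--node-source-details '{"sourceType": "IMAGE", "imageId": "ocid1.image.unique_ID", "bootVolumeSizeInGBs": 100}'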
-
The name of the shape of the worker nodes in this node pool. For Compute Cloud@Customer X10 systems, the worker node shape is VM.PCAStandard.E5.Flex.
Specify the shape configuration, as shown in the following example. You must provide a value for ocpus. The memoryInGBs property is optional; the default value in gigabytes is 16 times the number of ocpus.
--node-shape-config '{"ocpus": 32, "memoryInGBs": 512}'
Note
Allocate at least 2 OCPUs and 32 GB memory for every 10 running pods. You might need to allocate more resources, depending on the workloads that are planned. See Resource Management for Pods and Containers.
-
(Optional) The OCID of the Network Security Group to use for the nodes in this node pool. Use the --nsg-ids option. Do not specify more than one NSG.
-
(Optional) Labels. Setting labels on nodes enables you to target pods for scheduling on specific nodes or groups of nodes. Use this functionality to ensure that specific pods only run on nodes with certain isolation, security, or regulatory properties.
Use the --initial-node-labels option to add labels to the nodes. Labels are a list of key/value pairs to add to nodes after they join the Kubernetes cluster. There are metadata key restrictions. See Metadata Key Restrictions.
The following is an example label to apply to the nodes in the node pool:
--initial-node-labels '[{"key":"disktype","value":"ssd"}]'
An easy way to select nodes based on their labels is to use nodeSelector in the pod configuration. Kubernetes only schedules the pod onto nodes that have each of the labels specified in the nodeSelector section.
The following example excerpt from a pod configuration specifies that pods that use this configuration must run on nodes that have the ssd disk type label:
nodeSelector:
  disktype: ssd
-
(Optional) Node metadata. Use the --node-metadata option to attach custom user data to nodes. See the following proxy settings item for a specific example.
See Metadata Key Restrictions. The maximum size of node metadata is 32,000 bytes.
-
(Optional) Proxy settings. If your network requires proxy settings to enable worker nodes to reach outside registries or repositories, for example, create an argument for the --node-metadata option.
In the --node-metadata option argument, provide values for crio-proxy and crio-noproxy as shown in the following example file argument:
{
  "crio-proxy": "http://<your_proxy>.<your_domain_name>:<your_port>",
  "crio-noproxy": "localhost,127.0.0.1,<your_domain_name>,ocir.io,<Kubernetes_cidr>,<pods_cidr>"
}
If the cluster is using VCN-Native Pod Networking, add 169.254.169.254 to the noproxy setting, as in the following example:
"crio-noproxy": "localhost,127.0.0.1,your_domain_name,ocir.io,Kubernetes_cidr,pods_cidr,169.254.169.254"
-
(Optional) Node and node pool deletion settings. You can specify how to handle node deletion when you delete a node pool, delete a specified node, decrement the size of the node pool, or change the placement configuration of the node pool's nodes. These node deletion parameters can also be set or changed when you update the node pool, delete a specified node, or delete the node pool.
To specify node pool deletion settings, create an argument for the --node-eviction-node-pool-settings option. You can specify the eviction grace duration (evictionGraceDuration) for nodes. Nodes are always deleted after their pods are evicted or at the end of the eviction grace duration.
-
Eviction grace duration. This value specifies the amount of time to allow to cordon and drain worker nodes.
A node that's cordoned can't have new pods placed on it. Existing pods on that node aren't affected.
When a node is drained, each pod's containers terminate gracefully and perform any necessary cleanup.
The eviction grace duration value is expressed in ISO 8601 format. The default value and the maximum value are 60 minutes (PT60M). The minimum value is 20 seconds (PT20S). OKE always tries to drain nodes for at least 20 seconds.
-
Force delete. Nodes are always deleted after their pods are evicted or at the end of the eviction grace duration. After the default or specified eviction grace duration, the node is deleted, even if one or more pod containers aren't completely drained.
The following shows an example argument for the --node-eviction-node-pool-settings option. If you include the isForceDeleteAfterGraceDuration property, then its value must be true. Nodes are always deleted after their pods are evicted or at the end of the eviction grace duration.
--node-eviction-node-pool-settings '{"evictionGraceDuration": "PT30M", "isForceDeleteAfterGraceDuration": true}'
Note
If you use Terraform and you specify node_eviction_node_pool_settings, then you must explicitly set is_force_delete_after_grace_duration to true, even though true is the default value. The is_force_delete_after_grace_duration property setting isn't optional if you're using Terraform.
-
-
(Optional) Tags. Add defined or free-form tags for the node pool resource by using the --defined-tags or --freeform-tags options. Do not specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated and only applied to nodes (instances), not to the node pool resource.
To add defined or free-form tags to all nodes in the node pool, use the --node-defined-tags and --node-freeform-tags options.
Important
Do not specify values for the OraclePCA-OKE.cluster_id defined tag or for the ClusterResourceIdentifier free-form tag. These tag values are system-generated.
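For example, to add the same free-form tag to the node pool resource and to every node in the pool (placeholder key and value; use tags that fit your own tagging scheme):
--freeform-tags '{"department": "finance"}' --node-freeform-tags '{"department": "finance"}'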
-
-
Run the create node pool command.
Example:
See the Using the Console procedure for information about the options shown in this example, and other options such as --node-boot-volume-size-in-gbs and --nsg-ids. The --pod-subnet-ids option is only applicable if the cluster uses VCN-Native Pod Networking.
$ oci ce node-pool create \
--cluster-id ocid1.cluster.unique_ID --compartment-id ocid1.compartment.unique_ID \
--name node_pool_name --node-shape shape_name --node-image-id ocid1.image.unique_ID \
--placement-configs '[{"availabilityDomain":"AD-1","subnetId":"ocid1.subnet.unique_ID"}]' \
--pod-subnet-ids '["ocid1.subnet.unique_ID"]' --size 10 --ssh-public-key "public_key_text"
Use the work-request get command to check the status of the node pool create operation. The work request OCID is in created-by-work-request-id in the metadata section of the cluster get output.
$ oci ce work-request get --work-request-id workrequest_OCID
The work request status will be ACCEPTED until the cluster is in either Active state or Failed state.
To identify these nodes in a list of instances, note that the names of these nodes are in the format oke-ID, where ID is the first 32 characters after the system name in the node pool OCID. Search for the instances in the list whose names contain the ID string from this node pool OCID.
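For example, you could list instance display names in the compartment and filter on the ID string (a sketch; the compartment OCID and ID fragment are placeholders):
$ oci compute instance list --compartment-id <compartment_OCID> --query 'data[*]."display-name"' --output table | grep <ID>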
For a complete list of CLI commands, flags, and options, see the Command Line Reference.
What's Next:
-
Configure any registries or repositories that the worker nodes need. Ensure you have access to a self-managed public or intranet container registry to use with the OKE service and your application images.
-
Create a service to expose containerized applications outside the Compute Cloud@Customer. See Exposing Containerized Applications.
-
Create persistent storage for applications to use. See Adding Storage for Containerized Applications.
To change the properties of existing nodes, you could instead create a new node pool with the new settings and move the work to the new nodes.
-
Use the CreateNodePool operation to create a new node pool.
For information about using the API and signing requests, see REST APIs and Security Credentials. For information about SDKs, see Software Development Kits and Command Line Interface.
-
Configure any registries or repositories that the worker nodes need. Ensure you have access to a self-managed public or intranet container registry to use with the OKE service and your application images.
-
Create a service to expose containerized applications outside the Compute Cloud@Customer. See Exposing Containerized Applications.
-
Create persistent storage for applications to use. See Adding Storage for Containerized Applications.
To change the properties of existing nodes, you could instead create a new node pool with the new settings and move the work to the new nodes.