3 Installing Oracle Cloud Native Environment
This chapter discusses how to prepare the nodes to be used in an Oracle Cloud Native Environment deployment. When the nodes are prepared, they must be installed with the Oracle Cloud Native Environment software packages. When the nodes are set up with the software, you can use the Platform CLI to perform a deployment of a Kubernetes cluster and optionally install other modules.
This chapter shows you how to perform the steps to set up the hosts and install the Oracle Cloud Native Environment software, ready to perform a deployment of modules. When you have set up the nodes, deploy the Kubernetes module to install a Kubernetes cluster using the steps in Kubernetes Module.
Installation Overview
This section provides a high-level overview of setting up Oracle Cloud Native Environment.
To install Oracle Cloud Native Environment:
- Prepare the operator node: An operator node is a host used to perform and manage the deployment of environments. The operator node must be set up with the Platform API Server and the Platform CLI (olcnectl).
- Prepare the Kubernetes nodes: The Kubernetes control plane and worker nodes must be set up with the Platform Agent.
- Set up a load balancer: If you're deploying a highly available Kubernetes cluster, set up a load balancer. You can set up an external load balancer, or use the container-based load balancer deployed by the Platform CLI.
- Set up X.509 Certificates: X.509 Certificates are used to provide secure communication between the Kubernetes nodes. You must set up the certificates before you create an environment and perform a deployment.
- Start the services: Start the Platform API Server and Platform Agent services on nodes using the X.509 Certificates.
- Create an environment: Create an environment into which you can install the Kubernetes module and any other optional modules.
- Deploy modules: Deploy the Kubernetes module and any other optional modules.
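As a preview of how these steps map to the Platform CLI, the following sketch shows the general shape of the commands used after the nodes are set up. The names and values are examples only; the exact options are covered in the sections below and in Kubernetes Module:

olcnectl environment create \
  --api-server 127.0.0.1:8091 \
  --environment-name myenvironment

olcnectl module create \
  --environment-name myenvironment \
  --module kubernetes \
  --name mycluster \
  --control-plane-nodes control1.example.com:8090 \
  --worker-nodes worker1.example.com:8090,worker2.example.com:8090

olcnectl module install \
  --environment-name myenvironment \
  --name mycluster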
Setting up the Nodes
This section discusses setting up nodes to use in an Oracle Cloud Native Environment. The nodes are used to form a Kubernetes cluster.
An operator node is used to perform the deployment of the Kubernetes cluster using the Platform CLI and the Platform API Server. An operator node might be a node in the Kubernetes cluster, or a separate host. In examples in this book, the operator node is a separate host, and not part of the Kubernetes cluster.
On each Kubernetes node (both control plane and worker nodes) the Platform Agent must be installed. Before you set up the Kubernetes nodes, you must prepare them. For information on preparing the nodes, see Prerequisites.
During the installation of the required packages, an olcne user is created.
This user is used to start the Platform API Server or Platform Agent services and has the
minimum OS privileges to perform that task. Don't use the olcne user for any
other purpose.
Setting up the Operator Node
This section discusses setting up the operator node. The operator node is a host that's used to perform and manage the deployment of environments, including deploying the Kubernetes cluster.
To set up the operator node:
- On the operator node, install the Platform CLI, Platform API Server, and utilities:

  sudo dnf install olcnectl olcne-api-server olcne-utils

- Enable the olcne-api-server service, but do not start it. The olcne-api-server service is started when you configure the X.509 Certificates.

  sudo systemctl enable olcne-api-server.service

  For information on configuration options for the Platform API Server, see Configuring the Platform API Server.
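After completing these steps, you can optionally confirm the service is enabled without starting it. This is a standard systemd check, not part of the documented procedure:

systemctl is-enabled olcne-api-server.service

The expected output is enabled.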
Setting up Kubernetes Nodes
This section discusses setting up the nodes to use in a Kubernetes cluster. Perform these steps on both Kubernetes control plane and worker nodes.
To set up the Kubernetes nodes:
- On each node to be added to the Kubernetes cluster, install the Platform Agent package and utilities:

  sudo dnf install olcne-agent olcne-utils

- Enable the olcne-agent service, but do not start it. The olcne-agent service is started when you configure the X.509 Certificates.

  sudo systemctl enable olcne-agent.service

  For information on configuration options for the Platform Agent, see Configuring the Platform Agent.

- If the docker service is running, stop and disable it:

  sudo systemctl disable --now docker.service

- If the containerd service is running, stop and disable it:

  sudo systemctl disable --now containerd.service
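After completing these steps, you can optionally confirm the Platform Agent is enabled and the container runtimes are stopped. These are standard systemd checks, not part of the documented procedure:

systemctl is-enabled olcne-agent.service
systemctl is-active docker.service
systemctl is-active containerd.service

The first command should report enabled, and the other two should report inactive.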
Configuring a Proxy Server
If you use a proxy server, configure it with CRI-O on each Kubernetes node.
On each Kubernetes node:
- Create a CRI-O systemd configuration directory:

  sudo mkdir /etc/systemd/system/crio.service.d

- Create a file named proxy.conf in the directory, and add the proxy server information. For example:

  [Service]
  Environment="HTTP_PROXY=http://proxy.example.com:3128"
  Environment="HTTPS_PROXY=https://proxy.example.com:3128"
  Environment="NO_PROXY=mydomain.example.com"

- If you're also installing Calico (as a module or as the Kubernetes Container Network Interface), or the Multus module, add the Kubernetes service IP (the default is 10.96.0.1) to the NO_PROXY variable:

  Environment="NO_PROXY=mydomain.example.com,10.96.0.1"
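If the crio service is already running when you add or change this drop-in file, reload the systemd configuration and restart the service so the proxy settings take effect. This is standard systemd behavior rather than a product-specific step:

sudo systemctl daemon-reload
sudo systemctl restart crio.service

During a fresh installation, CRI-O is started later as part of the Kubernetes module deployment, so the restart isn't needed.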
Setting up a Load Balancer for Highly Available Clusters
A highly available (HA) cluster needs a load balancer to provide high availability of control plane nodes. A load balancer communicates with the Kubernetes API Server on the control plane nodes.
The methods of setting up a load balancer to create an HA cluster are:
- Using an external load balancer instance.
- Using a load balancer provided by a cloud infrastructure, for example an Oracle Cloud Infrastructure load balancer.
- Using the internal load balancer that can be deployed by the Platform CLI on the control plane nodes.
Setting up an External Load Balancer
To use an external load balancer implementation, it must be set up and ready to use before you perform an HA cluster deployment. The load balancer hostname and port are entered as options when you create the Kubernetes module. The load balancer must be set up with the following configuration:
- The listener listening on TCP port 6443.
- The distribution set to round robin.
- The target set to TCP port 6443 on the control plane nodes.
- The health check set to TCP.
For more information on setting up an external load balancer, see Oracle Linux 9: Setting Up Load Balancing or Oracle Linux 8: Setting Up Load Balancing.
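As an illustration only, the following HAProxy configuration sketch satisfies the requirements above (a TCP listener on port 6443, round-robin distribution, and TCP health checks against the control plane nodes). The hostnames are examples, and any load balancer that meets the listed configuration works equally well:

frontend k8s_api
    bind *:6443
    mode tcp
    default_backend k8s_api_nodes

backend k8s_api_nodes
    mode tcp
    balance roundrobin
    # In TCP mode, the check keyword performs a TCP health check
    server control1 control1.example.com:6443 check
    server control2 control2.example.com:6443 check
    server control3 control3.example.com:6443 check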
Setting up a Load Balancer on Oracle Cloud Infrastructure
To set up a load balancer on Oracle Cloud Infrastructure:
- Sign in to Oracle Cloud Infrastructure.
- Create a load balancer.
- Add a backend set to the load balancer using weighted round robin. Set the health check to be TCP port 6443.
- Add the control plane nodes to the backend set. Set the port for the control plane nodes to port 6443.
- Create a listener for the backend set using TCP port 6443.
For more information on setting up a load balancer in Oracle Cloud Infrastructure, see the Oracle Cloud Infrastructure documentation.
Setting up the Internal Load Balancer
Important:
Using the internal load balancer is not recommended for production deployments. Instead, use a correctly configured load balancer that's outside the Kubernetes cluster, for example your own external load balancer, or a load balancer provided by a cloud infrastructure, such as an Oracle Cloud Infrastructure load balancer.
To use the internal load balancer deployed by the Platform CLI, you need to perform the following steps to prepare the control plane nodes.
To prepare control plane nodes for the load balancer deployed by the Platform CLI:
- Set up the control plane nodes as described in Setting up Kubernetes Nodes.

- Use the --virtual-ip option when creating the Kubernetes module to nominate a virtual IP address that can be used for the primary control plane node. This IP address must not be in use on any node, and is assigned dynamically to the control plane node assigned as the primary controller by the load balancer. If the primary node fails, the load balancer reassigns the virtual IP address to another control plane node, and that node, in turn, becomes the primary node.

  Tip:

  If you're deploying to Oracle Cloud Infrastructure virtual instances, you can assign a secondary private IP address to the VNIC on a control plane node to create a virtual IP address. Ensure you list this control plane node first when creating the Kubernetes module. For more information on secondary private IP addresses, see the Oracle Cloud Infrastructure documentation.

- On each control plane node, open port 6444. When you use a virtual IP address, the Kubernetes API server port is changed from the default of 6443 to 6444. The load balancer listens on port 6443, receives the requests, and passes them to the Kubernetes API server.

  sudo firewall-cmd --add-port=6444/tcp
  sudo firewall-cmd --add-port=6444/tcp --permanent

- On each control plane node, enable the Virtual Router Redundancy Protocol (VRRP):

  sudo firewall-cmd --add-protocol=vrrp
  sudo firewall-cmd --add-protocol=vrrp --permanent
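After completing these steps, you can optionally confirm both firewall changes are in the permanent configuration on each control plane node. This check is an assumption on our part, not a documented step:

sudo firewall-cmd --permanent --list-ports
sudo firewall-cmd --permanent --list-protocols

The output should include 6444/tcp and vrrp respectively.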
Setting up Certificates for Kubernetes Nodes
Communication between the Kubernetes nodes is secured using X.509 certificates.
Before you deploy Kubernetes, you need to configure the X.509 certificates used to manage the communication between the nodes. You can use:
- Vault: The certificates are managed using the HashiCorp Vault secrets manager. Certificates are created during the deployment of the Kubernetes module. You need to create a token authentication method for Oracle Cloud Native Environment.
- CA Certificates: Using certificates signed by a trusted Certificate Authority (CA), and copied to each Kubernetes node before the deployment of the Kubernetes module. These certificates are unmanaged and must be renewed and updated manually.
- Private CA Certificates: Using generated certificates, signed by a private CA you set up, and copied to each Kubernetes node before the deployment of the Kubernetes module. These certificates are unmanaged and must be renewed and updated manually. A script is provided to help you set this up.
A software-based secrets manager is recommended to manage these certificates. The HashiCorp Vault secrets manager can be used to generate, assign, and manage the certificates. We recommend you implement an instance of Vault, setting up the appropriate security for the environment.
For more information on installing and setting up Vault, see the HashiCorp documentation.
If you don't want to use Vault, you can use certificates, signed by a trusted CA, and copied to each node. A script is provided to generate a private CA to generate certificates for each node. This script also gives you the commands needed to copy the certificates to the nodes.
Setting up Vault Authentication
To configure Vault for use with Oracle Cloud Native Environment, set up a Vault token with the following properties:
- A PKI secret engine with a CA certificate or intermediate, at olcne_pki_intermediary.
- A role under that PKI, named olcne, configured to not require a common name, and to allow any name.
- A token authentication method and policy that attaches to the olcne role and can request certificates.
For information on setting up the Vault PKI secrets engine to generate dynamic X.509 certificates, see:
https://developer.hashicorp.com/vault/docs/secrets/pki
For information on creating Vault tokens, see:
https://developer.hashicorp.com/vault/docs/commands/token/create
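As a sketch of what such a Vault setup can look like, assuming a Vault instance that's already initialized and unsealed, the following commands enable a PKI secrets engine at the expected path, create the olcne role, and issue a token. The details of generating or importing the CA certificate are omitted; see the HashiCorp documentation linked above:

# Enable a PKI secrets engine at the path Oracle Cloud Native Environment expects
vault secrets enable -path=olcne_pki_intermediary pki

# Create the olcne role: no common name required, any name allowed
vault write olcne_pki_intermediary/roles/olcne \
    require_cn=false allow_any_name=true

# Policy allowing certificate requests against the olcne role
vault policy write olcne - <<EOF
path "olcne_pki_intermediary/issue/olcne" {
  capabilities = ["create", "update"]
}
EOF

# Create a token attached to the policy
vault token create -policy=olcne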
Setting up CA Certificates
This section shows you how to use certificates signed by a trusted CA, without using a secrets manager such as Vault. To use certificates, copy them to all Kubernetes nodes, and to the Platform API Server node.
To ensure the Platform Agent on each Kubernetes node and the Platform API Server have access to the certificates, copy them into the /etc/olcne/certificates/ directory on each node. This is the default location of the certificates, and is used when setting up the Platform Agent and Platform API Server, and when creating an environment. You can use another location for the certificates, but you then need to specify the location of each certificate and key when setting up the services and the environment.
Tip:
You can use the olcnectl certificates copy command to copy the
certificates to the Kubernetes nodes.
The default location for certificates and keys, and the location used in this book, is the
/etc/olcne/certificates/ directory on each node.
- CA Certificate: /etc/olcne/certificates/ca.cert
- Node Key: /etc/olcne/certificates/node.key
- Node Certificate: /etc/olcne/certificates/node.cert
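For example, assuming the olcnectl certificates copy command mentioned in the Tip accepts the same --nodes and --cert-dir options as the other certificates subcommands shown later in this chapter, certificates generated on the operator node can be copied to the default location on each node with:

olcnectl certificates copy \
  --nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
  --cert-dir $HOME/certificates/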
Setting up Private CA Certificates
This section shows you how to create a private CA and use it to generate signed certificates for the nodes. It also includes information on copying the certificates to the nodes, and on generating certificates for any nodes you later scale into a Kubernetes cluster.
Distribute Node Certificates
Use the Platform CLI to generate and distribute signed certificates to the Kubernetes nodes. You can provide a CA Certificate, or have one created automatically. These certificates are used when starting the Platform API Server and Platform Agent on nodes to secure communication between the Kubernetes nodes.
Before you create the certificates, ensure the user on the operator node that's creating
the certificates is a member of the olcne group. Use the syntax:
sudo usermod -a -G olcne username
For example, to add the olcne group to the oracle user,
on the operator node run:
sudo usermod -a -G olcne oracle
Important:
When you add a group to a user, you must log out of the terminal session, and back in again. This is required to apply the change.
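After logging back in, you can confirm the group change took effect with a standard Linux check:

id oracle

The output should include olcne in the list of groups.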
On the operator node, use the olcnectl certificates distribute command to generate and distribute a private CA and certificates to the Kubernetes nodes. The syntax to use is:
olcnectl certificates distribute
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[{-h|--help}]
[{-n|--nodes} nodes]
[--one-cert]
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
Provide the nodes for which you want to create certificates using the
--nodes option. Create a certificate for each node that runs the Platform
API Server or Platform Agent. This means you must create certificates for the operator node,
and for each Kubernetes node.
Note:
If you're deploying a highly available Kubernetes cluster using a virtual IP address, you don't need to create a certificate for a virtual IP address.
For example:
olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com
To set up the information for the private CA, use the --cert-request-*
options. The following example also includes options to set the SSH login information for
nodes using the --ssh-* options.
olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
--cert-request-common-name cloud.example.com \
--cert-request-country US \
--cert-request-locality "My Town" \
--cert-request-organization "My Company" \
--cert-request-organization-unit "My Company Unit" \
--cert-request-state "My State" \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle
If you have a CA Certificate and private key to use to generate the certificates, you can provide them using the --byo-ca-cert and --byo-ca-key options, for example:
olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
--byo-ca-cert $HOME/certificates/ca/ca.cert \
--byo-ca-key $HOME/certificates/ca/ca.key
The certificate files for each node are generated and saved in the $HOME/certificates/ directory, in a directory for
each hostname. If you used the --cert-dir option, they're saved to the
directory you specified.
The CA Certificate and private key are generated into the $HOME/certificates/ca/ directory. If you used the --cert-dir option, they're saved to the directory you specified. If you provided a CA Certificate and key using the --byo-ca-* options, no new CA files are created or saved.
The certificates for each node are copied to the /etc/olcne/certificates/ directory on each Kubernetes
node using SSH. This is the default directory for the location of the certificates when you
start the Platform API Server and Platform Agent.
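As an optional verification (not a documented step), you can list the certificates copied to a node, using the SSH login name from the earlier examples:

ssh oracle@control1.example.com ls /etc/olcne/certificates/

The output should list ca.cert, node.cert, and node.key.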
Distribute Extra Node Certificates
Use the Platform CLI to generate and distribute signed certificates to the nodes using an existing CA.
This is useful to generate and distribute certificates for any extra nodes that you want to add to a Kubernetes cluster.
On the operator node, use the olcnectl certificates distribute command to generate and distribute certificates to the extra Kubernetes nodes. The syntax to use is:
olcnectl certificates distribute
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[{-h|--help}]
[{-n|--nodes} nodes]
[--one-cert]
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
Provide the nodes for which you want to create certificates using the
--nodes option.
Provide the location of the existing CA Certificate using the
--byo-ca-cert option.
You can use the same CA certificate and private key you used to generate the Kubernetes
node certificates by using the --byo-ca-cert and
--byo-ca-key options.
For example:
olcnectl certificates distribute \
--nodes worker3.example.com,worker4.example.com \
--byo-ca-cert $HOME/certificates/ca/ca.cert \
--byo-ca-key $HOME/certificates/ca/ca.key
The location of the CA Certificate and private key might be different if you used the
--cert-dir option of the olcnectl certificates
distribute command when creating the original certificates.
The certificate files for each node are generated and saved in the $HOME/certificates/ directory, in a directory for
each hostname.
The certificates for each node are copied to the /etc/olcne/certificates/ directory on each Kubernetes
node using SSH. This is the default directory for the location of the certificates when you
start the Platform API Server and Platform Agent.
Setting up Certificates for the Platform CLI to the Platform API Server
Communication between the Platform CLI and Platform API Server is secured using X.509 certificates.
We recommend you configure the X.509 certificates used to manage the communication between the Platform CLI and the Platform API Server. You can use:
- Vault: The certificates are managed using the HashiCorp Vault secrets manager.
- CA Certificates: Using a CA Certificate, signed by a trusted Certificate Authority.
- Private CA Certificates: Using generated certificates, signed by a private CA you set up. These certificates are unmanaged and must be renewed and updated manually.
The certificates need to be on the node where the Platform CLI runs (most likely the operator node). The name of the directory reflects the name of the host running the Platform API Server (again, most likely the operator node). The format of the directory for the certificates must be:
$HOME/.olcne/certificates/api_server_hostname:port
The api_server_hostname is the hostname of the node where the Platform API
Server is installed, and the port is the port to access the Platform API
Server (the default is 8091). This is the default directory for the Platform
API Server keys, and where the Platform CLI checks for access to the Platform API Server
without any extra configuration. For example:
$HOME/.olcne/certificates/operator.example.com:8091
Generate Certificates for the Platform CLI to the Platform API Server
Use the Platform CLI to generate signed certificates for the Platform CLI to access the Platform API Server.
On the operator node, use the
olcnectl certificates generate command to generate certificates for the
Platform CLI to access the Platform API Server. The syntax to use
is:
olcnectl certificates generate
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[{-h|--help}]
{-n|--nodes} nodes
[--one-cert]
The --cert-dir option sets the location where the certificates are to be saved. We recommend using the following format for the directory:
$HOME/.olcne/certificates/api_server_hostname:port
The api_server_hostname is the hostname of the node where the Platform API Server is installed, and the port is the port to access the Platform API Server (the default is 8091). This is the default directory for the Platform API Server keys, and where the Platform CLI checks for access to the Platform API Server without any extra configuration.
Important:
This is the path to specify the location of the certificates for the Platform CLI to
access the Platform API Server when you create an environment using the olcnectl
environment create command.
The --nodes option must be set to the hostname and IP address of
the Platform API Server, as shown:
--nodes api_server_hostname,api_server_ip_address
Use the --one-cert option to save the certificates for the hostname and for
the IP address to a single file.
You can use the same CA certificate and private key
you used to generate the Kubernetes node certificates by using the
--byo-ca-cert and --byo-ca-key options.
For example:
olcnectl certificates generate \
--nodes operator.example.com,127.0.0.1 \
--cert-dir $HOME/.olcne/certificates/operator.example.com:8091/ \
--byo-ca-cert $HOME/certificates/ca/ca.cert \
--byo-ca-key $HOME/certificates/ca/ca.key \
--one-cert
In this example, the certificate files
(node.cert, node.csr, and node.key) are
created and saved in the directory:
$HOME/.olcne/certificates/operator.example.com:8091/
Copy the CA Certificate you used to generate the keys to this directory. For example:
cp $HOME/certificates/ca/ca.cert $HOME/.olcne/certificates/operator.example.com:8091/
The Platform CLI is set up to connect securely to the Platform API Server.
Setting up Certificates for the externalIPs Kubernetes Service
When you deploy Kubernetes, a service is deployed to the cluster that controls access to
externalIPs in Kubernetes services. The service is named
externalip-validation-webhook-service and runs in the
externalip-validation-system namespace. This Kubernetes service requires
X.509 certificates be set up before deploying Kubernetes. You can use Vault to generate the
certificates, or use existing certificates for this purpose. You can also generate
certificates using the Platform CLI. The certificates must be available on the operator node.
The examples in this book use the
/etc/olcne/certificates/restrict_external_ip/ directory for
these certificates.
Setting up Vault Certificates
You can use Vault to generate a certificates for the
externalIPs Kubernetes service. The Vault
instance must be configured in the same way as described in
Setting up Vault Authentication.
You need to generate certificates for two nodes, named:
externalip-validation-webhook-service.externalip-validation-system.svc
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local
The certificate information must be generated in PEM format.
For example:
vault write olcne_pki_intermediary/issue/olcne \
alt_names=externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
format=pem_bundle

The output is displayed. Look for the section that starts with certificate.
This section contains the certificates for the node names (set with the
alt_names option). Save the output in this section to a file named
node.cert. The file looks similar to:
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAymg8uHy+mpwlelCyC4WrnfLwUmJ5vZmSos85QnIlZvyycUPK
...
X3c8LNaJDfQx1wKfTc/c0czBhHYxgwfau0G6wjqScZesPi2xY0xyslE=
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIID2TCCAsGgAwIBAgIUZ/M/D7bAjhyGx7DivsjBb9oeLhAwDQYJKoZIhvcNAQEL
...
9bRwnen+JrxUn4GV59GtsTiqzY6R2OKPm+zLl8E=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDnDCCAoSgAwIBAgIUMapl4aWnBXE/02qTW0zOZ9aQVGgwDQYJKoZIhvcNAQEL
...
kV8w2xVXXAehp7cg0BakVA==
-----END CERTIFICATE-----
Look for the section that starts with issuing_ca. This section contains the
CA certificate. Save the output in this section to a file named ca.cert. The
file looks similar to:
-----BEGIN CERTIFICATE-----
MIIDnDCCAoSgAwIBAgIUMapl4aWnBXE/02qTW0zOZ9aQVGgwDQYJKoZIhvcNAQEL
...
kV8w2xVXXAehp7cg0BakVA==
-----END CERTIFICATE-----
Look for the section that starts with private_key. This section contains
the private key for the node certificates. Save the output in this section to a file named
node.key. The file looks similar to:
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAymg8uHy+mpwlelCyC4WrnfLwUmJ5vZmSos85QnIlZvyycUPK
...
X3c8LNaJDfQx1wKfTc/c0czBhHYxgwfau0G6wjqScZesPi2xY0xyslE=
-----END RSA PRIVATE KEY-----
Copy the three files (node.cert, ca.cert, and node.key) to the operator node and set the ownership of the files as described in Setting up CA Certificates.
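As an alternative to copying the sections by hand, and assuming the jq utility is available, you can request the certificate in Vault's JSON output format and extract each field directly. Note this sketch saves the node certificate on its own, rather than as the PEM bundle shown above:

vault write -format=json olcne_pki_intermediary/issue/olcne \
    alt_names="externalip-validation-webhook-service.externalip-validation-system.svc,externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local" \
    > response.json

# Extract the certificate, issuing CA, and private key into the expected files
jq -r '.data.certificate' response.json > node.cert
jq -r '.data.issuing_ca' response.json > ca.cert
jq -r '.data.private_key' response.json > node.key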
Setting up CA Certificates
If you're using existing certificates, copy them to a directory under /etc/olcne/certificates/ on the operator node. For example:
- CA Certificate: /etc/olcne/certificates/restrict_external_ip/ca.cert
- Node Key: /etc/olcne/certificates/restrict_external_ip/node.key
- Node Certificate: /etc/olcne/certificates/restrict_external_ip/node.cert
Copy these certificates to a different location on the operator node than the certificates and keys used for the Kubernetes nodes as set up in Setting up Certificates for Kubernetes Nodes. This makes sure you don't overwrite those certificates and keys. You need to generate certificates for two nodes, named:
externalip-validation-webhook-service.externalip-validation-system.svc
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local
Save the certificates for these two nodes as a single file named node.cert.
Ensure the directory where the certificates are saved can be read by the user on the operator node that you intend to use to run the olcnectl commands to install Kubernetes.

sudo chown -R username:username /etc/olcne/certificates/restrict_external_ip/

Setting up Private CA Certificates
You can use the Platform CLI to generate the certificates.
Generate Certificates for ExternalIPs Service
Use the Platform CLI to generate signed certificates for the Kubernetes ExternalIPs service.
On the operator node, use the
olcnectl certificates generate command to generate certificates for the
Kubernetes ExternalIPs service. The syntax to use
is:
olcnectl certificates generate
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[{-h|--help}]
{-n|--nodes} nodes
[--one-cert]
The --cert-dir option sets the location where the certificates are to be
saved. We recommend you use the
directory:
$HOME/certificates/restrict_external_ip/
The directory you use must be readable and writable by the user you intend to use to run the olcnectl commands to install Kubernetes.
Important:
This is the path to specify the location of the ExternalIPs Kubernetes service
certificates when you create the Kubernetes module using the olcnectl module
create command.
The --nodes option must be set to the name of the Kubernetes
service, as shown:
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local
Use the --one-cert option to save the certificates for the two service
names to a single file.
You can use the same CA certificate and private key you used
to generate the Kubernetes node certificates by using the --byo-ca-cert and
--byo-ca-key options.
For example:
olcnectl certificates generate \
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
--cert-dir $HOME/certificates/restrict_external_ip/ \
--byo-ca-cert $HOME/certificates/ca/ca.cert \
--byo-ca-key $HOME/certificates/ca/ca.key \
--one-cert
In this example, the certificates are created and saved in the directory:
$HOME/certificates/restrict_external_ip/
Copy the CA certificate you used to generate the certificates into this directory. If you used all the defaults and suggested paths, the following command copies the file:
cp $HOME/certificates/ca/ca.cert $HOME/certificates/restrict_external_ip/
The certificates are set up for the Kubernetes ExternalIPs service.
Starting the Platform API Server and Platform Agent Services
This section discusses using certificates to set up secure communication between the Platform API Server and the Platform Agent on nodes in the cluster. You can set up secure communication using certificates managed by Vault, or using certificates copied to each node. You must configure the Platform API Server and the Platform Agent to use the certificates when you start the services.
For information on setting up the certificates with Vault, see Setting up Certificates for Kubernetes Nodes.
For information on creating a private CA to sign certificates that can be used during testing, see Setting up Private CA Certificates.
Starting the Services Using Vault
This section shows you how to set up the Platform API Server and Platform Agent services to use certificates managed by Vault.
To set up and start the services using Vault:
- On the operator node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform API Server to retrieve and use a Vault certificate. Use the bootstrap-olcne.sh --help command for a list of options for this script. For example:

  sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type vault \
  --vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
  --vault-address https://192.0.2.20:8200 \
  --force-download-certs \
  --olcne-component api-server

  The certificates are generated and downloaded from Vault.

  By default, the certificates are saved to the /etc/olcne/certificates/ directory. You can optionally specify a path for the certificates, for example, by including the following options in the bootstrap-olcne.sh command:

  --olcne-ca-path /path/ca.cert \
  --olcne-node-cert-path /path/node.cert \
  --olcne-node-key-path /path/node.key \

  The Platform API Server is configured to use the certificates, and started. You can confirm the service is running using:

  systemctl status olcne-api-server.service

- On each Kubernetes node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform Agent to retrieve and use a certificate. For example:

  sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type vault \
  --vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
  --vault-address https://192.0.2.20:8200 \
  --force-download-certs \
  --olcne-component agent

  The certificates are generated and downloaded from Vault.

  By default, the certificates are saved to the /etc/olcne/certificates/ directory. You can optionally specify a path for the certificates, for example, by including the following options in the bootstrap-olcne.sh command:

  --olcne-ca-path /path/ca.cert \
  --olcne-node-cert-path /path/node.cert \
  --olcne-node-key-path /path/node.key \

  The Platform Agent is configured to use the certificates, and started. You can confirm the service is running using:

  systemctl status olcne-agent.service
Starting the Services Using Certificates
This section shows you how to set up the Platform API Server and Platform Agent services to
use certificates which have been copied to each node. This example assumes the certificates
are available on all nodes in the /etc/olcne/certificates/ directory.
To set up and start the services using certificates:
- On the operator node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform API Server to use the certificates. Use the bootstrap-olcne.sh --help command for a list of options for this script. For example:

  sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type file \
  --olcne-component api-server

  If the certificates for Kubernetes nodes are in a directory other than /etc/olcne/certificates/ (the default), include the location of the certificates. The --olcne-node-cert-path, --olcne-ca-path, and --olcne-node-key-path options set the location of the certificate files. For example:

  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \

  The Platform API Server is configured to use the certificates, and started. You can confirm the service is running using:

  systemctl status olcne-api-server.service

- On each Kubernetes node, use the /etc/olcne/bootstrap-olcne.sh script to configure the Platform Agent to use the certificates. For example:

  sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type file \
  --olcne-component agent

  If the certificates for Kubernetes nodes are in a directory other than /etc/olcne/certificates/ (the default), include the location of the certificates. The --olcne-node-cert-path, --olcne-ca-path, and --olcne-node-key-path options set the location of the certificate files. For example:

  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \

  The Platform Agent is configured to use the certificates, and started. You can confirm the service is running using:

  systemctl status olcne-agent.service