2 Prerequisites
This chapter describes the prerequisites for the systems to be used in an installation of Oracle Cloud Native Environment. This chapter also discusses how to enable the repositories to install the Oracle Cloud Native Environment packages.
Enabling Access to the Software Packages
This section contains information on setting up the locations for the OS on which you want to install the Oracle Cloud Native Environment software packages.
Oracle Linux 9
The Oracle Cloud Native Environment packages for Oracle Linux 9 are available on the Oracle Linux yum server in the ol9_olcne18 repository, or on the Unbreakable Linux Network (ULN) in the ol9_x86_64_olcne18 channel.
However, dependencies exist across other repositories and channels, and these must also be enabled on each system where Oracle Cloud Native Environment is installed.
Attention:
Ensure the ol9_developer and ol9_developer_EPEL yum repositories or ULN channels aren't enabled, and that no software from these repositories or channels is installed on the systems where Kubernetes runs. Even if you follow the instructions in this document, you might render the platform unstable if these repositories or channels are enabled, or if software from them is installed on the systems.
Enabling Repositories with the Oracle Linux Yum Server
If you're using the Oracle Linux yum server for system updates, enable the required yum repositories.
To enable the yum repositories:
- Install the oracle-olcne-release-el9 release package to install the Oracle Cloud Native Environment yum repository configuration:
sudo dnf install oracle-olcne-release-el9
- Set up the repositories for the release you want to install. Enable the following yum repositories:
  - ol9_olcne18
  - ol9_addons
  - ol9_baseos_latest
  - ol9_appstream
  - ol9_UEKR7 (if hosts are running UEK R7)
Use the dnf config-manager tool to enable the yum repositories. For hosts running UEK R7:
sudo dnf config-manager --enable ol9_olcne18 ol9_addons ol9_baseos_latest ol9_appstream ol9_UEKR7
For hosts running RHCK:
sudo dnf config-manager --enable ol9_olcne18 ol9_addons ol9_baseos_latest ol9_appstream
- Disable the yum repositories for previous Oracle Cloud Native Environment releases:
sudo dnf config-manager --disable ol9_olcne17
- Disable any developer yum repositories. To list the developer repositories that need to be disabled, use the dnf repolist command:
sudo dnf repolist --enabled | grep developer
Disable the repositories returned using the dnf config-manager tool. For example:
sudo dnf config-manager --disable ol9_developer
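As an optional final check, you can list the enabled repositories to confirm that only the intended ones are active. The grep pattern below simply matches the repository names enabled in the previous steps:
sudo dnf repolist --enabled | grep -E 'olcne|addons|baseos|appstream|UEKR'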
Enabling Channels with ULN
If you're registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.
To subscribe to the ULN channels:
- Sign in to https://linux.oracle.com.
- On the Systems tab, click the link named for the system in the list of registered machines.
- On the System Details page, click Manage Subscriptions.
- On the System Summary page, select each required channel from the list of available channels and click the arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels:
  - ol9_x86_64_olcne18
  - ol9_x86_64_addons
  - ol9_x86_64_baseos_latest
  - ol9_x86_64_appstream
  - ol9_x86_64_UEKR7 (if hosts are running UEK R7)
Ensure the systems aren't subscribed to the ol9_x86_64_olcne17 or ol9_x86_64_developer channels.
Oracle Linux 8
The Oracle Cloud Native Environment packages for Oracle Linux 8 are available on the Oracle Linux yum server in the ol8_olcne18 repository, or on the Unbreakable Linux Network (ULN) in the ol8_x86_64_olcne18 channel.
However, dependencies exist across other repositories and channels, and these must also be enabled on each system where Oracle Cloud Native Environment is installed.
Attention:
Ensure the ol8_developer and ol8_developer_EPEL yum repositories or ULN channels aren't enabled, and that no software from these repositories or channels is installed on the systems where Kubernetes runs. Even if you follow the instructions in this document, you might render the platform unstable if these repositories or channels are enabled, or if software from them is installed on the systems.
Enabling Repositories with the Oracle Linux Yum Server
If you're using the Oracle Linux yum server for system updates, enable the required yum repositories.
To enable the yum repositories:
- Install the oracle-olcne-release-el8 release package to install the Oracle Cloud Native Environment yum repository configuration:
sudo dnf install oracle-olcne-release-el8
- Set up the repositories for the release you want to install. Enable the following yum repositories:
  - ol8_olcne18
  - ol8_addons
  - ol8_baseos_latest
  - ol8_appstream
  - ol8_kvm_appstream
  - ol8_UEKR7 (if hosts are running UEK R7)
  - ol8_UEKR6 (if hosts are running UEK R6)
Use the dnf config-manager tool to enable the yum repositories. For hosts running UEK R7:
sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7
For hosts running UEK R6:
sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR6
For hosts running RHCK:
sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream
- Disable the yum repositories for previous Oracle Cloud Native Environment releases:
sudo dnf config-manager --disable ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12
- Disable any developer yum repositories. To list the developer repositories that need to be disabled, use the dnf repolist command:
sudo dnf repolist --enabled | grep developer
Disable the repositories returned using the dnf config-manager tool. For example:
sudo dnf config-manager --disable ol8_developer
Enabling Channels with ULN
If you're registered to use ULN, use the ULN web interface to subscribe the system to the appropriate channels.
To subscribe to the ULN channels:
- Sign in to https://linux.oracle.com.
- On the Systems tab, click the link named for the system in the list of registered machines.
- On the System Details page, click Manage Subscriptions.
- On the System Summary page, select each required channel from the list of available channels and click the arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels:
  - ol8_x86_64_olcne18
  - ol8_x86_64_addons
  - ol8_x86_64_baseos_latest
  - ol8_x86_64_appstream
  - ol8_x86_64_kvm_appstream
  - ol8_x86_64_UEKR7 (if hosts are running UEK R7)
  - ol8_x86_64_UEKR6 (if hosts are running UEK R6)
Ensure the systems aren't subscribed to the following channels:
  - ol8_x86_64_developer
  - ol8_x86_64_olcne17
  - ol8_x86_64_olcne16
  - ol8_x86_64_olcne15
  - ol8_x86_64_olcne14
  - ol8_x86_64_olcne13
  - ol8_x86_64_olcne12
Accessing the Oracle Container Registry
The container images that are deployed by the Platform CLI are hosted on the Oracle Container Registry. For more information about the Oracle Container Registry, see the Oracle® Linux: Oracle Container Runtime for Docker User's Guide.
For a deployment to use the Oracle Container Registry, each node within the environment must be provisioned with direct access to the Internet.
You can optionally use an Oracle Container Registry mirror, or create a private registry mirror within the network.
When you create a Kubernetes module, you must specify the registry from which to pull the container images. This is set using the --container-registry option of the olcnectl module create command. If you use the Oracle Container Registry, the container registry must be set to:
container-registry.oracle.com/olcne
If you use a private registry that mirrors the Oracle Cloud Native Environment container images on the Oracle Container Registry, ensure you set the container registry to the domain name and port of the private registry, for example:
myregistry.example.com:5000/olcne
When you set the container registry to use during an installation, it becomes the default registry from which to pull images during updates and upgrades of the Kubernetes module. You can set a new default value during an update or upgrade using the --container-registry option.
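For illustration, the following sketch shows how the --container-registry option might be passed when creating a Kubernetes module. The environment name myenvironment, module name mycluster, and the structure of the other options are placeholders for this example only; use the values appropriate to the deployment:
olcnectl module create \
  --environment-name myenvironment \
  --module kubernetes \
  --name mycluster \
  --container-registry container-registry.oracle.com/olcne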
Using an Oracle Container Registry Mirror
The Oracle Container Registry has many mirror servers around the world. You can use a registry mirror in a global region to improve download performance of container images. While the Oracle Container Registry mirrors are hosted on Oracle Cloud Infrastructure, they're also accessible external to Oracle Cloud Infrastructure. Using a mirror that's closest to a geographical location results in faster download speeds.
To use an Oracle Container Registry mirror to pull images, use the format:
container-registry-region-key.oracle.com/olcne
For example, to use the Oracle Container Registry mirror in the US East (Ashburn) region, which has a region key of IAD, the registry is set (using the --container-registry option) to:
container-registry-iad.oracle.com/olcne
For more information on Oracle Container Registry mirrors and finding the region key for a mirror in a location, see the Oracle Cloud Infrastructure documentation.
Using a Private Registry
Sometimes, nodes within an environment might not be provisioned with direct access to the Internet. In these cases, you can use a private registry that mirrors the Oracle Cloud Native Environment container images on the Oracle Container Registry. Each node requires direct access to the mirror registry host in this scenario.
You can use an existing container registry in a network, or create a private registry using Podman on an Oracle Linux 8 host. If you use an existing private container registry, skip the first step in the following procedure that creates a registry.
To create a private registry:
- Select an Oracle Linux 8 host to use for the Oracle Container Registry mirror service. The mirror host must have access to the Internet and be able to pull images directly from the Oracle Container Registry, or alternatively, have access to the correct image files stored locally. Ideally, the host isn't a node within an Oracle Cloud Native Environment, but is accessible to all nodes that are part of the environment.
On the mirror host, install Podman, and set up a private registry, following the instructions in the Setting up a Local Container Registry section in Oracle® Linux: Podman User's Guide.
- On the mirror host, enable access to the Oracle Cloud Native Environment software packages. For information on enabling access to the packages, see Enabling Access to the Software Packages.
- Install the olcne-utils package so you have access to the registry mirroring utility:
sudo dnf install olcne-utils
If you're using an existing container registry in the network that's running on Oracle Linux 7, use yum instead of dnf to install olcne-utils.
- Copy the required container images from the Oracle Container Registry to the private registry using the registry-image-helper.sh script with the required options:
registry-image-helper.sh --to host.example.com:5000/olcne
Where host.example.com:5000 is the resolvable domain name and port on which the private registry is available.
You can optionally use the --from option to specify a different registry from which to pull the images. For example, to pull the images from an Oracle Container Registry mirror:
registry-image-helper.sh \
  --from container-registry-iad.oracle.com/olcne \
  --to host.example.com:5000/olcne
If the host where you're running the script doesn't have access to the Internet, you can replace the --from option with the --local option to load the container images directly from a local directory. The local directory that contains the images can be either:
  - /usr/local/share/kubeadm/
  - /usr/local/share/olcne/
The image files must be archives in TAR format. All TAR files in the directory are loaded into the private registry when the script is run with the --local option.
You can use the --version option to specify the Kubernetes version you want to mirror. If not specified, the latest release is used. The available versions you can pull are listed in the Release Notes.
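As an optional sanity check after the copy completes, you can query the private registry's catalog to confirm the images are present. This assumes the registry exposes the standard Docker Registry HTTP API v2 and uses the example address from the previous step (the -k flag skips certificate verification for self-signed certificates):
curl -sk https://host.example.com:5000/v2/_catalog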
Setting up the OS
The following sections describe the requirements that must be met to install and configure Oracle Cloud Native Environment on Oracle Linux systems.
Setting up a Network Time Service
As a clustering environment, Oracle Cloud Native Environment requires that the system time is synchronized across each Kubernetes control plane and worker node within the cluster. Typically, this can be achieved by installing and configuring a Network Time Protocol (NTP) daemon on each node. We recommend installing and setting up the chronyd daemon for this purpose.
The chronyd service is enabled and started by default on Oracle Linux systems.
Systems running on Oracle Cloud Infrastructure are configured to use the chronyd time service by default, so you don't need to add or configure NTP if you're installing into an Oracle Cloud Infrastructure environment.
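If you want to confirm that time synchronization is working on a node, a quick optional check (assuming chronyd is the NTP daemon in use, as recommended above) is:
sudo systemctl status chronyd
sudo chronyc tracking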
Disabling Swap
You must disable swap on the Kubernetes control plane and worker nodes. To disable swap, enter:
sudo swapoff -a
To make this permanent over reboots, edit the /etc/fstab file to remove or comment out any swap disks. For example, you can use commands similar to those shown in the following steps:
- Check the contents of the /etc/fstab file before any change:
sudo cat /etc/fstab
- Make a backup of /etc/fstab:
sudo cp /etc/fstab /etc/fstab_copy
- Comment out swap disks in the /etc/fstab file:
sudo sed -i '/\bswap\b/s/^/#/' /etc/fstab
- Check the contents of /etc/fstab after the change:
sudo cat /etc/fstab
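To confirm that no swap devices remain active after the change and a reboot, the following optional check should produce no output on a correctly configured node:
sudo swapon --show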
Configuring Access for the Platform Agent
During the installation of the required packages, an olcne user is created for the Platform Agent on each of the Kubernetes nodes.
You must ensure the olcne user can do the following on each Kubernetes node:
- Use sudo without being required to log in to a real terminal session (tty).
- Use sudo to run scripts in /etc/olcne/scripts without supplying a password for authentication.
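A minimal sketch of sudoers rules that would satisfy these two requirements is shown below. This is illustrative only, not the configuration shipped with the packages; adapt it to the site's security policy, for example by placing it in a file such as /etc/sudoers.d/olcne and editing it with visudo:
# Hypothetical sudoers entries for the olcne user:
# allow sudo without a real terminal session
Defaults:olcne !requiretty
# allow passwordless execution of the Platform Agent scripts
olcne ALL=(ALL) NOPASSWD: /etc/olcne/scripts/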
Setting up the Network
This section contains information about the networking requirements for Oracle Cloud Native Environment nodes.
The following table shows the network ports used by the services in a deployment of Kubernetes in an environment.
From Node Type | To Node Type | Port | Protocol | Reason |
---|---|---|---|---|
Worker | Operator | 8091 | TCP (6) | Platform API Server |
Control plane | Operator | 8091 | TCP (6) | Platform API Server |
Control plane | Control plane | 2379-2380 | TCP (6) | Kubernetes etcd (highly available clusters) |
Operator | Control plane | 6443 | TCP (6) | Kubernetes API server |
Worker | Control plane | 6443 | TCP (6) | Kubernetes API server |
Control plane | Control plane | 6443 | TCP (6) | Kubernetes API server |
Control plane | Control plane | 6444 | TCP (6) | Alternate Kubernetes API server (highly available clusters) |
Operator | Control plane | 8090 | TCP (6) | Platform Agent |
Control plane | Control plane | 10250, 10251, 10252, 10255 | TCP (6) | Kubernetes kubelet API server, kube-scheduler, kube-controller-manager, and kubelet read-only API server |
Control plane | Control plane | 8472 | UDP (17) | Flannel |
Control plane | Worker | 8472 | UDP (17) | Flannel |
Worker | Control plane | 8472 | UDP (17) | Flannel |
Worker | Worker | 8472 | UDP (17) | Flannel |
Control plane | Control plane | N/A | VRRP (112) | Keepalived for Kubernetes API server (highly available clusters) |
Operator | Worker | 8090 | TCP (6) | Platform Agent |
Control plane | Worker | 10250, 10255 | TCP (6) | Kubernetes kubelet API server and kubelet read-only API server |
The following sections show you how to set up the network on each node to enable the communication between nodes in an environment.
Important:
In addition to opening network ports on the operator and Kubernetes nodes, if you're using an external firewall (hardware or software based), ensure the ports in the previous table are open on the external firewall before you perform an installation.
Setting up the Firewall Rules
Oracle Linux installs and enables firewalld by default. You can install Oracle Cloud Native Environment with firewalld enabled, or you can disable it and use another firewall solution. This section shows you how to set up the firewall rules when using firewalld.
Important:
Calico requires the firewalld service to be disabled. If you're installing Calico as the Kubernetes CNI for pods, you don't need to configure the networking ports as shown in this section. See Calico Module for information on disabling firewalld and how to install Calico.
If you want to install with firewalld enabled, the Platform CLI notifies you of any rules that you need to add during the deployment of the Kubernetes module. The Platform CLI also provides the commands to run to change the firewall configuration to meet the requirements.
Ensure that all required ports are open. The ports required for a Kubernetes deployment are:
- 2379/tcp: Kubernetes etcd server client API (on control plane nodes in highly available clusters)
- 2380/tcp: Kubernetes etcd server client API (on control plane nodes in highly available clusters)
- 6443/tcp: Kubernetes API server (control plane nodes)
- 8090/tcp: Platform Agent (control plane and worker nodes)
- 8091/tcp: Platform API Server (operator node)
- 8472/udp: Flannel overlay network, VxLAN backend (control plane and worker nodes)
- 10250/tcp: Kubernetes kubelet API server (control plane and worker nodes)
- 10251/tcp: Kubernetes kube-scheduler (on control plane nodes in highly available clusters)
- 10252/tcp: Kubernetes kube-controller-manager (on control plane nodes in highly available clusters)
- 10255/tcp: Kubernetes kubelet API server for read-only access with no authentication (control plane and worker nodes)
The commands to open the ports and set up the firewall rules are provided in the following sections.
Non-HA Cluster Firewall Rules
For a cluster with a single control plane node, the following ports are required to be open in the firewall.
Operator Node
On the operator node, run:
sudo firewall-cmd --add-port=8091/tcp --permanent
Restart the firewall for these rules to take effect:
sudo systemctl restart firewalld.service
Worker Nodes
On the Kubernetes worker nodes run:
sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
sudo firewall-cmd --add-port=8090/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent
sudo firewall-cmd --add-port=8472/udp --permanent
Restart the firewall for these rules to take effect:
sudo systemctl restart firewalld.service
Control Plane Nodes
On the Kubernetes control plane nodes run:
sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
sudo firewall-cmd --add-port=8090/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent
sudo firewall-cmd --add-port=8472/udp --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent
Restart the firewall for these rules to take effect:
sudo systemctl restart firewalld.service
Highly Available Cluster Firewall Rules
For a highly available cluster, open all the firewall ports as described in Non-HA Cluster Firewall Rules, along with the following extra ports on the control plane nodes.
On the Kubernetes control plane nodes run:
sudo firewall-cmd --add-port=10251/tcp --permanent
sudo firewall-cmd --add-port=10252/tcp --permanent
sudo firewall-cmd --add-port=2379/tcp --permanent
sudo firewall-cmd --add-port=2380/tcp --permanent
Restart the firewall for these rules to take effect:
sudo systemctl restart firewalld.service
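After restarting firewalld on a node, you can optionally confirm that the expected ports are open; the list returned depends on the node's role, as described in the previous sections:
sudo firewall-cmd --list-ports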
Setting up Other Network Options
This section contains information on other network related configuration that affects an Oracle Cloud Native Environment deployment. You might not need to make any changes based on this section, but the information is provided to help you understand any issues you might encounter related to network configuration.
Internet Access
The Platform CLI checks that it can access the container registry, and possibly other Internet resources, to pull any required container images. Unless you intend to set up a local registry mirror for container images, the systems where you intend to install Oracle Cloud Native Environment must either have direct access to the Internet, or must be configured to use a proxy.
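As an optional, illustrative check that a node can reach the Oracle Container Registry (or that a proxy is correctly configured), you can request the registry's API endpoint; any HTTP response code, even 401, indicates the registry is reachable:
curl -sI https://container-registry.oracle.com/v2/ | head -n 1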
nftables Rule on Oracle Linux 9
If you're using Oracle Linux 9 hosts and you have firewalld enabled, you must add an nftables rule on the Kubernetes nodes.
If you don't need to persist the rule over reboots, set up the rule on each Kubernetes node using:
sudo nft add rule inet firewalld filter_FORWARD_POLICIES_post accept
If you do need to persist the rule over host reboots (recommended), set up a rule file and create a systemd service that's enabled. On each Kubernetes node, run the following.
- Create a file named /etc/nftables/forward-policies.nft that contains the rule by entering:
sudo sh -c "cat > /etc/nftables/forward-policies.nft << EOF
flush chain inet firewalld filter_FORWARD_POLICIES_post
table inet firewalld {
  chain filter_FORWARD_POLICIES_post {
    accept
  }
}
EOF"
- Create a file named /etc/systemd/system/forward-policies.service that contains the systemd service by entering:
sudo sh -c "cat > /etc/systemd/system/forward-policies.service << EOF
[Unit]
Description=Idempotent nftables rules for forward-policies
PartOf=firewalld.service

[Service]
ExecStart=/sbin/nft -f /etc/nftables/forward-policies.nft
ExecReload=/sbin/nft -f /etc/nftables/forward-policies.nft
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF"
- Enable and restart the rule service:
sudo systemctl enable forward-policies.service
sudo systemctl restart forward-policies.service
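To confirm the rule is in place after the service starts, you can list the chain; the table and chain names match those used in the rule file above:
sudo nft list chain inet firewalld filter_FORWARD_POLICIES_post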
Flannel Network
The Platform CLI configures a Flannel network as the network fabric used for communications between Kubernetes pods. This overlay network uses VxLANs for network connectivity. For more information on Flannel, see the upstream documentation.
By default, the Platform CLI creates a network in the 10.244.0.0/16 range to host this network. The Platform CLI provides an option to set the network range to another range, if required, during installation. Systems in an Oracle Cloud Native Environment deployment must not have any network devices configured for this reserved IP range.
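An optional way to check a host for addresses that would clash with the default range is shown below; it should produce no output on a correctly configured host. Adjust the pattern if you choose a different range at installation time:
ip -4 addr show | grep 'inet 10\.244\.'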
br_netfilter Module
The Platform CLI checks whether the br_netfilter module is loaded and exits if it's not available. This module is required to enable transparent masquerading and to support Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster. If you need to check whether it's loaded, run:
sudo lsmod | grep br_netfilter
If you see output similar to the following, the br_netfilter module is loaded.
br_netfilter 24576 0
bridge 155648 2 br_netfilter,ebtable_broute
Kernel modules are loaded as they're needed, and it's unlikely that you need to load this module manually. You can load the module manually and add it as a permanent module by running:
sudo modprobe br_netfilter
sudo sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
Bridge Tunable Parameters
Kubernetes requires that packets traversing a network bridge are processed for filtering and for port forwarding. To achieve this, tunable parameters in the kernel bridge module are automatically set when the kubeadm package is installed, and a sysctl file is created at /etc/sysctl.d/k8s.conf that contains the following lines:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
If you change this file, or create anything similar yourself, run the following command to load the bridge tunable parameters:
sudo /sbin/sysctl -p /etc/sysctl.d/k8s.conf
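To verify the current values of these parameters on a node (each should report 1 after the settings are loaded), you can query them directly:
sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward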
Network Address Translation
Network Address Translation (NAT) is sometimes required when one or more Kubernetes worker nodes in a cluster are behind a NAT gateway. For example, you might want to have a control plane node in a secure company network while having other worker nodes in a publicly accessible demilitarized zone which is less secure. The control plane node would access the worker nodes through the worker node's NAT gateway. Or you might have a worker node in a legacy network that you want to use in the cluster that's primarily on a newer network. The NAT gateway, in these cases, translates requests for an IP address accessible to the Kubernetes cluster into the IP address on the subnet behind the NAT gateway.
Note:
Only worker nodes can be behind a NAT. Control plane nodes can't be behind a NAT.
Regardless of what switches or network equipment you use to set up the NAT gateway, you must configure the following for a node behind a NAT gateway:
- The node's interface behind the NAT gateway must have a public IP address using the /32 subnet mask that's reachable by the Kubernetes cluster. The /32 subnet restricts the subnet to one IP address, so that all traffic from the Kubernetes cluster flows through this public IP address.
- The node's interface must also include a private IP address behind the NAT gateway, which the switch maps to the public IP address using its NAT tables.
For example, you can use the following command to add the reachable IP address on the ens5 interface:
sudo ip addr add 192.168.64.6/32 dev ens5
You can then use the following command to add the private IP address on the same interface:
sudo ip addr add 192.168.192.2/18 dev ens5
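You can then confirm that both addresses are present on the interface; this uses the same example interface as above:
ip addr show dev ens5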
Setting FIPS Mode
You can optionally configure Oracle Cloud Native Environment operator, control plane, and worker hosts to run in Federal Information Processing Standards (FIPS) mode as described in Oracle Linux 9: Installing and Configuring FIPS Mode or Oracle Linux 8: Enhancing System Security.
Oracle Cloud Native Environment uses the cryptographic binaries of OpenSSL from Oracle Linux when the host runs in FIPS mode.
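If you enable FIPS mode, you can optionally confirm it's active on a host before installing. The fips-mode-setup utility used here is the standard Oracle Linux 8 and 9 tool for managing FIPS mode and is shown only as a check:
sudo fips-mode-setup --check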
Setting Up SSH Key-based Authentication
Set up SSH key-based authentication for the user that's to be used to run the Platform CLI (olcnectl) installation commands, to enable login from the operator node to each Kubernetes node and to the Platform API Server node.
The following steps show one method of setting up SSH key-based authentication.
- Generate the private and public key pair. On the operator node, run ssh-keygen as the user that you use to run olcnectl commands. Don't create a passphrase for the key (press <Enter> when prompted for a passphrase). For example:
ssh-keygen
Output similar to the following is displayed:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa): <Enter>
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
...
- Verify the location of the private and public key pair. Verify the private and public key pair have been created at the location reported in the ssh-keygen command output:
ls -l /home/user/.ssh/
Output similar to the following is displayed:
...
-rw-------. 1 user user 2643 Jan 10 14:55 id_rsa
-rw-r--r--. 1 user user  600 Jan 10 14:55 id_rsa.pub
...
The public key is indicated by the file with the “.pub” extension.
Set up the public key on the target nodes. Add the contents of the public key to the
$HOME/.ssh/authorized_keys
file on each target node for the user for which the key-based SSH is being set up.On the operator node, run the
ssh-copy-id
command. The syntax is:ssh-copy-id user@host
When prompted you enter the user’s password for the host. After the command successfully completes, the public key’s contents have been added to the copy of the user’s
$HOME/.ssh/authorized_keys
file on the remote host.The following example shows how command
ssh-copy-id
can be used to add the public key to theauthorized_keys
file for user on host192.0.2.255
:ssh-copy-id user@192.0.2.255
- Verify the user has SSH key-based access from the operator node. On the operator node, use ssh to connect to each of the other nodes and confirm login succeeds without being prompted for a password.
For example, confirm key-based SSH access by running the ssh command on the operator node as follows:
ssh user@192.0.2.255
For more information on setting up SSH key-based authentication, see Oracle Linux: Connecting to Remote Systems With OpenSSH.