4 Platform CLI Commands
This chapter contains the syntax for each olcnectl command option, including usage and examples.
Certificates Copy
Copies the generated CA Certificates and private keys to the Kubernetes nodes.
Copies and installs the CA Certificates and private keys for a set of nodes from an existing set. The files must be saved within a specific directory structure:
The certificate authority bundle must be saved to <cert-dir>/ca.cert.
The certificate for each node must be saved to <cert-dir>/<node-address>/node.cert.
The node key must be saved to <cert-dir>/<node-address>/node.key.
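As an illustrative sketch of the required layout, the directory structure can be created as follows (the node address control1.example.com is a placeholder):

```shell
# Sketch: build the directory layout that "olcnectl certificates copy" expects.
# The node address below is a placeholder.
cert_dir=$(mktemp -d)
mkdir -p "$cert_dir/control1.example.com"
touch "$cert_dir/ca.cert"                          # certificate authority bundle
touch "$cert_dir/control1.example.com/node.cert"   # certificate for the node
touch "$cert_dir/control1.example.com/node.key"    # private key for the node
find "$cert_dir" -type f
```

The resulting directory can then be passed to the command with the --cert-dir option.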
Syntax
olcnectl certificates copy
[--cert-dir certificate-directory]
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
Where:

--cert-dir certificate-directory
    The directory to read or write key material generated by this utility. The default is <CURRENT_DIR>/certificates.

{-h|--help}
    Lists information about the command and the available options.

{-n|--nodes} nodes
    A comma separated list of the hostnames or IP addresses of nodes. Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are run locally without using the remote command.

{-R|--remote-command} remote-command
    Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
    ssh -i ~/.ssh/myidfile -l myuser
    The default remote command is ssh.

{-i|--ssh-identity-file} file_location
    The location of the SSH identity file. If no value is specified, the OS defaults are used.

{-l|--ssh-login-name} username
    The username to log in using SSH. The default is opc.

--timeout minutes
    The number of minutes to set for command timeouts. The default is 40 minutes.
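A hypothetical invocation combining the options above (node names and the certificate directory are placeholders); the command is built as a string and echoed so it can be reviewed on a system without olcnectl installed:

```shell
# Placeholder node names; adjust --cert-dir to where the key material was saved.
cmd="olcnectl certificates copy \
--nodes control1.example.com,worker1.example.com \
--cert-dir $HOME/certificates \
--ssh-login-name opc"
echo "$cmd"
```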
Certificates Distribute
Generates and distributes CA Certificates and keys to the nodes.
Syntax
olcnectl certificates distribute
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[{-h|--help}]
[{-n|--nodes} nodes]
[--one-cert]
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
Where:

--byo-ca-cert certificate-path
    The path to an existing public CA Certificate.

--byo-ca-key key-path
    The path to an existing private key.

--cert-dir certificate-directory
    The directory to read or write key material generated by this utility. The default is <CURRENT_DIR>/certificates.

--cert-request-common-name common_name
    The Certificate Common Name suffix. The default is example.com.

--cert-request-country country
    The two letter country code of the company, for example, US for the United States, GB for the United Kingdom, and CN for China. The default is US.

--cert-request-locality locality
    The name of the city where the company is located. The default is Redwood City.

--cert-request-organization organization
    The name of the company. The default is OLCNE.

--cert-request-organization-unit organization-unit
    The name of the department within the company. The default is OLCNE.

--cert-request-state state
    The name of the state or province where the company is located. The default is California.

{-h|--help}
    Lists information about the command and the available options.

{-n|--nodes} nodes
    A comma separated list of the hostnames or IP addresses of nodes. Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are run locally without using the remote command.

--one-cert
    Sets whether to generate a single certificate that can be used to authenticate all the hosts. By default this option isn't set.

{-R|--remote-command} remote-command
    Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
    ssh -i ~/.ssh/myidfile -l myuser
    The default remote command is ssh.

{-i|--ssh-identity-file} file_location
    The location of the SSH identity file. If no value is specified, the OS defaults are used.

{-l|--ssh-login-name} username
    The username to log in using SSH. The default is opc.

--timeout minutes
    The number of minutes to set for command timeouts. The default is 40 minutes.
Examples
Example 4-2 Distribute certificates to nodes
This example distributes the certificates to the nodes listed.
olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com
Example 4-3 Distribute certificates for nodes with Certificate and SSH login information
This example distributes the certificates for the nodes listed, and sets the certificate and SSH login information.
olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
--cert-request-common-name cloud.example.com \
--cert-request-country US \
--cert-request-locality "My Town" \
--cert-request-organization "My Company" \
--cert-request-organization-unit "My Company Unit" \
--cert-request-state "My State" \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle
Example 4-4 Distribute certificates for nodes using an existing CA Certificate and private key
This example distributes the certificate for the nodes listed using an existing CA Certificate and private key. This is useful for generating and copying the certificate information to nodes you want to add to an existing Kubernetes cluster.
olcnectl certificates distribute \
--nodes worker3.example.com,worker4.example.com \
--byo-ca-cert /etc/olcne/configs/certificates/production/ca.cert \
--byo-ca-key /etc/olcne/configs/certificates/production/ca.key
Example 4-5 Distribute certificates for nodes using an existing CA Certificate and private key
This example distributes the certificate for the nodes listed using an existing CA Certificate and private key. This is useful for generating and copying the certificate information to nodes you want to add to an existing Kubernetes cluster.
olcnectl certificates distribute \
--nodes worker3.example.com,worker4.example.com \
--byo-ca-cert $HOME/certificates/ca/ca.cert \
--byo-ca-key $HOME/certificates/ca/ca.key
Example 4-6 Distribute certificates for nodes using an existing CA Certificate and private key
This example distributes the certificate for the nodes listed using an existing CA Certificate and private key.
olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
--byo-ca-cert /etc/olcne/certificates/ca.cert \
--byo-ca-key /etc/olcne/certificates/ca.key
Example 4-7 Distribute certificates for nodes using an existing CA Certificate and private key
This example distributes the certificate for the nodes listed using an existing CA Certificate and private key.
olcnectl certificates distribute \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
--byo-ca-cert $HOME/certificates/ca/ca.cert \
--byo-ca-key $HOME/certificates/ca/ca.key
Certificates Generate
Generates CA Certificates for the nodes.
Creates the CA Certificates and keys required to authenticate the Platform CLI, Platform API Server, and Platform Agent for a set of hosts.
If a Certificate Authority is created, its key material is placed in the directory specified with the --cert-dir option. The CA Certificate is written to the file ca.cert in that directory, and the private key is written to ca.key.
Syntax
olcnectl certificates generate
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[{-h|--help}]
{-n|--nodes} nodes
[--one-cert]
Where:

--byo-ca-cert certificate-path
    The path to an existing public CA Certificate.

--byo-ca-key key-path
    The path to an existing private key.

--cert-dir certificate-directory
    The directory to read or write key material generated by this utility. The default is <CURRENT_DIR>/certificates.

--cert-request-common-name common_name
    The Certificate Common Name suffix. The default is example.com.

--cert-request-country country
    The two letter country code of the company, for example, US for the United States, GB for the United Kingdom, and CN for China. The default is US.

--cert-request-locality locality
    The name of the city where the company is located. The default is Redwood City.

--cert-request-organization organization
    The name of the company. The default is OLCNE.

--cert-request-organization-unit organization-unit
    The name of the department within the company. The default is OLCNE.

--cert-request-state state
    The name of the state or province where the company is located. The default is California.

{-h|--help}
    Lists information about the command and the available options.

{-n|--nodes} nodes
    A comma separated list of the hostnames or IP addresses of nodes. Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by --remote-command to connect to the host (by default, ssh). If a node address resolves to the local system, all commands are run locally without using the remote command.

--one-cert
    Sets whether to generate a single certificate that can be used to authenticate all the hosts. By default this option isn't set.
Examples
Example 4-8 Generate certificates for nodes
This example generates the certificates for the nodes listed.
olcnectl certificates generate \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com
Example 4-9 Generate certificates for nodes with certificate information
This example generates the certificates for the nodes listed, and sets the certificate information.
olcnectl certificates generate \
--nodes operator.example.com,control1.example.com,worker1.example.com,worker2.example.com \
--cert-request-common-name cloud.example.com \
--cert-request-country US \
--cert-request-locality "My Town" \
--cert-request-organization "My Company" \
--cert-request-organization-unit "My Company Unit" \
--cert-request-state "My State"
Example 4-10 Generate certificates for the Kubernetes ExternalIPs service with existing key information
This example generates the certificates for the Kubernetes ExternalIPs service using an existing CA certificate and private key.
olcnectl certificates generate \
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
--cert-dir /etc/olcne/certificates/restrict_external_ip/ \
--byo-ca-cert /etc/olcne/configs/certificates/production/ca.cert \
--byo-ca-key /etc/olcne/configs/certificates/production/ca.key \
--one-cert
Example 4-11 Generate certificates for the Kubernetes ExternalIPs service with existing key information
This example generates the certificates for the Kubernetes ExternalIPs service using an existing CA certificate and private key.
olcnectl certificates generate \
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
--cert-dir $HOME/certificates/restrict_external_ip/ \
--byo-ca-cert $HOME/certificates/ca/ca.cert \
--byo-ca-key $HOME/certificates/ca/ca.key \
--one-cert
Example 4-12 Generate certificates for the Kubernetes ExternalIPs service
This example generates the certificates for the Kubernetes ExternalIPs service using a private CA certificate and private key that are generated at the time.
olcnectl certificates generate \
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
--cert-dir /etc/olcne/certificates/restrict_external_ip/ \
--cert-request-organization-unit "My Company Unit" \
--cert-request-organization "My Company" \
--cert-request-locality "My Town" \
--cert-request-state "My State" \
--cert-request-country US \
--cert-request-common-name cloud.example.com \
--one-cert
Completion
Generates a command line completion (also known as tab completion) script.
Syntax
olcnectl completion
shell
[--no-descriptions]
[{-h|--help}]
Where:

shell
    The shell type. The options are:
    - bash: Generates the command line completion script for the Bash shell.
    - fish: Generates the command line completion script for the fish shell.
    - powershell: Generates the command line completion script for PowerShell.
    - zsh: Generates the command line completion script for the Zsh shell.

--no-descriptions
    Disables completion descriptions in the generated script.

{-h|--help}
    Lists information about the command and the available options.
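For example, generating and loading the Bash completion script might look like the following sketch (the output path is arbitrary, and olcnectl must be on the PATH, so the steps are echoed here rather than executed):

```shell
# Hypothetical steps: write the completion script, then source it in the shell.
gen_cmd='olcnectl completion bash > ~/.olcnectl_completion.bash'
load_cmd='. ~/.olcnectl_completion.bash'
echo "$gen_cmd"
echo "$load_cmd"
```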
Environment Create
Creates an empty environment.
The first step to deploying Oracle Cloud Native Environment is to create an empty environment. You can create an environment using certificates provided by Vault, or using existing certificates on the nodes.
Syntax
olcnectl environment create
{-E|--environment-name} environment_name
[{-h|--help}]
[globals]
Where:

{-E|--environment-name} environment_name
    The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}
    Lists information about the command and the available options.
Where globals is:

{-a|--api-server} api_server_address:8091
    The Platform API Server for the environment. This is the host running the olcne-api-server service in an environment. The value of api_server_address is the IP address or hostname of the Platform API Server. The port number is the port on which the olcne-api-server service is available. The default port is 8091.
    If a Platform API Server isn't specified, a local instance is used. If no local instance is set up, it's configured in the $HOME/.olcne/olcne.conf file.
    For more information on setting the Platform API Server, see Setting the Platform API Server.
    This option maps to the $OLCNE_API_SERVER_BIN environment variable. If this environment variable is set, it takes precedence over and overrides the Platform CLI setting.

--config-file path
    The location of a YAML file that contains the configuration information for the environments and modules. The filename extension must be either yaml or yml. When you use this option, any other command line options are ignored, except the --force option. Only the information contained in the configuration file is used.

--secret-manager-type {file|vault}
    The secrets manager type. The options are file or vault. Use file for certificates saved on the nodes and use vault for certificates managed by Vault.

--update-config
    Writes the global arguments for an environment to a local configuration file, which is used for future calls to the Platform API Server. If this option hasn't been used before, global arguments must be specified for every Platform API Server call.
    The global arguments configuration information is saved to $HOME/.olcne/olcne.conf on the local host.
    If you use Vault to generate certificates for nodes, the certificate is saved to $HOME/.olcne/certificates/environment_name/ on the local host.

--olcne-ca-path ca_path
    The path to a predefined Certificate Authority certificate, or the destination of the certificate if using a secrets manager to download the certificate. The default is /etc/olcne/certificates/ca.cert, or gathered from the local configuration if the --update-config option is used.
    This option maps to the $OLCNE_SM_CA_PATH environment variable. If this environment variable is set, it takes precedence over and overrides the Platform CLI setting.

--olcne-node-cert-path node_cert_path
    The path to a predefined certificate, or the destination of the certificate if using a secrets manager to download the certificate. The default is /etc/olcne/certificates/node.cert, or gathered from the local configuration if the --update-config option is used.
    This option maps to the $OLCNE_SM_CERT_PATH environment variable. If this environment variable is set, it takes precedence over and overrides the Platform CLI setting.

--olcne-node-key-path node_key_path
    The path to a predefined key, or the destination of the key if using a secrets manager to download the key. The default is /etc/olcne/certificates/node.key, or gathered from the local configuration if the --update-config option is used.
    This option maps to the $OLCNE_SM_KEY_PATH environment variable. If this environment variable is set, it takes precedence over and overrides the Platform CLI setting.

--olcne-tls-cipher-suites ciphers
    The TLS cipher suites to use for Oracle Cloud Native Environment services (the Platform Agent and Platform API Server). Enter one or more in a comma separated list. The options are:
    - TLS_AES_128_GCM_SHA256
    - TLS_AES_256_GCM_SHA384
    - TLS_CHACHA20_POLY1305_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
    - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
    - TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
    - TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
    - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
    - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
    - TLS_ECDHE_RSA_WITH_RC4_128_SHA
    - TLS_RSA_WITH_3DES_EDE_CBC_SHA
    - TLS_RSA_WITH_AES_128_CBC_SHA
    - TLS_RSA_WITH_AES_128_CBC_SHA256
    - TLS_RSA_WITH_AES_128_GCM_SHA256
    - TLS_RSA_WITH_AES_256_CBC_SHA
    - TLS_RSA_WITH_AES_256_GCM_SHA384
    - TLS_RSA_WITH_RC4_128_SHA
    For example:
    --olcne-tls-cipher-suites TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    This option maps to the $OLCNE_TLS_CIPHER_SUITES environment variable. If this environment variable is set, it takes precedence over and overrides the Platform CLI setting.

--olcne-tls-max-version version
    The TLS maximum version for Oracle Cloud Native Environment components. The default is VersionTLS12. The options are:
    - VersionTLS10
    - VersionTLS11
    - VersionTLS12
    - VersionTLS13
    This option maps to the $OLCNE_TLS_MAX_VERSION environment variable. If this environment variable is set, it takes precedence over and overrides the Platform CLI setting.

--olcne-tls-min-version version
    The TLS minimum version for Oracle Cloud Native Environment components. The default is VersionTLS12. The options are:
    - VersionTLS10
    - VersionTLS11
    - VersionTLS12
    - VersionTLS13
    This option maps to the $OLCNE_TLS_MIN_VERSION environment variable. If this environment variable is set, it takes precedence over and overrides the Platform CLI setting.

--vault-address vault_address
    The IP address of the Vault instance. The default is https://127.0.0.1:8200, or gathered from the local configuration if the --update-config option is used.

--vault-cert-sans vault_cert_sans
    Subject Alternative Names (SANs) to pass to Vault to generate the Oracle Cloud Native Environment certificate. The default is 127.0.0.1, or gathered from the local configuration if the --update-config option is used.

--vault-token vault_token
    The Vault authentication token.
Examples
Example 4-14 Creating an environment using Vault
To create an environment named myenvironment using certificates generated from a Vault instance, use the --secret-manager-type vault option:
olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment \
--secret-manager-type vault \
--vault-token s.3QKNuRoTqLbjXaGBOmO6Psjh \
--vault-address https://192.0.2.20:8200 \
--update-config
Example 4-15 Creating an environment using certificates
To create an environment named myenvironment using certificates on the node's file system, use the --secret-manager-type file option:
olcnectl environment create \
--api-server 127.0.0.1:8091 \
--environment-name myenvironment \
--secret-manager-type file \
--olcne-node-cert-path /etc/olcne/certificates/node.cert \
--olcne-ca-path /etc/olcne/certificates/ca.cert \
--olcne-node-key-path /etc/olcne/certificates/node.key \
--update-config
Environment Delete
Deletes an existing environment.
You must uninstall any modules from an environment before you can delete it.
Syntax
olcnectl environment delete
{-E|--environment-name} environment_name
[{-h|--help}]
[globals]
Where:

{-E|--environment-name} environment_name
    The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-h|--help}
    Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
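A minimal hypothetical invocation (the environment name is a placeholder); the command is echoed so it can be reviewed on a system without olcnectl installed:

```shell
# Placeholder environment name; all modules must be uninstalled first.
cmd="olcnectl environment delete \
--environment-name myenvironment"
echo "$cmd"
```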
Environment Update
Updates or upgrades the Platform Agent on nodes in an existing environment.
Syntax
olcnectl environment update
olcne
{-E|--environment-name} environment_name
[{-N|--name} name]
[{-h|--help}]
[globals]
Where:

olcne
    Specifies to update the Platform Agent on each node in an environment.

{-E|--environment-name} environment_name
    The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name
    The module name. The value of name is the name to use to identify a module in an environment. The Platform Agent is updated on only the nodes used in this module.

{-h|--help}
    Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
Examples
Example 4-17 Updating Platform Agents
To update the Platform Agent on each node in an environment named myenvironment:
olcnectl environment update olcne \
--environment-name myenvironment
Example 4-18 Updating Platform Agents in a module
To update the Platform Agent on each node in a Kubernetes module named mycluster in an environment named myenvironment:
olcnectl environment update olcne \
--environment-name myenvironment \
--name mycluster
Environment Report
Reports summary and detailed information about environments.
Syntax
olcnectl environment report
[{-E|--environment-name} environment_name]
[--children]
[--exclude pattern]
[--include pattern]
[--format {yaml|table}]
[{-h|--help}]
[globals]
Where:

{-E|--environment-name} environment_name
    The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

--children
    When added to the command, this option recursively displays the properties for all children of a module instance. The default value is false.

--exclude pattern
    An RE2 regular expression selecting the properties to exclude from the report. This option can specify more than one property as a comma separated list.

--include pattern
    An RE2 regular expression selecting the properties to include in the report. This option can specify more than one property as a comma separated list. By default, all properties are displayed. Using this option one or more times overrides this behavior.

--format {yaml|table}
    Sets the report format, either YAML or table. The default format is table.

{-h|--help}
    Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
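The --include and --exclude patterns are RE2 regular expressions. As a rough illustration using grep -E (whose syntax overlaps with RE2 for simple patterns; the property names here are made up):

```shell
# Hypothetical report properties, one per line.
props='kubernetes:version
kubernetes:status
helm:version'
# --include-style selection: keep only properties matching the pattern.
included=$(echo "$props" | grep -E '^kubernetes:')
# --exclude-style selection: drop properties matching the pattern.
excluded=$(echo "$props" | grep -Ev 'status')
echo "$included"
echo "$excluded"
```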
Examples
Example 4-19 Reporting summary details about an environment
To report a summary about the environment named myenvironment
:
olcnectl environment report \ --environment-name myenvironment
Example 4-20 Reporting details about an environment
To report details about the environment named myenvironment
:
olcnectl environment report \ --environment-name myenvironment \ --children
Module Backup
Backs up a module in an environment.
Syntax
olcnectl module backup
{-E|--environment-name} environment_name
{-N|--name} name
[{-L|--log-level} type]
[{-h|--help}]
[globals]
Where:

{-E|--environment-name} environment_name
    The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-N|--name} name
    The module name. The value of name is the name to use to identify a module in an environment.

{-L|--log-level} type
    Sets the type of messages displayed by the Platform API Server. If you don't set this option, error messages are displayed by default. The options for type are:
    - error: Displays error messages. This is the default type.
    - warn: Displays warning messages.
    - info: Displays information messages.
    - debug: Displays all messages. This is the most detailed message level.

{-h|--help}
    Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
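A hypothetical invocation (the environment and module names are placeholders); the command is echoed so it can be reviewed on a system without olcnectl installed:

```shell
# Placeholder names; --log-level debug requests the most detailed messages.
cmd="olcnectl module backup \
--environment-name myenvironment \
--name mycluster \
--log-level debug"
echo "$cmd"
```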
Module Create
Adds and configures a module in an environment.
Syntax
olcnectl module create
{-E|--environment-name} environment_name
{-M|--module} module
{-N|--name} name
[{-h|--help}]
[module_args ...]
[globals]
Where:

{-E|--environment-name} environment_name
    The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.

{-M|--module} module
    The module type to create in an environment. The value of module is the name of a module type.

{-N|--name} name
    The module name. The value of name is the name to use to identify a module in an environment.

{-h|--help}
    Lists information about the command and the available options.
Where module_args is:
The value of module_args is one or more arguments to configure a module in an environment.
module_args for the kubernetes module:
{-o|--apiserver-advertise-address} IP_address
    The IP address on which to advertise the Kubernetes API server to members of the Kubernetes cluster. This address must be reachable by the cluster nodes. If no value is provided, the interface on the control plane node specified with the --control-plane-nodes argument is used.
    This option isn't used in a highly available (HA) cluster with many control plane nodes.
    Important: This argument has been deprecated. Use the --control-plane-nodes argument instead.

{-b|--apiserver-bind-port} port
    The Kubernetes API server bind port. The default is 6443.

{-B|--apiserver-bind-port-alt} port
    The port on which the Kubernetes API server listens when you use a virtual IP address for the load balancer. The default is 6444. This is optional.
    When you use a virtual IP address, the Kubernetes API server port is changed from the default of 6443 to 6444. The load balancer listens on port 6443, receives the requests, and passes them to the Kubernetes API server. To change the Kubernetes API server port in this situation from 6444, use this option.

{-e|--apiserver-cert-extra-sans} api_server_sans
    The Subject Alternative Names (SANs) to use for the Kubernetes API server serving certificate. This value can contain both IP addresses and DNS names.

--compact {true|false}
    Sets whether to allow non-system Kubernetes workloads to run on control plane nodes. The default is false.
    If you set this to true, the Platform API Server doesn't taint the control plane nodes. This allows non-system Kubernetes workloads to be scheduled and run on control plane nodes.
    Important: For production environments, this option must be set to false (the default).

{-r|--container-registry} container_registry
    The container registry that contains the Kubernetes images. Use container-registry.oracle.com/olcne to pull the Kubernetes images from the Oracle Container Registry.
    If you don't provide this value, you're prompted for it by the Platform CLI.

{-c|--control-plane-nodes} nodes ...
    A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes, including the port number for the Platform Agent. For example, control1.example.com:8090,control2.example.com:8090.
    If you don't provide this value, you're prompted for it by the Platform CLI.

{-x|--kube-proxy-mode} {userspace|iptables|ipvs}
    The routing mode for the Kubernetes proxy. The default is iptables. The available proxy modes are:
    - userspace: This is an older proxy mode.
    - iptables: This is the fastest proxy mode. This is the default mode.
    - ipvs: This is an experimental mode.
    If no value is provided, the default of iptables is used. If the system's kernel or iptables version is insufficient, the userspace proxy is used.

{-v|--kube-version} version
    The version of Kubernetes to install. The default is the latest version. For information on the latest version number, see Release Notes.

{-t|--kubeadm-token} token
    The token to use for establishing bidirectional trust between Kubernetes nodes and control plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16}, for example, abcdef.0123456789abcdef.
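The token format can be checked with a simple pattern test; a sketch using the documentation's own example token:

```shell
# Validate a kubeadm-style token against the documented format:
# six lowercase alphanumerics, a dot, then sixteen lowercase alphanumerics.
token="abcdef.0123456789abcdef"
if printf '%s' "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  result=valid
else
  result=invalid
fi
echo "$result"
```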
--kube-tls-cipher-suites ciphers
    The TLS cipher suites to use for Kubernetes components. Enter one or more in a comma separated list. The options are:
    - TLS_AES_128_GCM_SHA256
    - TLS_AES_256_GCM_SHA384
    - TLS_CHACHA20_POLY1305_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
    - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
    - TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
    - TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
    - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
    - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
    - TLS_ECDHE_RSA_WITH_RC4_128_SHA
    - TLS_RSA_WITH_3DES_EDE_CBC_SHA
    - TLS_RSA_WITH_AES_128_CBC_SHA
    - TLS_RSA_WITH_AES_128_CBC_SHA256
    - TLS_RSA_WITH_AES_128_GCM_SHA256
    - TLS_RSA_WITH_AES_256_CBC_SHA
    - TLS_RSA_WITH_AES_256_GCM_SHA384
    - TLS_RSA_WITH_RC4_128_SHA
    For example:
    --kube-tls-cipher-suites TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

--kube-tls-min-version version
    The TLS minimum version for Kubernetes components. The default is VersionTLS12. The options are:
    - VersionTLS10
    - VersionTLS11
    - VersionTLS12
    - VersionTLS13
-
{-l|--load-balancer} load_balancer
-
The Kubernetes API server load balancer hostname or IP address, and port. The default port is
6443
. For example,192.0.2.100:6443
. -
{-m|--master-nodes} nodes ...
-
A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes, including the port number for the Platform Agent. For example,
control1.example.com:8090,control2.example.com:8090
.If you don't provide this value, you're prompted for it by the Platform CLI.
Important:
This option is deprecated. Use the--control-plane-nodes
option instead. -
{-g|--nginx-image} container_location
-
The location for an NGINX container image to use in a highly available (HA) cluster with many control plane nodes. This is optional.
You can use this option if you don't provide a load balancer using the
--load-balancer
option. This option might be useful if you're using a mirrored container registry. For example:--nginx-image mirror.example.com:5000/olcne/nginx:1.17.7
By default,
podman
is used to pull the NGINX image that's configured in/usr/libexec/pull_olcne_nginx
. If you set the--nginx-image
option to use another NGINX container image, the location of the image is written to/etc/olcne-nginx/image
, and overrides the default image. -
--node-labels label
-
Important:
This option, and the Oracle Cloud Infrastructure Container Storage Interface module (oci-csi
) that required this option, is deprecated in Release 1.5. The label to add to Kubernetes nodes on Oracle Cloud Infrastructure instances to set the Availability Domain for pods. This option is used with the Oracle Cloud Infrastructure Container Storage Interface module (
oci-csi
). The label format is: failure-domain.beta.kubernetes.io/zone=region-identifier-AD-availability-domain-number
For example:
--node-labels failure-domain.beta.kubernetes.io/zone=US-ASHBURN-AD-1
For a list of the Availability Domains, see the Oracle Cloud Infrastructure documentation.
-
--node-ocids OCIDs
-
Important:
This option, and the Oracle Cloud Infrastructure Container Storage Interface module (oci-csi
) that required this option, is deprecated in Release 1.5. A comma separated list of Kubernetes nodes (both control plane and worker nodes) with their Oracle Cloud Identifiers (OCIDs). This option is used with the Oracle Cloud Infrastructure Container Storage Interface module (
oci-csi
). The format for the list is: FQDN=OCID,...
For example:
--node-ocids control1.example.com=ocid1.instance...,worker1.example.com=ocid1.instance...,worker2.example.com=ocid1.instance...
For information about OCIDs, see the Oracle Cloud Infrastructure documentation.
-
{-p|--pod-cidr} pod_CIDR
-
The Kubernetes pod CIDR. The default is
10.244.0.0/16
. This is the range from which each Kubernetes pod network interface is assigned an IP address. -
{-n|--pod-network} {flannel|calico|none}
-
The network fabric for the Kubernetes cluster. The options are:
flannel
: Installs Flannel. This is the default.
calico
: Installs the Calico module. You must disable firewalld
on all Kubernetes nodes to use this option.
none
: Doesn't install any network fabric.
-
{-P|--pod-network-iface} network_interface
-
The name of the network interface on the nodes to use for the Kubernetes data plane network communication. The data plane network is used by the pods running on Kubernetes. If you use regex to set the interface name, the first matching interface returned by the kernel is used. For example:
--pod-network-iface "ens[1-5]|eth5"
-
--selinux {enforcing|permissive}
-
Whether to use SELinux
enforcing
orpermissive
mode.permissive
is the default. Use this option if SELinux is set to
enforcing
on the control plane and worker nodes. SELinux is set toenforcing
mode by default on the OS and is the recommended mode. -
{-s|--service-cidr} service_CIDR
-
The Kubernetes service CIDR. The default is
10.96.0.0/12
. This is the range from which each Kubernetes service is assigned an IP address. -
{-i|--virtual-ip} virtual_ip
-
The virtual IP address for the load balancer. This is optional.
Use this option if you don't specify a load balancer using the
--load-balancer
option. When you specify a virtual IP address, it's used as the primary IP address for control plane nodes. -
{-w|--worker-nodes} nodes ...
-
A comma separated list of the hostnames or IP addresses of the Kubernetes worker nodes, including the port number for the Platform Agent. If a worker node is behind a NAT gateway, use the public IP address for the node. The worker node's interface behind the NAT gateway must have a public IP address using the /32 subnet mask that's reachable by the Kubernetes cluster. The /32 subnet restricts the subnet to one IP address, so that all traffic from the Kubernetes cluster flows through this public IP address (for more information about configuring NAT, see Installation). The default port number is
8090
. For example,worker1.example.com:8090,worker2.example.com:8090
. If you don't provide this value, you're prompted for it by the Platform CLI.
-
--restrict-service-externalip {true|false}
-
Sets whether to restrict access to external IP addresses for Kubernetes services. The default is
true
, which restricts access to external IP addresses. This option deploys a Kubernetes service named
externalip-validation-webhook-service
to validateexternalIPs
set in Kubernetes service configuration files. Access to any external IP addresses is set in a Kubernetes service configuration file using theexternalIPs
option in thespec
section. -
--restrict-service-externalip-ca-cert path
-
The path to a CA certificate file for the
externalip-validation-webhook-service
application that's deployed when the--restrict-service-externalip
option is set totrue
. For example,/etc/olcne/certificates/restrict_external_ip/ca.cert
. -
--restrict-service-externalip-tls-cert path
-
The path to the TLS certificate for the
externalip-validation-webhook-service
application that's deployed when the--restrict-service-externalip
option is set totrue
. For example,/etc/olcne/certificates/restrict_external_ip/node.cert
. -
--restrict-service-externalip-tls-key path
-
The path to the private key for the
externalip-validation-webhook-service
application that's deployed when the--restrict-service-externalip
option is set totrue
. For example,/etc/olcne/certificates/restrict_external_ip/node.key
. -
--restrict-service-externalip-cidrs allowed_cidrs
-
Enter one or more comma separated CIDR blocks to allow only IP addresses from the specified CIDR blocks. For example,
192.0.2.0/24,198.51.100.0/24
.
module_args for the calico
module:
-
--calico-container-registry container_registry
-
The container image registry to use when deploying the Calico container image. The default is
container-registry.oracle.com/olcne
. -
--calico-helm-module helm_module
-
The name of the
helm
module that the Calico module is to be associated with.
Important:
This option is deprecated. Use the --calico-kubernetes-module
option instead. -
--calico-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the Calico module is to be associated with. -
--calico-installation-config config-file-path
-
The path and location of a Calico configuration file.
-
--calico-namespace namespace
-
The Kubernetes namespace in which Calico components are deployed. The default is
tigera-operator
. -
--calico-version version
-
The version of Calico to install. The default is the latest version. For information on the latest version number, see Release Notes.
module_args for the multus
module:
-
--multus-config config-file-path
-
The path and location of a Multus configuration file.
-
--multus-container-registry container_registry
-
The container image registry to use when deploying the Multus container image. The default is
container-registry.oracle.com/olcne
. -
--multus-helm-module helm_module
-
The name of the
helm
module that the Multus module is to be associated with.
Important:
This option is deprecated. Use the --multus-kubernetes-module
option instead. -
--multus-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the Multus module is to be associated with. -
--multus-namespace namespace
-
The Kubernetes namespace in which Multus components are deployed. The default is
kube-system
. -
--multus-version version
-
The version of Multus to install. The default is the latest version. For information on the latest version number, see Release Notes.
module_args for the oci-ccm
module:
-
--oci-ccm-helm-module helm_module
-
The name of the
helm
module that the Oracle Cloud Infrastructure Cloud Controller Manager module is to be associated with.
Important:
This option is deprecated. Use the --oci-ccm-kubernetes-module
option instead. -
--oci-ccm-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the Oracle Cloud Infrastructure Cloud Controller Manager module is to be associated with. -
--oci-tenancy OCID
-
The OCID for the Oracle Cloud Infrastructure tenancy.
-
--oci-region region_identifier
-
The Oracle Cloud Infrastructure region identifier. The default is
us-ashburn-1
. For a list of the region identifiers, see the Oracle Cloud Infrastructure documentation.
-
--oci-compartment OCID
-
The OCID for the Oracle Cloud Infrastructure compartment.
-
--oci-user OCID
-
The OCID for the Oracle Cloud Infrastructure user.
-
--oci-private-key path
-
The location of the private key for the Oracle Cloud Infrastructure API signing key. The private key must be on the primary control plane node. The default is
/root/.ssh/id_rsa
.
Important:
This option is deprecated in Release 1.6.1. From Release 1.6.1 onwards, use the
--oci-private-key-file
option instead. -
--oci-private-key-file path
-
The location of the private key for the Oracle Cloud Infrastructure API signing key.
Important:
The private key must be available on the operator node.
-
--oci-fingerprint fingerprint
-
The fingerprint of the public key for the Oracle Cloud Infrastructure API signing key.
-
--oci-passphrase passphrase
-
The passphrase for the private key for the Oracle Cloud Infrastructure API signing key, if one is set.
-
--oci-vcn OCID
-
The OCID for the Oracle Cloud Infrastructure Virtual Cloud Network on which the Kubernetes cluster is available.
-
--oci-lb-subnet1 OCID
-
The OCID of the regional subnet for the Oracle Cloud Infrastructure load balancer.
Or, the OCID of the first subnet of the two required availability domain specific subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.
-
--oci-lb-subnet2 OCID
-
The OCID of the second subnet of the two subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.
-
--oci-lb-security-mode {All|Frontend|None}
-
Sets whether the Oracle Cloud Infrastructure Cloud Controller Manager module manages security lists for load balancer services, and sets the configuration mode to use for security lists managed by the Kubernetes Cloud Controller Manager. The default is
None
. For information on the security modes, see the Kubernetes Cloud Controller Manager implementation for Oracle Cloud Infrastructure documentation.
-
--oci-use-instance-principals {true|false}
-
Sets whether to enable an instance to make API calls in Oracle Cloud Infrastructure services. The default is
false
. -
--oci-container-registry container_registry
-
The container image registry to use when deploying the Oracle Cloud Infrastructure cloud provisioner image. The default is an empty string. The Platform API Server decides the correct repository for the version of the Oracle Cloud Infrastructure Cloud Controller Manager module that's to be installed. Or, you can use a private registry.
-
--ccm-container-registry container_registry
-
The container image registry to use when deploying the Oracle Cloud Infrastructure Cloud Controller Manager component images. The default is an empty string. The Platform API Server decides the correct repository for the version of the Oracle Cloud Infrastructure Cloud Controller Manager module that's to be installed. Or, you can use a private registry.
-
--oci-ccm-version version
-
The version of Oracle Cloud Infrastructure Cloud Controller Manager to install. The default is the latest version. For information on the latest version number, see Release Notes.
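The `oci-ccm` options above can be combined into one `olcnectl module create` command. The following is a minimal sketch, not an example from this guide: the module name `myoci`, the Kubernetes module name `mycluster`, and every OCID, region, and key file value are placeholders you must replace with values from your own tenancy.

```shell
# Hypothetical sketch: all names, OCIDs, and paths are placeholder values.
olcnectl module create \
--environment-name myenvironment \
--module oci-ccm \
--name myoci \
--oci-ccm-kubernetes-module mycluster \
--oci-tenancy ocid1.tenancy.oc1..unique_ID \
--oci-compartment ocid1.compartment.oc1..unique_ID \
--oci-user ocid1.user.oc1..unique_ID \
--oci-fingerprint fingerprint \
--oci-private-key-file /home/opc/.oci/oci_api_key.pem \
--oci-region us-ashburn-1 \
--oci-vcn ocid1.vcn.oc1..unique_ID \
--oci-lb-subnet1 ocid1.subnet.oc1..unique_ID
```

The private key file must be available on the operator node, as noted for the --oci-private-key-file option above.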
module_args for the metallb
module:
-
--metallb-helm-module helm_module
-
The name of the
helm
module that MetalLB is to be associated with.
Important:
This option is deprecated. Use the --metallb-kubernetes-module
option instead. -
--metallb-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the MetalLB module is to be associated with. -
--metallb-config path
-
The location of the file that contains the configuration information for MetalLB. This file must be on the operator node.
-
--metallb-namespace namespace
-
The Kubernetes namespace in which to install MetalLB. The default namespace is
metallb-system.
-
--metallb-version version
-
The version of MetalLB to install. The default is the latest version. For information on the latest version number, see Release Notes.
-
--metallb-container-registry container_registry
-
The container image registry and optional tag to use when installing MetalLB. The default is
container-registry.oracle.com/olcne
.
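A minimal sketch of creating the MetalLB module from the options above. The module name `mymetallb`, the Kubernetes module name `mycluster`, and the configuration file name `metallb-config.yaml` are assumed placeholder values, not values from this guide.

```shell
# Hypothetical sketch: module names and the config file path are placeholders.
# The config file must be on the operator node.
olcnectl module create \
--environment-name myenvironment \
--module metallb \
--name mymetallb \
--metallb-kubernetes-module mycluster \
--metallb-config metallb-config.yaml
```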
module_args for the rook
module:
-
--rook-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the Rook module is to be associated with. -
--rook-config path
-
The location of the file that contains the configuration information for Rook. This file must be on the operator node.
-
--rook-namespace namespace
-
The Kubernetes namespace in which to install the Rook module. The default is
rook
. -
--rook-version version
-
The version of Rook to install. The default is the latest version. For information on the latest version number, see Release Notes.
-
--rook-container-registry container_registry
-
The container image registry to use when deploying the Rook module. The default is
container-registry.oracle.com/olcne
.
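A minimal sketch of creating the Rook module from the options above. The module name `myrook`, the Kubernetes module name `mycluster`, and the configuration file name `rook-config.yaml` are assumed placeholder values.

```shell
# Hypothetical sketch: module names and the config file path are placeholders.
# The config file must be on the operator node.
olcnectl module create \
--environment-name myenvironment \
--module rook \
--name myrook \
--rook-kubernetes-module mycluster \
--rook-config rook-config.yaml
```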
module_args for the kubevirt
module:
-
--kubevirt-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the KubeVirt module is to be associated with. -
--kubevirt-config path
-
The location of the file that contains the configuration information for KubeVirt, including the
kubevirt.io/v1/KubeVirt
object. This file must be on the operator node.
--kubevirt-namespace namespace
-
The Kubernetes namespace in which to install the KubeVirt module. The default is
kubevirt
. -
--kubevirt-version version
-
The version of KubeVirt to install. The default is the latest version. For information on the latest version number, see Release Notes.
-
--kubevirt-container-registry container_registry
-
The container image registry to use when deploying the KubeVirt module. The default is
container-registry.oracle.com/olcne
.
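A minimal sketch of creating the KubeVirt module from the options above. The module name `mykubevirt`, the Kubernetes module name `mycluster`, and the configuration file name `kubevirt-config.yaml` are assumed placeholder values.

```shell
# Hypothetical sketch: module names and the config file path are placeholders.
# The config file contains the kubevirt.io/v1/KubeVirt object and must be on
# the operator node.
olcnectl module create \
--environment-name myenvironment \
--module kubevirt \
--name mykubevirt \
--kubevirt-kubernetes-module mycluster \
--kubevirt-config kubevirt-config.yaml
```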
module_args for the operator-lifecycle-manager
module:
-
--olm-helm-module helm_module
-
The name of the
helm
module that Operator Lifecycle Manager is to be associated with.
Important:
This option is deprecated. Use the --olm-kubernetes-module
option instead. -
--olm-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the Operator Lifecycle Manager module is to be associated with. -
--olm-version version
-
The version of Operator Lifecycle Manager to install. The default is the latest version. For information on the latest version number, see Release Notes.
-
--olm-container-registry container_registry
-
The container image registry to use when deploying the Operator Lifecycle Manager. The default is
container-registry.oracle.com/olcne
. -
--olm-enable-operatorhub {true|false}
-
Sets whether to enable the Operator Lifecycle Manager to use the OperatorHub registry as a catalog source.
The default is
true
.
module_args for the ingress-nginx
module:
-
--ingress-nginx-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the NGINX Ingress Controller module is to be associated with. -
--ingress-nginx-version version
-
The version of the NGINX Ingress Controller to install. The default is the latest version. For information on the latest version number, see Release Notes.
-
--ingress-nginx-container-registry container_registry
-
The container image registry to use when deploying the NGINX Ingress Controller. The default is
container-registry.oracle.com/olcne
. -
--ingress-nginx-namespace namespace
-
The Kubernetes namespace in which to install the NGINX Ingress Controller module. The default is
ingress-nginx
. -
--ingress-controller-service-annotations {annotation,...}
-
A comma separated list of annotations to be added to the external controller service to provision an Oracle Cloud Infrastructure load balancer.
For the full list of annotations you can include, see the upstream documentation at:
https://github.com/oracle/oci-cloud-controller-manager/blob/master/docs/load-balancer-annotations.md
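A minimal sketch of creating the NGINX Ingress Controller module from the options above. The module name `myingress` and the Kubernetes module name `mycluster` are assumed placeholder values.

```shell
# Hypothetical sketch: module names are placeholders. The namespace flag
# shows the documented default value explicitly.
olcnectl module create \
--environment-name myenvironment \
--module ingress-nginx \
--name myingress \
--ingress-nginx-kubernetes-module mycluster \
--ingress-nginx-namespace ingress-nginx
```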
module_args for the istio
module:
-
--istio-helm-module helm_module
-
The name of the
helm
module that Istio is to be associated with.
Important:
This option is deprecated. Use the --istio-kubernetes-module
option instead. -
--istio-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the Istio module is to be associated with.
--istio-version version
-
The version of Istio to install. The default is the latest version. For information on the latest version number, see Release Notes.
-
--istio-container-registry container_registry
-
The container image registry to use when deploying Istio. The default is
container-registry.oracle.com/olcne
. -
--istio-enable-grafana {true|false}
-
Sets whether to deploy the Grafana module to visualize the metrics stored in Prometheus for Istio. The default is
true
. -
--istio-enable-prometheus {true|false}
-
Sets whether to deploy the Prometheus module to store the metrics for Istio. The default is
true
. -
--istio-mutual-tls {true|false}
-
Sets whether to enable Mutual Transport Layer Security (mTLS) for communication between the control plane pods for Istio, and for any pods deployed into the Istio service mesh.
The default is
true
.Important:
We recommend that this value isn't set to
false
, especially in production environments. -
--istio-parent name
-
The name of the
istio
module to use with a custom profile. When used with the --istio-profile
option, lets many instances of the istio
module attach Istio platform components to a single Istio control plane. When this option is set, the default Istio profile is replaced with a mostly empty profile. The only contents of the profile are the container image hub location, and tags that correspond to the installed version of the istio
module. -
--istio-profile path
-
The path to the file that contains the
spec
section of anIstioOperator
resource from theinstall.istio.io/v1alpha1
Kubernetes API. The values in this resource are laid on top of, and override, the default profile for Istio. For information on the
IstioOperator
resource file, see the upstream Istio documentation.
module_args for the prometheus
module:
-
--prometheus-helm-module helm_module
-
The name of the
helm
module that Prometheus is to be associated with.
Important:
This option is deprecated. Use the --prometheus-kubernetes-module
option instead. -
--prometheus-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the Prometheus module is to be associated with. -
--prometheus-version version
-
The version of Prometheus to install. The default is the latest version. For information on the latest version number, see Release Notes.
-
--prometheus-image container_registry
-
The container image registry and tag to use when installing Prometheus. The default is
container-registry.oracle.com/olcne/prometheus
.
-
--prometheus-namespace namespace
-
The Kubernetes namespace in which to install Prometheus. The default namespace is
default
. -
--prometheus-persistent-storage {true|false}
-
If this value is
false
, Prometheus writes its data into an emptyDir
on the host where the pod is running. If the pod migrates, metric data is lost. If this value is
true
, Prometheus creates a Kubernetes PersistentVolumeClaim
so that its data persists despite destruction or migration of the pod. The default is
false
. -
--prometheus-alerting-rules path
-
The path to a configuration file for Prometheus alerts.
-
--prometheus-recording-rules path
-
The path to a configuration file for Prometheus recording rules.
-
--prometheus-scrape-configuration path
-
The path to a configuration file for Prometheus metrics scraping.
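A minimal sketch of creating the Prometheus module from the options above. The module name `myprometheus` and the Kubernetes module name `mycluster` are assumed placeholder values.

```shell
# Hypothetical sketch: module names are placeholders. Persistent storage is
# enabled so metric data survives pod migration via a PersistentVolumeClaim.
olcnectl module create \
--environment-name myenvironment \
--module prometheus \
--name myprometheus \
--prometheus-kubernetes-module mycluster \
--prometheus-persistent-storage true
```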
module_args for the grafana
module:
-
--grafana-helm-module helm_module
-
The name of the
helm
module that Grafana is to be associated with.
Important:
This option is deprecated. Use the --grafana-kubernetes-module
option instead. -
--grafana-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the Grafana module is to be associated with. -
--grafana-version version
-
The version of Grafana to install. The default is the latest version. For information on the latest version number, see Release Notes.
-
--grafana-container-registry container_registry
-
The container image registry and tag to use when installing Grafana. The default is
container-registry.oracle.com/olcne
. -
--grafana-namespace namespace
-
The Kubernetes namespace in which to install Grafana. The default namespace is
default
. -
--grafana-dashboard-configmaps configmap
-
The name of the ConfigMap reference that contains the Grafana dashboards.
-
--grafana-dashboard-providers path
-
The location of the file that contains the configuration for the Grafana dashboard providers.
-
--grafana-datasources path
-
The location of the file that contains the configuration for the Grafana data sources.
-
--grafana-existing-secret-name secret
-
The name of the existing secret containing the Grafana admin password.
-
--grafana-notifiers path
-
The location of the file that contains the configuration for the Grafana notifiers.
-
--grafana-pod-annotations annotations
-
A comma separated list of annotations to be added to the Grafana pods.
-
--grafana-pod-env env_vars
-
A comma separated list of environment variables to be passed to Grafana deployment pods.
-
--grafana-service-port port
-
The port number for the Grafana service. The default is
3000
. -
--grafana-service-type service
-
The service type to access Grafana. The default is
ClusterIP
.
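A minimal sketch of creating the Grafana module from the options above. The module name `mygrafana` and the Kubernetes module name `mycluster` are assumed placeholder values; the service type and port shown override the documented defaults (ClusterIP, 3000) purely as an illustration.

```shell
# Hypothetical sketch: module names are placeholders. NodePort exposes
# Grafana outside the cluster instead of the default ClusterIP.
olcnectl module create \
--environment-name myenvironment \
--module grafana \
--name mygrafana \
--grafana-kubernetes-module mycluster \
--grafana-service-type NodePort \
--grafana-service-port 3000
```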
module_args for the gluster
module:
Important:
The Gluster Container Storage Interface module, used to install Gluster and set up Glusterfs, is deprecated. The Gluster Container Storage Interface module might be removed in a future release.
-
--gluster-helm-module helm_module
-
The name of the
helm
module that the Gluster Container Storage Interface module is to be associated with.
Important:
This option is deprecated. Use the --gluster-kubernetes-module
option instead. -
--gluster-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that the Gluster module is to be associated with. -
--gluster-server-url URL
-
The URL of the Heketi API server endpoint. The default is
http://127.0.0.1:8080
. -
--gluster-server-user user
-
The username of the Heketi server admin user. The default is
admin
. -
--gluster-existing-secret-name secret
-
The name of the existing secret containing the admin password. The default is
heketi-admin
. -
--gluster-secret-key secret
-
The secret containing the admin password. The default is
secret
. -
--gluster-namespace namespace
-
The Kubernetes namespace in which to install the Gluster Container Storage Interface module. The default is
default
. -
--gluster-sc-name class_name
-
The StorageClass name for the Glusterfs StorageClass. The default is
hyperconverged
. -
--gluster-server-rest-auth {true|false}
-
Whether the Heketi API server accepts REST authorization. The default is
true
.
module_args for the helm
module:
Important:
The helm
module is deprecated in
Release 1.6. Helm is now automatically installed with the Kubernetes module to perform
optional module installations.
-
--helm-kubernetes-module kubernetes_module
-
The name of the
kubernetes
module that Helm is to be associated with. Each instance of Kubernetes can have one instance of Helm associated with it. -
--helm-version version
-
The version of Helm to install. The default is the latest version. For information on the latest version number, see Release Notes.
module_args for the oci-csi
module:
Important:
The oci-csi
module is deprecated
in Release 1.5. Instead use the oci-ccm
module from Release 1.5 onwards. If
you have upgraded from Release 1.4 to 1.5, the oci-csi
module is
automatically renamed to oci-ccm
. You must also perform another step after
the upgrade to ensure the module is correctly configured. For information on upgrading the
oci-csi
module to the oci-ccm
module, see Updates and Upgrades.
-
--oci-csi-helm-module helm_module
-
The name of the
helm
module that the Oracle Cloud Infrastructure Container Storage Interface module is to be associated with.
Important:
This option is deprecated. Use the --oci-ccm-kubernetes-module
option instead. -
--oci-tenancy OCID
-
The OCID for the Oracle Cloud Infrastructure tenancy.
-
--oci-region region_identifier
-
The Oracle Cloud Infrastructure region identifier. The default is
us-ashburn-1
. For a list of the region identifiers, see the Oracle Cloud Infrastructure documentation.
-
--oci-compartment OCID
-
The OCID for the Oracle Cloud Infrastructure compartment.
-
--oci-user OCID
-
The OCID for the Oracle Cloud Infrastructure user.
-
--oci-private-key path
-
The location of the private key for the Oracle Cloud Infrastructure API signing key. This must be on the primary control plane node. The default is
/root/.ssh/id_rsa
.Important:
The private key must be available on the primary control plane node. This is the first control plane node listed in the
--control-plane-nodes
option when you create the Kubernetes module. -
--oci-fingerprint fingerprint
-
The fingerprint of the public key for the Oracle Cloud Infrastructure API signing key.
-
--oci-passphrase passphrase
-
The passphrase for the private key for the Oracle Cloud Infrastructure API signing key, if one is set.
-
--oci-vcn OCID
-
The OCID for the Oracle Cloud Infrastructure Virtual Cloud Network on which the Kubernetes cluster is available.
-
--oci-lb-subnet1 OCID
-
The OCID of the regional subnet for the Oracle Cloud Infrastructure load balancer.
Or, the OCID of the first subnet of the two required availability domain specific subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.
-
--oci-lb-subnet2 OCID
-
The OCID of the second subnet of the two subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.
-
--oci-lb-security-mode {All|Frontend|None}
-
Sets whether the Oracle Cloud Infrastructure CSI plugin manages security lists for load balancer services, and sets the configuration mode to use for security lists managed by the Kubernetes Cloud Controller Manager. The default is
None
. For information on the security modes, see the Kubernetes Cloud Controller Manager implementation for Oracle Cloud Infrastructure documentation.
-
--oci-container-registry container_registry
-
The container image registry to use when deploying the Oracle Cloud Infrastructure cloud provisioner image. The default is
iad.ocir.io/oracle
. -
--csi-container-registry container_registry
-
The container image registry to use when deploying the CSI component images. The default is
quay.io/k8scsi
.
Where globals is one or more of the global options as described in Using Global Flags.
Examples
Example 4-22 Creating a module for an HA cluster with an external load balancer
This example creates an HA cluster with an external load balancer, available on the host
lb.example.com
and running on port 6443
.
You must also include the location of the certificates for the
externalip-validation-webhook-service
Kubernetes service.
olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--load-balancer lb.example.com:6443 \
--control-plane-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key
Example 4-23 Creating a module for an HA cluster with an internal load balancer
This example creates an HA Kubernetes cluster using the load balancer deployed by
the Platform CLI. The --virtual-ip
option sets the virtual IP address to
192.0.2.100
, which is the IP address of the primary control plane node.
The primary control plane node is the first node in the list of control plane nodes. This
cluster contains three control plane nodes and three worker nodes.
You must also include the location of the certificates for the
externalip-validation-webhook-service
Kubernetes service.
olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--control-plane-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key
Example 4-24 Creating a module for a cluster to allow access to service IP address ranges
This example creates a Kubernetes cluster that sets the external IP addresses that
can be accessed by Kubernetes services. The IP ranges that are allowed are within the
192.0.2.0/24
and 198.51.100.0/24
CIDR blocks.
You must also include the location of the certificates for the
externalip-validation-webhook-service
Kubernetes service.
olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--control-plane-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key \
--restrict-service-externalip-cidrs 192.0.2.0/24,198.51.100.0/24
Example 4-25 Creating a module for a cluster to allow access to all service IP addresses
This example creates a Kubernetes cluster that allows access to all external IP addresses
for Kubernetes services. This disables the deployment of the
externalip-validation-webhook-service
Kubernetes service, which means no
validation of external IP addresses is performed for Kubernetes services, and access is
allowed for all CIDR blocks.
olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--virtual-ip 192.0.2.100 \
--control-plane-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip false
Example 4-26 Creating a module for a cluster with a single control plane node
This example creates a Kubernetes module to deploy a Kubernetes cluster with a single
control plane node. The --module
option is set to
kubernetes
to create a Kubernetes module. This cluster contains one
control plane node and two worker nodes.
You must also include the location of the certificates for the
externalip-validation-webhook-service
Kubernetes service.
olcnectl module create \
--environment-name myenvironment \
--module kubernetes \
--name mycluster \
--container-registry container-registry.oracle.com/olcne \
--control-plane-nodes control1.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090 \
--selinux enforcing \
--restrict-service-externalip-ca-cert /etc/olcne/certificates/restrict_external_ip/ca.cert \
--restrict-service-externalip-tls-cert /etc/olcne/certificates/restrict_external_ip/node.cert \
--restrict-service-externalip-tls-key /etc/olcne/certificates/restrict_external_ip/node.key
Example 4-27 Creating a Calico module
This example creates a Calico module to set Calico as the pod networking CNI for a
Kubernetes cluster. This example uses a Kubernetes module named mycluster
and a Calico module named mycalico
. It uses an existing configuration file
named calico-config.yaml
to configure Calico.
olcnectl module create \
--environment-name myenvironment \
--module calico \
--name mycalico \
--calico-kubernetes-module mycluster \
--calico-installation-config calico-config.yaml
Example 4-28 Creating a Multus module
This example creates a Multus module to enable bridged networking to pods. The
--module
option is set to multus
to create a Multus
module. This example uses a Kubernetes module named mycluster
and a Multus
module named mymultus
.
olcnectl module create \
--environment-name myenvironment \
--module multus \
--name mymultus \
--multus-kubernetes-module mycluster
Example 4-29 Creating a module for a service mesh
This example creates a service mesh using the Istio module. The --module
option is set to istio
to create an Istio module. This example uses a
Kubernetes module named mycluster
and an Istio module named
myistio
.
If you don't include all the required options when adding the modules, you're prompted to provide them.
olcnectl module create \
--environment-name myenvironment \
--module istio \
--name myistio \
--istio-kubernetes-module mycluster
Example 4-30 Creating a module for Operator Lifecycle Manager
This example creates a module that can be used to install Operator Lifecycle Manager. The
--module
option is set to operator-lifecycle-manager
to
create an Operator Lifecycle Manager module. This example uses a Kubernetes module named
mycluster
and an Operator Lifecycle Manager module named
myolm
.
If you don't include all the required options when adding the modules, you're prompted to provide them.
olcnectl module create \
--environment-name myenvironment \
--module operator-lifecycle-manager \
--name myolm \
--olm-kubernetes-module mycluster
Example 4-31 Creating a module for Oracle Cloud Infrastructure
This example creates a module that creates a Kubernetes StorageClass provisioner to access
Oracle Cloud Infrastructure storage, to provision highly available application
load balancers. The --module
option is set to oci-ccm
to
create an Oracle Cloud Infrastructure Cloud Controller Manager module. This example uses a
Kubernetes module named mycluster
and an Oracle Cloud Infrastructure Cloud Controller Manager module named myoci
.
Provide the information required to access Oracle Cloud Infrastructure using the options as shown in this example, such as:
-
--oci-tenancy
-
--oci-compartment
-
--oci-user
-
--oci-fingerprint
-
--oci-private-key-file
-
--oci-vcn
-
--oci-lb-subnet1
-
--oci-lb-subnet2
You might need to provide more options to access Oracle Cloud Infrastructure, depending on the environment.
If you don't include all the required options when adding the modules, you're prompted to provide them.
olcnectl module create \
--environment-name myenvironment \
--module oci-ccm \
--name myoci \
--oci-ccm-kubernetes-module mycluster \
--oci-tenancy ocid1.tenancy.oc1..unique_ID \
--oci-compartment ocid1.compartment.oc1..unique_ID \
--oci-user ocid1.user.oc1..unique_ID \
--oci-fingerprint b5:52:... \
--oci-private-key-file /home/opc/.oci/oci_api_key.pem \
--oci-vcn ocid1.vcn.oc1..unique_ID \
--oci-lb-subnet1 ocid1.subnet.oc1..unique_ID \
--oci-lb-subnet2 ocid1.subnet.oc1..unique_ID
Module Install
Installs a module in an environment. When you install a module, the nodes are checked to ensure they're set up correctly to run the module. If the nodes aren't set up correctly, the commands required to fix each node are shown in the output and optionally saved to files.
Syntax
olcnectl module install
{-E|--environment-name} environment_name
{-N|--name} name
[{-g|--generate-scripts}]
[{-L|--log-level} type]
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-N|--name} name
-
The module name. The value of name is the name to use to identify a module in an environment.
-
{-g|--generate-scripts}
-
Generates a set of scripts which contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named
hostname:8090.sh
. -
{-L|--log-level} type
-
Sets the type of messages displayed by the Platform API Server. If you don't set this option,
error
messages are displayed by default. The options fortype
are:-
error
: Displays error messages. This is the default type. -
warn
: Displays warning messages. -
info
: Displays information messages. -
debug
: Displays all messages. This is the most detailed message level.
-
-
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
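As an illustration, installing a module might look like the following. The environment and module names (myenvironment, mycluster) follow the examples used elsewhere in this chapter; substitute your own names:

```shell
olcnectl module install \
--environment-name myenvironment \
--name mycluster
```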
Module Instances
Lists the installed modules in an environment.
Syntax
olcnectl module instances
{-E|--environment-name} environment_name
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
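For example, to list the modules installed in an environment (the environment name is illustrative):

```shell
olcnectl module instances \
--environment-name myenvironment
```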
Module List
Lists the available modules for an environment.
Syntax
olcnectl module list
{-E|--environment-name} environment_name
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
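For example, to list the modules available to an environment (the environment name is illustrative):

```shell
olcnectl module list \
--environment-name myenvironment
```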
Module Property Get
Lists the value of a module property.
Syntax
olcnectl module property get
{-E|--environment-name} environment_name
{-N|--name} name
{-P|--property} property_name
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-N|--name} name
-
The module name. The value of name is the name to use to identify a module in an environment.
-
{-P|--property} property_name
-
The name of the property. You can get a list of the available properties using the
olcnectl module property list
command. -
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
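For example, to read a single property from a module. The property name kubecfg is shown for illustration only; run olcnectl module property list first to see the properties a module actually exposes:

```shell
olcnectl module property get \
--environment-name myenvironment \
--name mycluster \
--property kubecfg
```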
Module Property List
Lists the available properties for a module in an environment.
Syntax
olcnectl module property list
{-E|--environment-name} environment_name
{-N|--name} name
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-N|--name} name
-
The module name. The value of name is the name to use to identify a module in an environment.
-
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
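For example, to list the available properties for a Kubernetes module (the environment and module names are illustrative):

```shell
olcnectl module property list \
--environment-name myenvironment \
--name mycluster
```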
Module Report
Reports summary and detailed information about modules and their properties in an environment.
Syntax
olcnectl module report
{-E|--environment-name} environment_name
[{-N|--name} name]
[--children]
[--exclude pattern]
[--include pattern]
[--format {yaml|table}]
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-N|--name} name
-
The module name. The value of name is the name to use to identify a module in an environment. When no name is specified, the output of the command contains information about all modules deployed in the selected environment.
-
--children
-
When added to the command, this option recursively displays the properties for all children of a module instance. The default value is
false
. -
--exclude pattern
-
An RE2 regular expression selecting the properties to exclude from the report. This option can specify more than one property as a comma-separated list.
-
--include pattern
-
An RE2 regular expression selecting the properties to include in the report. This option can specify more than one property as a comma-separated list. By default, all properties are displayed; using this option one or more times overrides this behavior.
-
--format {yaml|table}
-
Sets the format in which to generate the report, either YAML or table. The default format is
table
. -
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
Examples
Example 4-37 Reporting summary details about an environment
To report a summary of all modules deployed in the environment named
myenvironment
:
olcnectl module report \
--environment-name myenvironment
Example 4-38 Reporting summary details about a Kubernetes module
To report summary details about a Kubernetes module named mycluster
:
olcnectl module report \
--environment-name myenvironment \
--name mycluster
Example 4-39 Reporting comprehensive details about a Kubernetes module
To report comprehensive details about a Kubernetes module named
mycluster
:
olcnectl module report \
--environment-name myenvironment \
--name mycluster \
--children
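The --include and --exclude options can narrow a report to matching properties. For example, the following invocation limits the report to properties whose names match an RE2 pattern and prints the result as YAML. The pattern kube is purely illustrative; substitute any expression that matches the properties you care about:

```shell
olcnectl module report \
--environment-name myenvironment \
--name mycluster \
--include kube \
--format yaml
```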
Module Restore
Restores a module from a backup in an environment.
Syntax
olcnectl module restore
{-E|--environment-name} environment_name
{-N|--name} name
[{-g|--generate-scripts}]
[{-F|--force}]
[{-L|--log-level} type]
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-N|--name} name
-
The module name. The value of name is the name to use to identify a module in an environment.
-
{-g|--generate-scripts}
-
Generates a set of scripts which contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named
hostname:8090.sh
. -
{-F|--force}
-
Skips the confirmation prompt.
-
{-L|--log-level} type
-
Sets the type of messages displayed by the Platform API Server. If you don't set this option,
error
messages are displayed by default. The options fortype
are:-
error
: Displays error messages. This is the default type. -
warn
: Displays warning messages. -
info
: Displays information messages. -
debug
: Displays all messages. This is the most detailed message level.
-
-
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
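For example, restoring a Kubernetes module from a backup might be invoked as follows (the names are illustrative; adding --force skips the confirmation prompt):

```shell
olcnectl module restore \
--environment-name myenvironment \
--name mycluster \
--force
```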
Module Uninstall
Uninstalls a module from an environment. Uninstalling the module also removes the module configuration from the Platform API Server.
Syntax
olcnectl module uninstall
{-E|--environment-name} environment_name
{-N|--name} name
[{-F|--force}]
[{-L|--log-level} type]
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-N|--name} name
-
The module name. The value of name is the name to use to identify a module in an environment.
-
{-F|--force}
-
Skips the confirmation prompt.
-
{-L|--log-level} type
-
Sets the type of messages displayed by the Platform API Server. If you don't set this option,
error
messages are displayed by default. The options fortype
are:-
error
: Displays error messages. This is the default type. -
warn
: Displays warning messages. -
info
: Displays information messages. -
debug
: Displays all messages. This is the most detailed message level.
-
-
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
Examples
Example 4-41 Uninstalling a module
To uninstall a Kubernetes module named mycluster
from an environment
named myenvironment
:
olcnectl module uninstall \
--environment-name myenvironment \
--name mycluster
In this example, the Kubernetes containers are stopped and deleted on each node, and the Kubernetes cluster is removed.
Module Update
Updates a module in an environment. The module configuration is automatically retrieved from the Platform API Server. This command can be used to:
-
Update the Kubernetes release on nodes to the latest errata release.
-
Upgrade the Kubernetes release on nodes to the latest release.
-
Update or upgrade other modules and components.
-
Scale up a Kubernetes cluster (add control plane or worker nodes).
-
Scale down a Kubernetes cluster (remove control plane or worker nodes).
Important:
Before you update or upgrade the Kubernetes cluster, ensure you have updated or upgraded Oracle Cloud Native Environment to the latest release. For information on updating or upgrading Oracle Cloud Native Environment, see Updates and Upgrades.
Syntax
olcnectl module update
{-E|--environment-name} environment_name
{-N|--name} name
[{-g|--generate-scripts}]
[{-F|--force}]
[{-L|--log-level} type]
[{-h|--help}]
[module_args ...]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-N|--name} name
-
The module name. The value of name is the name to use to identify a module in an environment.
-
{-g|--generate-scripts}
-
Generates a set of scripts which contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named
hostname:8090.sh
. -
{-F|--force}
-
Skips the confirmation prompt.
-
{-L|--log-level} type
-
Sets the type of messages displayed by the Platform API Server. If you don't set this option,
error
messages are displayed by default. The options fortype
are:-
error
: Displays error messages. This is the default type. -
warn
: Displays warning messages. -
info
: Displays information messages. -
debug
: Displays all messages. This is the most detailed message level.
-
-
{-h|--help}
-
Lists information about the command and the available options.
Where module_args is:
The value of module_args is one or more arguments to update a module in an environment.
module_args for the kubernetes
module:
-
{-k|--kube-version}
version -
Sets the Kubernetes version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
If this option isn't provided, any Kubernetes errata updates are installed.
-
{-r|--container-registry} container_registry
-
The container registry that contains the Kubernetes images when performing an update or upgrade. Use the Oracle Container Registry or a local registry to pull the Kubernetes images.
This option lets you update or upgrade using a different container registry. This option sets the default container registry during all later updates or upgrades and need only be used when changing the default container registry.
-
{-m|--master-nodes} nodes ...
-
A comma-separated list of the hostnames or IP addresses of the Kubernetes control plane nodes that are to remain in or be added to the Kubernetes cluster, including the port number for the Platform Agent. Any control plane nodes not included in this list are removed from the cluster. The nodes in this list are the nodes that are to be included in the cluster.
The default port number for the Platform Agent is
8090
. For example,control1.example.com:8090,control2.example.com:8090
.Important:
This option is deprecated. Use the--control-plane-nodes
option instead. -
{-c|--control-plane-nodes} nodes ...
-
A comma-separated list of the hostnames or IP addresses of the Kubernetes control plane nodes that are to remain in or be added to the Kubernetes cluster, including the port number for the Platform Agent. Any control plane nodes not included in this list are removed from the cluster. The nodes in this list are the nodes that are to be included in the cluster.
The default port number for the Platform Agent is
8090
. For example,control1.example.com:8090,control2.example.com:8090
. -
{-w|--worker-nodes} nodes ...
-
A comma-separated list of the hostnames or IP addresses of the Kubernetes worker nodes that are to remain in or be added to the Kubernetes cluster, including the port number for the Platform Agent. Any worker nodes not included in this list are removed from the cluster. The nodes in this list are the nodes that are to be included in the cluster.
The default port number for the Platform Agent is
8090
. For example,worker1.example.com:8090,worker2.example.com:8090
. -
--compact {true|false}
-
Sets whether to allow non-system Kubernetes workloads to run on control plane nodes. The default is
false
.If you set this to
true
, the Platform API Server untaints control plane nodes if they're tainted. This lets non-system Kubernetes workloads be scheduled and run on control plane nodes.If you set this to
false
, the Platform API Server taints the control plane nodes if they're untainted. This prevents non-system Kubernetes workloads from being scheduled and run on control plane nodes.Important:
For production environments, this option must be set to
false
(the default). -
--nginx-image container_location
-
The location of the NGINX container image to update. This is optional.
This option pulls the NGINX container image from the container registry location you specify to update NGINX on the control plane nodes. For example:
--nginx-image container-registry.oracle.com/olcne/nginx:1.17.7
-
--helm-version version
-
Sets the Helm version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--restrict-service-externalip {true|false}
-
Sets whether to restrict access to external IP addresses for Kubernetes services. The default is
true
, which restricts access to external IP addresses.This option deploys a Kubernetes service named
externalip-validation-webhook-service
to validateexternalIPs
set in Kubernetes service configuration files. Access to any external IP addresses is set in a Kubernetes service configuration file using theexternalIPs
option in thespec
section. -
--restrict-service-externalip-ca-cert path
-
The path to a CA certificate file for the
externalip-validation-webhook-service
application that's deployed when the--restrict-service-externalip
option is set totrue
. For example,/etc/olcne/certificates/restrict_external_ip/ca.cert
. -
--restrict-service-externalip-tls-cert path
-
The path to the certificate for the
externalip-validation-webhook-service
application that's deployed when the--restrict-service-externalip
option is set totrue
. For example,/etc/olcne/certificates/restrict_external_ip/node.cert
. -
--restrict-service-externalip-tls-key path
-
The path to the private key for the
externalip-validation-webhook-service
application that's deployed when the--restrict-service-externalip
option is set totrue
. For example,/etc/olcne/certificates/restrict_external_ip/node.key
. -
--restrict-service-externalip-cidrs allowed_cidrs
-
Enter one or more comma-separated CIDR blocks to allow only IP addresses from the specified CIDR blocks. For example,
192.0.2.0/24,198.51.100.0/24
.
-
--selinux {enforcing|permissive}
-
Whether to use SELinux
enforcing
orpermissive
mode.permissive
is the default.Use this option if SELinux is set to
enforcing
on the control plane and worker nodes. SELinux is set toenforcing
mode by default on the OS and is the recommended mode.
module_args for the calico
module:
-
--calico-version version
-
Sets the Calico version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--calico-container-registry container_registry
-
The container registry that contains the Calico images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Calico images.
module_args for the multus
module:
-
--multus-version version
-
Sets the Multus version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--multus-container-registry container_registry
-
The container registry that contains the Multus images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Multus images.
module_args for the oci-ccm
module:
-
--oci-ccm-version version
-
Sets the Oracle Cloud Infrastructure Cloud Controller Manager version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--oci-container-registry container_registry
-
The container image registry to use when updating the Oracle Cloud Infrastructure cloud provisioner image. The default is an empty string. The Platform API Server decides the correct repository for the version of the Oracle Cloud Infrastructure Cloud Controller Manager module that's to be upgraded. Or, you can use a private registry.
-
--ccm-container-registry container_registry
-
The container image registry to use when updating the Oracle Cloud Infrastructure Cloud Controller Manager component images. The default is an empty string. The Platform API Server decides the correct repository for the version of the Oracle Cloud Infrastructure Cloud Controller Manager module that's to be upgraded. Or, you can use a private registry.
-
--oci-lb-subnet1 OCID
-
The OCID of the regional subnet for the Oracle Cloud Infrastructure load balancer.
Or, the OCID of the first subnet of the two required availability domain specific subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.
-
--oci-lb-subnet2 OCID
-
The OCID of the second subnet of the two subnets for the Oracle Cloud Infrastructure load balancer. The subnets must be in separate availability domains.
-
--oci-lb-security-mode {All|Frontend|None}
-
Sets whether the Oracle Cloud Infrastructure Cloud Controller Manager module manages security lists for load balancer services, and the configuration mode to use for security lists managed by the Kubernetes Cloud Controller Manager. The default is
None
.For information on the security modes, see the Kubernetes Cloud Controller Manager implementation for Oracle Cloud Infrastructure documentation.
module_args for the rook
module:
-
--rook-version version
-
Sets the Rook version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--rook-container-registry container_registry
-
The container image registry and optional tag to use when upgrading Rook. The default is
container-registry.oracle.com/olcne
.
module_args for the kubevirt
module:
-
--kubevirt-version version
-
Sets the KubeVirt version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--kubevirt-config path
-
The location of the file that contains the configuration information for KubeVirt, including the
kubevirt.io/v1/KubeVirt
object. This file must be on the operator node.
--kubevirt-container-registry container_registry
-
The container image registry and optional tag to use when upgrading KubeVirt. The default is
container-registry.oracle.com/olcne
.
module_args for the metallb
module:
-
--metallb-version version
-
Sets the MetalLB version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--metallb-container-registry container_registry
-
The container image registry and optional tag to use when upgrading MetalLB. The default is
container-registry.oracle.com/olcne
.
module_args for the operator-lifecycle-manager
module:
-
--olm-version version
-
Sets the Operator Lifecycle Manager version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
module_args for the ingress-nginx
module:
-
--ingress-nginx-version version
-
Sets the NGINX Ingress Controller version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--ingress-nginx-container-registry container_registry
-
The container registry that contains the NGINX Ingress Controller images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the NGINX Ingress Controller images.
module_args for the istio
module:
-
--istio-version version
-
Sets the Istio version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--istio-container-registry container_registry
-
The container registry that contains the Istio images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Istio images.
module_args for the prometheus
module:
-
--prometheus-version version
-
Sets the Prometheus version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--prometheus-container-registry container_registry
-
The container registry that contains the Prometheus images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Prometheus images.
module_args for the grafana
module:
-
--grafana-version version
-
Sets the Grafana version for the upgrade. The default is the latest version. For information on the latest version number, see Release Notes.
-
--grafana-container-registry container_registry
-
The container registry that contains the Grafana images when performing an update or upgrade. Use the Oracle Container Registry (the default) or a local registry to pull the Grafana images.
Where globals is one or more of the global options as described in Using Global Flags.
Examples
Example 4-42 Scaling a cluster
To scale up a cluster, list all nodes to be included in the
cluster. If an existing cluster includes two control plane and
two worker nodes, and you want to add a new control plane and
a new worker, list all the nodes to include. For example, to
add a control3.example.com
control plane
node, and a worker3.example.com
worker node
to a Kubernetes module named
mycluster
:
olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--control-plane-nodes control1.example.com:8090,control2.example.com:8090,control3.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090
To scale down a cluster, list all the nodes to be included in
the cluster. To remove the
control3.example.com
control plane node,
and worker3.example.com
worker node from
the kubernetes
module named
mycluster
:
olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--control-plane-nodes control1.example.com:8090,control2.example.com:8090 \
--worker-nodes worker1.example.com:8090,worker2.example.com:8090
As the control3.example.com
control plane node and
worker3.example.com
worker node aren't listed in the
--control-plane-nodes
and --worker-nodes
options, the
Platform API Server removes those nodes from the cluster.
Example 4-43 Updating the Kubernetes release for errata updates
To update a Kubernetes module named
mycluster
in an environment named
myenvironment
to the latest Kubernetes errata
release, enter:
olcnectl module update \
--environment-name myenvironment \
--name mycluster
The nodes in the environment are updated to the latest Kubernetes errata release.
Example 4-44 Updating using a different container registry
To update a Kubernetes module named
mycluster
in an environment named
myenvironment
to the latest Kubernetes errata
release using a different container registry than the default
specified when creating the Kubernetes module,
enter:
olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--container-registry container-registry-austin-mirror.oracle.com/olcne/
The nodes in the environment are updated to the latest Kubernetes errata release contained on the mirror container registry.
Example 4-45 Upgrading the Kubernetes release
To upgrade a Kubernetes module named mycluster
in an environment named
myenvironment
to Kubernetes Release 1.29.3,
enter:
olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--kube-version 1.29.3
The --kube-version
option specifies the release to which you want to
upgrade. This example uses release number 1.29.3.
Ensure you upgrade to the latest Kubernetes release. To get the version number of the latest Kubernetes release, see Release Notes.
The nodes in the environment are updated to Kubernetes Release 1.29.3.
Example 4-46 Upgrading using a different container registry
To upgrade a Kubernetes module named mycluster
in an environment named
myenvironment
to Kubernetes Release 1.29.3
using a different container registry than the current default container registry, enter:
olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--container-registry container-registry-austin-mirror.oracle.com/olcne/ \
--kube-version 1.29.3
The --kube-version
option specifies the release to which you want to
upgrade. This example uses release number 1.29.3.
The specified container registry becomes the new default container registry for all later
updates or upgrades.
Ensure you upgrade to the latest Kubernetes release. To get the version number of the latest Kubernetes release, see Release Notes.
The nodes in the environment are updated to Kubernetes 1.29.3.
Example 4-47 Setting access to external IP addresses for Kubernetes services
This example sets the range of external IP addresses that Kubernetes services can access.
olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--restrict-service-externalip-cidrs 192.0.2.0/24,198.51.100.0/24
Example 4-48 Changing SELinux settings
This example updates the configuration with the
Platform API Server that nodes in the Kubernetes cluster have SELinux
enforcing
mode enabled.
olcnectl module update \
--environment-name myenvironment \
--name mycluster \
--selinux enforcing
Module Validate
Validates a module for an environment. When you validate a module, the nodes are checked to ensure they're set up correctly to run the module. If the nodes aren't set up correctly, the commands required to fix each node are shown in the output and optionally saved to files.
Syntax
olcnectl module validate
{-E|--environment-name} environment_name
{-N|--name} name
[{-g|--generate-scripts}]
[{-L|--log-level} type]
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-N|--name} name
-
The module name. The value of name is the name to use to identify a module in an environment.
-
{-g|--generate-scripts}
-
Generates a set of scripts which contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named
hostname:8090.sh
. -
{-L|--log-level} type
-
Sets the type of messages displayed by the Platform API Server. If you don't set this option, error messages are displayed by default. The options for type are:
- error: Displays error messages. This is the default type.
- warn: Displays warning messages.
- info: Displays information messages.
- debug: Displays all messages. This is the most detailed message level.
-
-
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
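For example, to validate a Kubernetes module named mycluster in the environment named myenvironment, and generate scripts to fix any node setup errors (the names here match the ones used in this chapter's other examples):

```shell
olcnectl module validate \
--environment-name myenvironment \
--name mycluster \
--generate-scripts
```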
Module Version
Lists the latest module versions in the Oracle Container Registry that can be installed for the Oracle Cloud Native Environment release that's installed.
You can use the command before running olcnectl module update
to see which
versions of modules and their subcomponents can be installed.
The command parameters enable you to choose whether to print version information for one specific module and its subcomponents, or for all available modules.
Syntax
olcnectl module version
{-E|--environment-name} environment_name
[{--all | {-M|--module} module_name}]
[--print-subcomponents]
[{-o|--output} {pretty|yaml|json}]
[{-h|--help}]
[globals]
Where:
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{--all | {-M|--module} module_name}
-
Specifies whether to print version information for all available modules, or for the specified module only. Options are:
- --all: Prints versions for all available modules.
- {-M|--module} module_name: Prints only the version of the module named module_name.
-
--print-subcomponents
-
Specifies that versions of the module subcomponents are also to be included in the output. This is optional.
-
{-o|--output} {pretty|yaml|json}
-
Prints the output in the specified format. The format options are pretty (human readable), yaml, and json. The default is pretty. This is optional. -
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
Examples
Example 4-50 Printing latest versions for all available modules
To print the latest versions for all modules in the Oracle Container Registry that
can be installed for the environment named myenvironment
, run the
following:
olcnectl module version \
--environment-name myenvironment \
--all
Example 4-51 Printing latest version of a specific module in YAML format
To print the latest version of a specific module, such as
kubernetes
, available for the environment named
myenvironment
, in yaml
format, run the
following:
olcnectl module version \
--environment-name myenvironment \
--module kubernetes \
--output yaml
Example 4-52 Printing latest versions of a specific module and its subcomponents
To print the latest versions of a specific module, such as
kubernetes
, and its subcomponents, available for the
environment named myenvironment
, run the following:
olcnectl module version \
--environment-name myenvironment \
--module kubernetes \
--print-subcomponents
Node Install-Agent
Installs the Platform Agent software packages on Kubernetes nodes.
Syntax
olcnectl node install-agent
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
Where:
-
{-h|--help}
-
Lists information about the command and the available options.
-
{-n|--nodes} nodes
-
A comma separated list of the hostnames or IP addresses of nodes.
Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by
--remote-command
to connect to the host (by default,ssh
). If a node address resolves to the local system, all commands are run locally without using the remote command. -
{-R|--remote-command} remote-command
-
Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
ssh -i ~/.ssh/myidfile -l myuser
The default remote command is
ssh
. -
{-i|--ssh-identity-file} file_location
-
The location of the SSH identity file. If no value is specified, the OS defaults are used.
-
{-l|--ssh-login-name} username
-
The username to log in using SSH. The default is
opc
. -
--timeout minutes
-
The number of minutes to set for command timeouts. The default is
40
minutes.
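For example, to install the Platform Agent packages on two nodes over SSH (the host names, identity file, and login name are illustrative):

```shell
olcnectl node install-agent \
--nodes control1.example.com,worker1.example.com \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle
```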
Node Install-Api-Server
Installs the Platform API Server software packages on the operator node.
Syntax
olcnectl node install-api-server
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
Where:
-
{-h|--help}
-
Lists information about the command and the available options.
-
{-n|--nodes} nodes
-
A comma separated list of the hostnames or IP addresses of nodes.
Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by
--remote-command
to connect to the host (by default,ssh
). If a node address resolves to the local system, all commands are run locally without using the remote command. -
{-R|--remote-command} remote-command
-
Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
ssh -i ~/.ssh/myidfile -l myuser
The default remote command is
ssh
. -
{-i|--ssh-identity-file} file_location
-
The location of the SSH identity file. If no value is specified, the OS defaults are used.
-
{-l|--ssh-login-name} username
-
The username to log in using SSH. The default is
opc
. -
--timeout minutes
-
The number of minutes to set for command timeouts. The default is
40
minutes.
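For example, to install the Platform API Server packages on an operator node (the host name and login name are illustrative):

```shell
olcnectl node install-api-server \
--nodes operator.example.com \
--ssh-login-name oracle
```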
Node Install-Certificates
Installs the CA Certificates and key for the Platform API Server and Platform Agent to the nodes, with the appropriate file ownership.
The certificates and key:
- Are copied to /etc/olcne/certificates on the nodes.
- Have ownership of the certificate files (using chown) set to olcne:olcne.
- Have permissions of the certificate files (using chmod) set to 0440.
Syntax
olcnectl node install-certificates
[{-c|--certificate} path]
[{-C|--certificate-authority-chain} path]
[{-h|--help}]
[{-K|--key} path]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
Where:
-
{-c|--certificate} path
-
The path to the
node.cert
certificate. -
{-C|--certificate-authority-chain} path
-
The path to the
ca.cert
Certificate Authority certificate. -
{-h|--help}
-
Lists information about the command and the available options.
-
{-K|--key} path
-
The path to the
node.key
key. -
{-n|--nodes} nodes
-
A comma separated list of the hostnames or IP addresses of nodes.
Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by
--remote-command
to connect to the host (by default,ssh
). If a node address resolves to the local system, all commands are run locally without using the remote command. -
{-R|--remote-command} remote-command
-
Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
ssh -i ~/.ssh/myidfile -l myuser
The default remote command is
ssh
. -
{-i|--ssh-identity-file} file_location
-
The location of the SSH identity file. If no value is specified, the OS defaults are used.
-
{-l|--ssh-login-name} username
-
The username to log in using SSH. The default is
opc
. -
--timeout minutes
-
The number of minutes to set for command timeouts. The default is
40
minutes.
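The file layout these options point at mirrors the structure used by the certificates copy command: a ca.cert bundle plus a per-node node.cert and node.key. A minimal sketch of staging that layout, assuming an illustrative node named control1.example.com and the default certificates directory (the touch commands stand in for real key material):

```shell
# Stage the expected layout: <cert-dir>/ca.cert plus per-node cert and key.
mkdir -p certificates/control1.example.com
touch certificates/ca.cert
touch certificates/control1.example.com/node.cert
touch certificates/control1.example.com/node.key
# The installed copies end up mode 0440 (owner and group read only).
chmod 0440 certificates/control1.example.com/node.key
ls certificates/control1.example.com
```

With that layout in place, an invocation might pass --certificate-authority-chain certificates/ca.cert, --certificate certificates/control1.example.com/node.cert, and --key certificates/control1.example.com/node.key for --nodes control1.example.com.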
Node Setup-Kubernetes
Sets up nodes to prepare for an installation of the Kubernetes module.
Configures hosts so they can be used as either Kubernetes control plane or worker nodes. Performs operations such as configuring firewall rules and opening network ports. Before changes are made to the hosts, a prompt is displayed that lists the changes to be made and asks for confirmation.
Syntax
olcnectl node setup-kubernetes
{-a|--api-server} api-server-address
[--control-plane-ha-nodes nodes ]
[{-c|--control-plane-nodes} nodes]
[{-d|--debug}]
[{-h|--help}]
[{-m|--master-nodes} nodes] (Deprecated)
[{-n|--nodes} nodes]
[{-R|--remote-command} remote-command]
[{-r|--role} role]
[--selinux {permissive|enforcing}]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
[{-w|--worker-nodes} nodes]
[{-y|--yes}]
Where:
-
{-a|--api-server} api-server-address
-
The hostname or IP address of the Platform API Server host.
-
{-c|--control-plane-nodes} nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes. For example,
control1.example.com,control2.example.com
. -
--control-plane-ha-nodes nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes in a High Availability cluster. For example,
control1.example.com,control2.example.com,control3.example.com
. -
{-d|--debug}
-
Enable debug logging.
-
{-h|--help}
-
Lists information about the command and the available options.
-
{-m|--master-nodes} nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes. For example, control1.example.com,control2.example.com,control3.example.com.
Note: This argument has been deprecated. Use the --control-plane-nodes argument instead. -
{-n|--nodes} nodes
-
A comma separated list of the hostnames or IP addresses of nodes.
Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by
--remote-command
to connect to the host (by default,ssh
). If a node address resolves to the local system, all commands are run locally without using the remote command. -
{-R|--remote-command} remote-command
-
Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
ssh -i ~/.ssh/myidfile -l myuser
The default remote command is
ssh
. -
{-r|--role} role
-
The role of a host. The role is one of:
- api-server: The Platform API Server.
- control-plane: A Kubernetes control plane node in a cluster that has only one control plane node.
- control-plane-ha: A Kubernetes control plane node in a cluster that has more than one control plane node.
- worker: A Kubernetes worker node.
-
-
--selinux {permissive|enforcing}
-
Sets whether SELinux is to be set to
enforcing
orpermissive
. The default ispermissive
. -
{-i|--ssh-identity-file} file_location
-
The location of the SSH identity file. If no value is specified, the OS defaults are used.
-
{-l|--ssh-login-name} username
-
The username to log in using SSH. The default is
opc
. -
--timeout minutes
-
The number of minutes to set for command timeouts. The default is
40
minutes. -
{-w|--worker-nodes} nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes worker nodes. For example,
worker1.example.com,worker2.example.com
. -
{-y|--yes}
-
Sets whether to assume the answer to a confirmation prompt is affirmative (
yes
).
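For example, to prepare an HA control plane and two worker nodes, setting SELinux to enforcing mode and accepting the confirmation prompt (the host names are illustrative):

```shell
olcnectl node setup-kubernetes \
--api-server operator.example.com \
--control-plane-ha-nodes control1.example.com,control2.example.com,control3.example.com \
--worker-nodes worker1.example.com,worker2.example.com \
--selinux enforcing \
--yes
```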
Node Setup-Package-Repositories
Sets up the software package repositories on nodes.
Syntax
olcnectl node setup-package-repositories
[{-d|--debug}]
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
Where:
-
{-d|--debug}
-
Enable debug logging.
-
{-h|--help}
-
Lists information about the command and the available options.
-
{-n|--nodes} nodes
-
A comma separated list of the hostnames or IP addresses of nodes.
Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by
--remote-command
to connect to the host (by default,ssh
). If a node address resolves to the local system, all commands are run locally without using the remote command. -
{-R|--remote-command} remote-command
-
Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
ssh -i ~/.ssh/myidfile -l myuser
The default remote command is
ssh
. -
{-i|--ssh-identity-file} file_location
-
The location of the SSH identity file. If no value is specified, the OS defaults are used.
-
{-l|--ssh-login-name} username
-
The username to log in using SSH. The default is
opc
. -
--timeout minutes
-
The number of minutes to set for command timeouts. The default is
40
minutes.
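For example, to set up the software package repositories on the operator, control plane, and worker nodes in one pass (the host names and login name are illustrative):

```shell
olcnectl node setup-package-repositories \
--nodes operator.example.com,control1.example.com,worker1.example.com \
--ssh-login-name oracle
```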
Node Setup-Platform
Installs the Oracle Cloud Native Environment platform (Platform API Server and Platform Agent) and starts the platform services.
Configures, installs, and starts the Oracle Cloud Native Environment platform components on a set of hosts.
- Configures the yum software package repositories.
- Configures networking ports.
- Installs the Platform API Server and Platform Agent.
- Generates and installs CA Certificates.
- Starts the platform services (olcne-api-server.service and olcne-agent.service).
Syntax
olcnectl node setup-platform
{-a|--api-server} api-server-address
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[--control-plane-ha-nodes nodes ]
[--control-plane-nodes nodes]
[{-d|--debug}]
[{-h|--help}]
[--http-proxy proxy-server]
[--https-proxy proxy-server]
[--no-proxy no_proxy]
[{-n|--nodes} nodes]
[--one-cert]
[{-R|--remote-command} remote-command]
[{-r|--role} role]
[--selinux {permissive|enforcing}]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
[{-w|--worker-nodes} nodes]
[{-y|--yes}]
Where:
-
{-a|--api-server} api-server-address
-
The hostname or IP address of the Platform API Server host.
-
--byo-ca-cert certificate-path
-
The path to an existing public CA Certificate.
-
--byo-ca-key key-path
-
The path to an existing private key.
-
--cert-dir certificate-directory
-
The directory to read or write key material generated by this utility. The default is
<CURRENT_DIR>/certificates
. -
--cert-request-common-name common_name
-
The Certificate Common Name suffix. The default is
example.com
. -
--cert-request-country country
-
The two-letter country code of the company, for example, US for the United States, GB for the United Kingdom, and CN for China. The default is US. -
--cert-request-locality locality
-
The name of the city where the company is located. The default is
Redwood City
. -
--cert-request-organization organization
-
The name of the company. The default is
OLCNE
. -
--cert-request-organization-unit organization-unit
-
The name of the department within the company. The default is
OLCNE
. -
--cert-request-state state
-
The name of the state or province where the company is located. The default is
California
. -
--control-plane-ha-nodes nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes in a High Availability cluster. For example,
control1.example.com,control2.example.com,control3.example.com
. -
{-c|--control-plane-nodes} nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes. For example,
control1.example.com,control2.example.com
. -
{-d|--debug}
-
Enable debug logging.
-
{-h|--help}
-
Lists information about the command and the available options.
-
--http-proxy proxy-server
-
The location of the HTTP proxy server if required.
-
--https-proxy proxy-server
-
The location of the HTTPS proxy server if required.
-
--no-proxy no_proxy
-
The list of hosts to exclude from the proxy server settings.
-
{-n|--nodes} nodes
-
A comma separated list of the hostnames or IP addresses of nodes.
Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by
--remote-command
to connect to the host (by default,ssh
). If a node address resolves to the local system, all commands are run locally without using the remote command. -
--one-cert
-
Sets whether to generate a single certificate that can be used to authenticate all the hosts. By default this option isn't set.
-
{-R|--remote-command} remote-command
-
Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
ssh -i ~/.ssh/myidfile -l myuser
The default remote command is
ssh
. -
{-r|--role} role
-
The role of a host. The role is one of:
- api-server: The Platform API Server.
- control-plane: A Kubernetes control plane node in a cluster that has only one control plane node.
- control-plane-ha: A Kubernetes control plane node in a cluster that has more than one control plane node.
- worker: A Kubernetes worker node.
-
-
--selinux {permissive|enforcing}
-
Sets whether SELinux is to be set to
enforcing
orpermissive
. The default ispermissive
. -
{-i|--ssh-identity-file} file_location
-
The location of the SSH identity file. If no value is specified, the OS defaults are used.
-
{-l|--ssh-login-name} username
-
The username to log in using SSH. The default is
opc
. -
--timeout minutes
-
The number of minutes to set for command timeouts. The default is
40
minutes. -
{-w|--worker-nodes} nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes worker nodes. For example,
worker1.example.com,worker2.example.com
. -
{-y|--yes}
-
Sets whether to assume the answer to a confirmation prompt is affirmative (
yes
).
Examples
Example 4-57 Install the platform using default options
This example installs the Oracle Cloud Native Environment platform using default options.
olcnectl node setup-platform \
--api-server operator.example.com \
--control-plane-nodes control1.example.com,control2.example.com,control3.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com
Example 4-58 Install the platform for an HA deploy using default options
This example installs the Oracle Cloud Native Environment platform for a High Availability deployment using default options.
olcnectl node setup-platform \
--api-server operator.example.com \
--control-plane-ha-nodes control1.example.com,control2.example.com,control3.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com
Example 4-60 Install the Platform API Server
This example installs the Platform API Server and accepts all prompts.
olcnectl node setup-platform \
--nodes operator.example.com \
--role api-server \
--yes
Example 4-61 Install the Platform Agent on HA control plane nodes
This example installs the Platform Agent on Kubernetes control plane nodes for an HA install. This accepts all prompts.
olcnectl node setup-platform \
--nodes control1.example.com,control2.example.com,control3.example.com \
--role control-plane-ha \
--yes
Example 4-62 Install the Platform Agent on worker nodes with a proxy server and SSH login information
This example installs the Platform Agent on Kubernetes worker nodes. This uses proxy server information, provides SSH login information, and accepts all prompts.
olcnectl node setup-platform \
--nodes worker1.example.com,worker2.example.com,worker3.example.com \
--role worker \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--http-proxy "http://www-proxy.example.com:80" \
--https-proxy "https://www-proxy.example.com:80" \
--no-proxy ".example.com" \
--yes
Node Start-Platform
Starts the Platform API Server service on the operator node and the Platform Agent service on Kubernetes nodes.
Syntax
olcnectl node start-platform
[{-h|--help}]
{-n|--nodes} nodes
[{-R|--remote-command} remote-command]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
[{-y|--yes}]
Where:
-
{-h|--help}
-
Lists information about the command and the available options.
-
{-n|--nodes} nodes
-
A comma separated list of the hostnames or IP addresses of nodes.
Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by
--remote-command
to connect to the host (by default,ssh
). If a node address resolves to the local system, all commands are run locally without using the remote command. -
{-R|--remote-command} remote-command
-
Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
ssh -i ~/.ssh/myidfile -l myuser
The default remote command is
ssh
. -
{-i|--ssh-identity-file} file_location
-
The location of the SSH identity file. If no value is specified, the OS defaults are used.
-
{-l|--ssh-login-name} username
-
The username to log in using SSH. The default is
opc
. -
--timeout minutes
-
The number of minutes to set for command timeouts. The default is
40
minutes. -
{-y|--yes}
-
Sets whether to assume the answer to a confirmation prompt is affirmative (
yes
).
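For example, to start the platform services on the operator node and the Kubernetes nodes, accepting the confirmation prompt (the host names are illustrative):

```shell
olcnectl node start-platform \
--nodes operator.example.com,control1.example.com,worker1.example.com \
--yes
```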
Operation List
Lists the available operation logs.
Syntax
olcnectl operation list
[{-h|--help}]
[globals]
Where:
-
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
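For example, to list the available operation logs (global options, as described in Using Global Flags, can be added to target a specific Platform API Server):

```shell
olcnectl operation list
```

Each listed identifier can then be passed to olcnectl operation logs or olcnectl operation follow.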
Operation Logs
Displays the content of an operation log.
Syntax
olcnectl operation logs
{{-I|--operation-id} log_id}
[{-h|--help}]
[globals]
Where:
-
{-I|--operation-id} log_id
-
The identifier for an operation.
-
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
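For example, to display the log for an operation whose identifier, as shown by olcnectl operation list, is represented here by the placeholder log_id:

```shell
olcnectl operation logs \
--operation-id log_id
```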
Operation Follow
Displays the content of an operation log as it's generated and after the operation is completed.
Syntax
olcnectl operation follow
{{-I|--operation-id} log_id}
[{-g|--generate-scripts}]
[{-h|--help}]
[globals]
Where:
-
{-I|--operation-id} log_id
-
The identifier for an operation.
-
{-g|--generate-scripts}
-
Generates a set of scripts which contain the commands required to fix any set up errors for the nodes in a module. A script is created for each node in the module, saved to the local directory, and named
hostname:8090.sh
. -
{-h|--help}
-
Lists information about the command and the available options.
Where globals is one or more of the global options as described in Using Global Flags.
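For example, to stream the log of an in-progress operation and generate scripts to fix any node setup errors it reports (log_id is a placeholder for an identifier from olcnectl operation list):

```shell
olcnectl operation follow \
--operation-id log_id \
--generate-scripts
```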
Provision
Sets up the nodes and installs the Oracle Cloud Native Environment platform (Platform API Server and Platform Agents), including creating and installing the Kubernetes module.
This command configures all nodes, creates and distributes certificates and keys, installs the Platform API Server and Platform Agents, starts the required system services, and creates and installs an instance of the Kubernetes module. This provides a quick installation of Oracle Cloud Native Environment with a Kubernetes cluster.
When you run this command, a prompt is displayed that lists the changes to be made to the
hosts and asks for confirmation. To avoid this prompt, use the --yes
option.
More complex deployments can be made by using a configuration file with the
--config-file
option.
Syntax
olcnectl provision
{-a|--api-server} api-server-address
[--byo-ca-cert certificate-path]
[--byo-ca-key key-path]
[--cert-dir certificate-directory]
[--cert-request-common-name common_name]
[--cert-request-country country]
[--cert-request-locality locality]
[--cert-request-organization organization]
[--cert-request-organization-unit organization-unit]
[--cert-request-state state]
[--config-file config-file-path]
[--container-registry registry]
{-c|--control-plane-nodes} nodes
[{-d|--debug}]
{-E|--environment-name} environment_name
[{-h|--help}]
[--http-proxy proxy-server]
[--https-proxy proxy-server]
[--load-balancer load-balancer]
{-m|--master-nodes} nodes (Deprecated)
{-N|--name} name
[--no-proxy no_proxy]
[{-n|--nodes} nodes]
[--one-cert]
[{-R|--remote-command} remote-command]
[--restrict-service-externalip-cidrs allowed_cidrs]
[--selinux {permissive|enforcing}]
[{-i|--ssh-identity-file} file_location]
[{-l|--ssh-login-name} username]
[--timeout minutes]
[--virtual-ip IP_address]
{-w|--worker-nodes} nodes
[{-y|--yes}]
Where:
-
{-a|--api-server} api-server-address
-
The hostname or IP address of the Platform API Server host.
-
--byo-ca-cert certificate-path
-
The path to an existing public CA Certificate.
-
--byo-ca-key key-path
-
The path to an existing private key.
-
--cert-dir certificate-directory
-
The directory to read or write key material generated by this utility. The default is
<CURRENT_DIR>/certificates
. -
--cert-request-common-name common_name
-
The Certificate Common Name suffix. The default is
example.com
. -
--cert-request-country country
-
The two-letter country code of the company, for example, US for the United States, GB for the United Kingdom, and CN for China. The default is US. -
--cert-request-locality locality
-
The name of the city where the company is located. The default is
Redwood City
. -
--cert-request-organization organization
-
The name of the company. The default is
OLCNE
. -
--cert-request-organization-unit organization-unit
-
The name of the department within the company. The default is
OLCNE
. -
--cert-request-state state
-
The name of the state or province where the company is located. The default is
California
. -
--config-file config-file-path
-
The path and location of an Oracle Cloud Native Environment configuration file.
-
--container-registry registry
-
The container registry from which to pull the Kubernetes container images. The default is
container-registry.oracle.com/olcne
. -
{-c|--control-plane-nodes} nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes. For example,
control1.example.com,control2.example.com
. -
{-d|--debug}
-
Enable debug logging.
-
{-E|--environment-name} environment_name
-
The Oracle Cloud Native Environment. The value of environment_name is the name to use to identify an environment.
-
{-h|--help}
-
Lists information about the command and the available options.
-
--http-proxy proxy-server
-
The location of the HTTP proxy server if required.
-
--https-proxy proxy-server
-
The location of the HTTPS proxy server if required.
-
--load-balancer load-balancer
-
The location of the external load balancer if required.
-
{-m|--master-nodes} nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes control plane nodes. For example, control1.example.com,control2.example.com,control3.example.com.
Note: This argument has been deprecated. Use the --control-plane-nodes argument instead. -
{-N|--name} name
-
The module name. The value of name is the name to use to identify a module in an environment.
-
--no-proxy no_proxy
-
The list of hosts to exclude from the proxy server settings.
-
{-n|--nodes} nodes
-
A comma separated list of the hostnames or IP addresses of nodes.
Sets the nodes on which to perform an action. Any nodes that aren't the local node use the command indicated by
--remote-command
to connect to the host (by default,ssh
). If a node address resolves to the local system, all commands are run locally without using the remote command. -
--one-cert
-
Sets whether to generate a single certificate that can be used to authenticate all the hosts. By default this option isn't set.
-
{-R|--remote-command} remote-command
-
Opens a connection to a remote host. The address of the host and the command are appended to the end of this command. For example:
ssh -i ~/.ssh/myidfile -l myuser
The default remote command is
ssh
. -
--restrict-service-externalip-cidrs allowed_cidrs
-
Enter one or more comma separated CIDR blocks to allow only IP addresses from the specified CIDR blocks. For example,
192.0.2.0/24,198.51.100.0/24
. -
--selinux {permissive|enforcing}
-
Sets whether SELinux is to be set to
enforcing
orpermissive
. The default ispermissive
. -
{-i|--ssh-identity-file} file_location
-
The location of the SSH identity file. If no value is specified, the OS defaults are used.
-
{-l|--ssh-login-name} username
-
The username to log in using SSH. The default is
opc
. -
--timeout minutes
-
The number of minutes to set for command timeouts. The default is
40
minutes. -
--virtual-ip IP_address
-
The virtual IP address to use for the internal load balancer.
-
{-w|--worker-nodes} nodes
-
A comma separated list of the hostnames or IP addresses of the Kubernetes worker nodes. For example,
worker1.example.com,worker2.example.com
. -
{-y|--yes}
-
Sets whether to assume the answer to a confirmation prompt is affirmative (
yes
).
Examples
Example 4-67 Quick install
To perform a quick install:
olcnectl provision \
--api-server operator.example.com \
--control-plane-nodes control1.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster
Example 4-68 Quick install with SSH log in information
To perform a quick install using SSH log in information and accepting all prompts:
olcnectl provision \
--api-server operator.example.com \
--control-plane-nodes control1.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--yes
Example 4-69 Quick HA install with an external load balancer
To perform a quick HA install using an external load balancer and accepting all prompts:
olcnectl provision \
--api-server operator.example.com \
--control-plane-nodes control1.example.com,control2.example.com,control3.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--load-balancer lb.example.com:6443 \
--yes
Example 4-70 Quick install using a proxy server
To perform a quick install using SSH log in information and a proxy server, and accepting all prompts:
olcnectl provision \
--api-server operator.example.com \
--control-plane-nodes control1.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--http-proxy "http://www-proxy.example.com:80" \
--https-proxy "https://www-proxy.example.com:80" \
--no-proxy ".example.com" \
--yes
Example 4-71 Quick install with externalIPs service
To perform a quick install that includes the externalIPs
Kubernetes
service, specify the CIDR blocks to be allowed access to the service. This deploys the
externalip-validation-webhook-service
Kubernetes service:
olcnectl provision \
--api-server operator.example.com \
--control-plane-nodes control1.example.com \
--worker-nodes worker1.example.com,worker2.example.com,worker3.example.com \
--environment-name myenvironment \
--name mycluster \
--restrict-service-externalip-cidrs 192.0.2.0/24,198.51.100.0/24
Example 4-72 Quick install using a configuration file
To perform a quick install using a configuration file and SSH log in information:
olcnectl provision \
--config-file myenvironment.yaml \
--ssh-identity-file ~/.ssh/id_rsa \
--ssh-login-name oracle \
--yes