Microservice Cluster Setup
Before you can deploy microservices, you must set up Kubernetes clusters. To minimize the complexity of creating and maintaining Kubernetes clusters, Unified Assurance provides the clusterctl application, which operates as a frontend to the Rancher Kubernetes Engine (RKE) command line tool and provides an opinionated setup configuration.
Before setting up a cluster:
-
Review the architecture and components described in Understanding Microservices in Unified Assurance Concepts.
-
Ensure your system meets the Linux prerequisites and that you have opened the ports in your firewall to enable communication between servers as described in Opening Ports in Unified Assurance Installation Guide.
Setting up a microservice cluster involves the following steps:
-
Setting Environment Variables
-
Installing Roles on Servers
-
Creating SSH Keys
-
Confirming the SELinux Configuration
-
Creating Clusters or, for redundancy, Creating Redundant Clusters
-
Updating the Helm Repository
-
Installing Helm Packages to Deploy Microservices
You can optionally customize the cluster configuration file as described in Customizing the Cluster Configuration File.
See Troubleshooting for tips on using Kubernetes commands to get information about your Kubernetes pods and microservices.
Setting Environment Variables
Most of the commands for setting up clusters must be run as the root user. Before running them, set the LD_LIBRARY_PATH environment variable as the root user by running either of the following commands:
-
export LD_LIBRARY_PATH=$A1BASEDIR/lib:$A1BASEDIR/lib/private
-
source $A1BASEDIR/.bashrc
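You can confirm that the variable is set by echoing it:
echo $LD_LIBRARY_PATH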
Installing Roles on Servers
You must install the Cluster.Master and Cluster.Worker roles on one or more servers. For single-server development systems, install both roles. For production systems, each data center or availability zone should have at least three servers with the Cluster.Master role. These servers can also have the Cluster.Worker role, depending on the resources available.
Run the following commands as the root user to install the roles:
-
Install both the Cluster.Master and Cluster.Worker roles by using the Cluster meta-role:
$A1BASEDIR/bin/Package install-role Cluster
-
Install only the Cluster.Master role:
$A1BASEDIR/bin/Package install-role Cluster.Master
-
Install only the Cluster.Worker role:
$A1BASEDIR/bin/Package install-role Cluster.Worker
Note:
You can install roles on new servers when you run SetupWizard by specifying the roles in the --Roles option.
Creating SSH Keys
You enable access between servers in the same Unified Assurance instance by creating SSH keys.
On each server other than the primary presentation server, run the following command as the assure1 user:
$A1BASEDIR/bin/CreateSSLCertificate --Type SSH
Confirming the SELinux Configuration
When SELinux is enabled, the $A1BASEDIR/var/rke directory must exist and have the SELinux context type rke_opt_t.
Confirm that the directory exists with the correct context by running the following command from the $A1BASEDIR directory:
ls -ldZ var/rke
The output should include rke_opt_t, similar to the following:
drwxr-xr-x. 4 root root unconfined_u:object_r:rke_opt_t:s0 <date_and_time> var/rke
The directory may have been removed by the clusterctl clean command when recreating a misconfigured cluster.
If the directory is missing or its context type does not match, recreate the directory and restore the SELinux context by running the following commands as the root user:
mkdir $A1BASEDIR/var/rke
restorecon -R -v $A1BASEDIR/var/rke
Creating Clusters
You use the clusterctl command line application to manage clusters. The application determines which servers belong to each cluster based on the Cluster.Master and Cluster.Worker roles, and whether those servers are already associated with an existing cluster.
To create a cluster, run the following command as the root user:
$A1BASEDIR/bin/cluster/clusterctl create <cluster_name>
In the command, <cluster_name> is the name of the cluster. Use a name relevant to the servers being added to the cluster. For example:
-
PrimaryPresentationCluster
-
RedundantPresentationCluster
-
PrimaryCollectionCluster
-
RedundantCollectionCluster
The default namespaces are automatically created and added to the cluster. Unless you specify otherwise, this includes the a1-zone1-pri device zone and namespace. Optionally:
-
To create a cluster for an additional device zone, add the --zone X option, where X is a unique number for the new zone. clusterctl automatically creates the relevant zone and namespace.
For example, if you create a cluster for zone 2 with the following command, a zone and namespace named a1-zone2-pri will automatically be created along with the cluster:
$A1BASEDIR/bin/cluster/clusterctl create PrimaryZone2Cluster --zone 2
-
To create a cluster for a component where zones are not needed, such as Vision, add the --no-zone option. All the default namespaces are created except the zone namespace. See Namespaces in Unified Assurance Concepts for information about the default namespaces.
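For example, the following command creates a cluster for Vision without a zone namespace. The cluster name is only illustrative:
$A1BASEDIR/bin/cluster/clusterctl create VisionCluster --no-zone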
Creating Redundant Clusters
By default, clusterctl adds all available servers with roles to a single cluster. You can instead create multiple redundant clusters by specifying the hosts to add to each cluster in the create commands.
To create redundant clusters, as the root user:
-
Create the primary cluster by running the following command, replacing the example hosts with your hosts:
$A1BASEDIR/bin/cluster/clusterctl create <primary_cluster_name> --host cluster-pri1.example.com --host cluster-pri2.example.com --host cluster-pri3.example.com
-
Create the redundant cluster by running the following command, replacing the example hosts with your hosts:
$A1BASEDIR/bin/cluster/clusterctl create <secondary_cluster_name> --host cluster-sec1.example.com --host cluster-sec2.example.com --host cluster-sec3.example.com --secondary
While creating the cluster, the a1-zone1-sec device zone and namespace are automatically created and added to the secondary cluster.
-
Combine the clusters into a redundant pair by running one of the following commands:
-
On a server in the primary or secondary cluster:
$A1BASEDIR/bin/cluster/clusterctl join --primaryCluster <PrimaryHostFQDN> --secondaryCluster <SecondaryHostFQDN>
-
On a server outside the cluster, add the --repo flag to specify the primary presentation server's FQDN:
$A1BASEDIR/bin/cluster/clusterctl join --primaryCluster <PrimaryHostFQDN> --secondaryCluster <SecondaryHostFQDN> --repo <PrimaryPresentationWebFQDN>
Tip:
Add the --debug option to the commands to show additional information about the cluster joining process.
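For example, the following command, run from a server in the primary or secondary cluster, joins the two clusters created in steps 1 and 2, assuming cluster-pri1.example.com and cluster-sec1.example.com are hosts in the primary and secondary clusters, respectively:
$A1BASEDIR/bin/cluster/clusterctl join --primaryCluster cluster-pri1.example.com --secondaryCluster cluster-sec1.example.com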
Detaching Redundant Clusters
To remove a redundant pairing relationship between the cluster containing the current server and its redundant pair, run the following command as the root user from one of the servers in the cluster:
$A1BASEDIR/bin/cluster/clusterctl detach
The command automatically identifies which cluster pair to detach based on the cluster association of the server host where the command was run.
Updating the Helm Repository
On at least one primary server in the cluster, update the Helm repository by running the following commands as the assure1 user:
export WEBFQDN=<Primary Presentation Web FQDN>
a1helm repo update
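Optionally, confirm that the repository is configured by listing the repositories. This assumes that a1helm supports the standard Helm repo subcommands:
a1helm repo list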
Installing Helm Packages to Deploy Microservices
After setting up the cluster and updating the Helm repository, you are ready to deploy your microservices by installing Helm packages. You deploy multiple microservices in your cluster to accomplish a common goal as part of a microservice pipeline. The Prometheus and KEDA microservices are automatically deployed to the a1-monitoring namespace when you create the cluster.
You can install Helm packages to deploy microservices by using the command line or the Unified Assurance user interface.
Helm packages are installed as releases, which can have unique names. Oracle recommends following the default convention of giving the release the same name as the Helm chart. When installing each Helm chart, you define the location of the Docker registry and the namespace to install in. You can use default configurations, or set additional configurations during installation, depending on the options provided for each chart.
If you are using redundant clusters, you can configure some microservices for redundancy by deploying pairs of microservices on each cluster.
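For example, an installation command generally takes the following form. The repository name (assure1), the global.imageRegistry parameter, and the placeholders are illustrative; see the documentation for the microservice you are deploying for its actual chart name, namespace, and configuration options:
a1helm install <release_name> assure1/<chart_name> -n <namespace> --set global.imageRegistry=$WEBFQDN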
See the following topics for more information:
-
Managing Microservices for general information about deploying, updating, and undeploying microservices, and configuring microservice redundancy.
-
The documentation for each microservice for specific configuration requirements and options.
-
Understanding Microservice Pipelines in Unified Assurance Concepts for general information about Unified Assurance microservice pipelines.
-
Understanding the Event Pipeline in Unified Assurance Concepts for a description of a pipeline that includes multiple microservices.
Customizing the Cluster Configuration File
You can optionally customize the configuration file that is used when creating clusters. You can do this before or after creating the cluster.
Making Customizations Before Creating a Cluster
Before creating a new cluster:
-
Update the $A1BASEDIR/etc/rke/cluster-tmpl.yml template file.
For example, if you want to change the maximum file size for the Vision ingress controller, locate the proxy-body-size setting under ingress and update the value, as shown in the example after these steps.
-
Create the cluster as described in Creating Clusters.
The clusterctl create command uses the customized file to create the cluster.
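For example, the ingress section of the template might look like the following after the change. The 100m value is only illustrative, and the surrounding structure may differ in your template file:
ingress:
  provider: nginx
  options:
    proxy-body-size: "100m"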
Making Customizations to an Existing Cluster
To make customizations in a cluster that already exists:
-
Update the $A1BASEDIR/etc/rke/cluster.yml file as needed.
-
From one of the servers in the cluster, run the following command as the root user:
$A1BASEDIR/bin/cluster/clusterctl upgrade
Troubleshooting
Helm deployments and the associated Kubernetes pods, services, and other components can fail to initialize or crash unexpectedly.
To help troubleshoot these issues, you can run the following commands as the assure1 user:
-
To see information about all running pods, including pod names and namespaces that you can use in other commands:
a1k get pods --all-namespaces
-
To describe a pod to get events if it fails to start:
a1k describe pod <pod_name> -n <namespace>
where <pod_name> is the name of the pod you want to describe and <namespace> is the namespace the pod is running in. You can get these by running the a1k get pods command.
-
To get and tail logs of a running pod:
a1k logs <pod_name> -n <namespace> -f
-
To list all running microservices across all namespaces:
a1helm list --all-namespaces
-
To uninstall a microservice:
a1helm uninstall <microservice_release_name> -n <namespace>
where <microservice_release_name> is the release name for the microservice you are uninstalling. You can get the exact name by running the a1helm list command.