Microservice Cluster Setup

Before you can deploy microservices, you must set up Kubernetes clusters. To minimize the complexity of creating and maintaining Kubernetes clusters, Unified Assurance provides the clusterctl application, which operates as a frontend to the Rancher Kubernetes Engine (RKE) command line tool and provides an opinionated setup configuration.

Setting up a microservice cluster involves the following steps:

  1. Setting Environment Variables

  2. Installing Roles on Servers

  3. Creating SSH Keys

  4. Confirming the SELinux Configuration

  5. Creating Clusters

  6. Updating the Helm Repository

  7. Installing Helm Packages to Deploy Microservices

You can optionally customize the cluster configuration file as described in Customizing the Cluster Configuration File.

See Troubleshooting for tips on using Kubernetes commands to get information about your Kubernetes pods and microservices.

Setting Environment Variables

Most of the commands for setting up clusters must be run as the root user. Before running the commands, set the LD_LIBRARY_PATH environment variable in the root user's shell.
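
For example, in a Bash shell, the variable would typically point at the Unified Assurance library directory (the exact path here is an assumption; adjust it to match your installation):

export LD_LIBRARY_PATH=$A1BASEDIR/lib    # assumed library location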

Installing Roles on Servers

You must install the Cluster.Master and Cluster.Worker roles on one or more servers. For single-server development systems, install both roles. For production systems, each data center or availability zone should have at least three servers with the Cluster.Master role. These servers can also have the Cluster.Worker role, depending on the resources available.

Run the following commands as the root user to install the roles:

Creating SSH Keys

You enable access between servers in the same Unified Assurance instance by creating SSH keys.

On each server other than the primary presentation server, run the following command as the assure1 user:

$A1BASEDIR/bin/CreateSSLCertificate --Type SSH

Confirming the SELinux Configuration

When SELinux is enabled, the $A1BASEDIR/var/rke directory must exist and have the SELinux context type rke_opt_t.

Confirm that the directory exists with the correct context by running the following command from the $A1BASEDIR directory:

ls -ldZ var/rke

The output should include rke_opt_t, similar to the following:

drwxr-xr-x. 4 root root unconfined_u:object_r:rke_opt_t:s0 <date_and_time> var/rke

The directory may have been removed by the clusterctl clean command when recreating a misconfigured cluster.

If the directory is missing or the context type does not match, recreate it and reset the configuration by running the following commands as the root user:

mkdir $A1BASEDIR/var/rke
restorecon -R -v $A1BASEDIR/var/rke 
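
After recreating the directory, you can rerun the earlier check to confirm that the rke_opt_t context has been restored:

ls -ldZ $A1BASEDIR/var/rke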

Creating Clusters

You use the clusterctl command line application to manage clusters. The application determines which servers belong to each cluster based on their Cluster.Master and Cluster.Worker roles, and on whether those servers are already associated with an existing cluster.

To create a cluster, run the following command as the root user:

$A1BASEDIR/bin/cluster/clusterctl create <cluster_name>

In the command, <cluster_name> is the name of the cluster. Use a name relevant to the servers being added to the cluster. For example:
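
A cluster for the primary zone servers might be created with a command such as the following (the cluster name is purely illustrative):

$A1BASEDIR/bin/cluster/clusterctl create zone1-primary    # "zone1-primary" is an example name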

The default namespaces are automatically created and added to the cluster. Unless you specify otherwise, this includes the a1-zone1-pri device zone and namespace. Optionally, you can create redundant clusters or detach an existing redundant pair, as described in the following sections.

Creating Redundant Clusters

By default, clusterctl adds all available servers with roles to a single cluster. You can instead create multiple redundant clusters by specifying the hosts to add to each cluster in the create commands.

To create redundant clusters, as the root user:

  1. Create the primary cluster by running the following command, replacing the example hosts with your hosts:

    $A1BASEDIR/bin/cluster/clusterctl create <primary_cluster_name> --host cluster-pri1.example.com --host cluster-pri2.example.com --host cluster-pri3.example.com
    
  2. Create the redundant cluster by running the following command, replacing the example hosts with your hosts:

    $A1BASEDIR/bin/cluster/clusterctl create <secondary_cluster_name> --host cluster-sec1.example.com --host cluster-sec2.example.com --host cluster-sec3.example.com --secondary
    

    When the secondary cluster is created, the a1-zone1-sec device zone and namespace are automatically created and added to it.

  3. Combine the clusters into a redundant pair by running one of the following commands:

    • On a server in the primary or secondary cluster:

      $A1BASEDIR/bin/cluster/clusterctl join --primaryCluster <PrimaryHostFQDN> --secondaryCluster <SecondaryHostFQDN>
      
    • On a server outside the cluster, add the --repo flag to specify the primary presentation server's FQDN:

      $A1BASEDIR/bin/cluster/clusterctl join --primaryCluster <PrimaryHostFQDN> --secondaryCluster <SecondaryHostFQDN> --repo <PrimaryPresentationWebFQDN>
      

    Tip:

    Add the --debug option to the commands to show additional information about the cluster joining process.

Detaching Redundant Clusters

To remove the redundant pairing between the current server's cluster and its paired cluster, run the following command as the root user from one of the servers in the cluster:

$A1BASEDIR/bin/cluster/clusterctl detach

The command automatically identifies which cluster pair to detach based on the cluster association of the server host where the command was run.

Updating the Helm Repository

On at least one primary server in the cluster, update the Helm repository by running the following commands as the assure1 user:

export WEBFQDN=<Primary Presentation Web FQDN> 
a1helm repo update 
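
To confirm that the charts are visible after the update, you can optionally search the repository (this assumes that a1helm passes standard Helm arguments through; the repository name is a placeholder):

a1helm search repo <repository_name>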

Installing Helm Packages to Deploy Microservices

After setting up the cluster and updating the Helm repository, you are ready to deploy your microservices by installing Helm packages. You deploy multiple microservices in your cluster to accomplish a common goal as part of a microservice pipeline. The Prometheus and KEDA microservices are automatically deployed to the a1-monitoring namespace when you create the cluster.

You can install Helm packages to deploy microservices by using the command line or the Unified Assurance user interface.

Helm packages are installed as releases, which can have unique names. Oracle recommends following the default convention of giving the release the same name as the Helm chart. When installing each Helm chart, you define the location of the Docker registry and the namespace to install in. You can use default configurations, or set additional configurations during installation, depending on the options provided for each chart.
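
As an illustration of the pattern only (the names and the registry parameter shown here are placeholders, not values from this document), an installation command follows standard Helm syntax:

a1helm install <release_name> <repository>/<chart_name> --namespace <namespace> --set <registry_parameter>=<registry_location>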

If you are using redundant clusters, you can configure some microservices for redundancy by deploying pairs of microservices on each cluster.

Customizing the Cluster Configuration File

You can optionally customize the configuration file that is used when creating clusters. You can do this before or after creating the cluster.

Making Customizations Before Creating a Cluster

Before creating a new cluster:

  1. Update the $A1BASEDIR/etc/rke/cluster-tmpl.yml template file.

    For example, if you want to change the maximum file size for the Vision ingress controller, locate the proxy-body-size setting under ingress, and update the value (see the sketch after these steps).

  2. Create the cluster as described in Creating Clusters.

    The clusterctl create command uses the customized file to create the cluster.
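
A minimal sketch of the relevant fragment of the template, assuming the nginx ingress provider used by RKE (the surrounding keys and the example value are illustrative):

ingress:
  provider: nginx            # provider name is an assumption
  options:
    proxy-body-size: 100m    # example value; set the maximum request size you need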

Making Customizations to an Existing Cluster

To make customizations in a cluster that already exists:

  1. Update the $A1BASEDIR/etc/rke/cluster.yml file as needed.

  2. From one of the servers in the cluster, run the following command as the root user:

    $A1BASEDIR/bin/cluster/clusterctl upgrade
    

Troubleshooting

Helm deployments and the associated Kubernetes pods, services, and other components can fail to initialize or crash unexpectedly.

To help troubleshoot these issues, you can run the following commands as the assure1 user:
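
For example, standard Kubernetes and Helm commands can show pod status, events, logs, and installed releases (these specific commands are illustrative; kubectl access for the assure1 user is assumed):

kubectl get pods --all-namespaces                 # list pod status across all namespaces
kubectl describe pod <pod_name> -n <namespace>    # show events for a failing pod
kubectl logs <pod_name> -n <namespace>            # view container logs
a1helm list --all-namespaces                      # list installed Helm releases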