Managing Microservices
You can manage microservices in the Unified Assurance user interface (UI) or using the command line.
To access microservices in the UI, from the Configuration menu, select Microservices, then select either Installed to see which microservices are deployed, or Helmcharts to see which microservices are available to deploy.
When managing microservices in the command line, a1k is an alias for the standard kubectl commands and a1helm is an alias for the standard helm commands. You must run a1k and a1helm commands as the assure1 user. See the Helm and Kubernetes documentation for details about the commands and options beyond what is described in this topic.
Setting Up Authentication
Some microservices integrate with and collect data from external systems, such as Element Management Systems (EMS), messaging buses, and external APIs. These microservices require additional authentication settings to enable this external communication, which can include OAuth, SSL, and Kubernetes secret configuration.
The documentation for each of the following microservices specifies the additional authentication settings required:
- ActiveMQ Bridge: If you are using custom certificates or credentials, generate file-based Kubernetes secrets.
- Ciena MCP Poller: Generate a Kubernetes secret for the Ciena MCP password.
- Cisco Meraki Poller: Generate a Kubernetes secret for the Cisco Meraki API token.
- CORBA Collector: When your EMS uses SSL, generate a Kubernetes secret for the SSL certificate bundle of your EMS and specify it in a configuration parameter when you deploy the microservice. The microservice also has a configuration setting for enabling and disabling SSL, which is enabled by default.
- Kafka Bridge: Generate Kubernetes secrets depending on the authentication mechanism of the Kafka server you are connecting to, and set the corresponding configuration settings when deploying the microservice to support those secrets.
- Mist Poller: Generate a Kubernetes secret for the Juniper Mist API access token.
- Netapp Poller: Generate a file-based Kubernetes secret. You can use multiple files to create secrets for multiple Netapp instances.
- ServiceNow Adapter: Generate a Kubernetes secret for the ServiceNow username and password.
The documentation for each microservice provides the expected secret names, keys, and a1k commands.
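For illustration, a minimal sketch of creating a username-and-password secret with a1k. The secret name, key names, and namespace shown here are assumptions, not documented values; use the exact names from the documentation for your microservice.

```shell
# Illustrative sketch only: the secret name, key names, and namespace are
# assumptions, not the documented values for any specific microservice.
NAMESPACE=a1-zone1-pri
SECRET_CMD="a1k create secret generic servicenow-adapter-secret \
  --from-literal=username=<servicenow_user> \
  --from-literal=password=<servicenow_password> \
  -n $NAMESPACE"
# Print the command; run it as the assure1 user on the cluster.
echo "$SECRET_CMD"
```

Because a1k is an alias for kubectl, a1k create secret generic follows standard kubectl syntax; file-based secrets (as for the ActiveMQ Bridge or Netapp Poller) use --from-file instead of --from-literal.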
Deploying a Microservice
You deploy microservices onto the Kubernetes cluster by installing Helm charts. You can do this by using the command line or by using the UI. Both options let you change configuration settings from their default values during deployment.
Common Configuration Options
The following configuration options apply to many or all microservices:
- Log levels: You can set the log level for each microservice in its configuration parameters. By default, all microservices have the log level set to INFO. You can change the log level when deploying the microservice from the command line or the UI.
- Autoscaling: You can use Kubernetes Event-driven Autoscaler (KEDA) to automatically scale microservices. This is enabled by default for some microservices. See Configuring Autoscaling for more information, including the list of microservices that support autoscaling.
- Redundancy: You can enable redundancy to create redundant pairs of a microservice in case one fails. See Configuring Microservice Redundancy for more information, including the list of microservices that support microservice-level redundancy.
Configuring Autoscaling
Autoscaling with KEDA lets you automatically increase or decrease allocated computational resources (the number of running pods) as the microservice workload changes. If a microservice has a high workload, Kubernetes assigns enough work units (pods) to handle the increased workload. When the workload decreases, Kubernetes scales down the pods to keep overall resource use optimal and efficient.
KEDA is deployed automatically as a microservice with multiple pods in the a1-monitoring namespace when you create the microservice cluster.
Autoscaling is enabled by default for the following microservices:
The following microservices also support autoscaling, but it is disabled by default:
- Kafka Bridge: Autoscaling is only relevant when you are using an internal Pulsar topic as the input for the microservice.
- SNMP Poller: Because of how autoscaling is dynamically adjusted at runtime for this microservice, you must configure the maximum replica count according to the number of devices and worker concurrency in your environment. Autoscaling is disabled by default so that you can make this calculation before deploying the microservice.
When you deploy a microservice, you can optionally disable or enable autoscaling or change other configuration values. The commands and UI procedure in Deploying a Microservice by Using the Command Line and Deploying a Microservice by Using the UI provide examples of setting parameters.
The following default configuration parameters are set in the Helm charts for all microservices that support autoscaling. The microservices also have additional unique autoscaling parameters, which are described in the documentation for each microservice.
Name | Value | Possible Values | Notes |
---|---|---|---|
enabled | true | true, false | Whether autoscaling is enabled (true) or not (false). Enabled by default. |
pollingInterval | 5 | Integer | The interval in seconds at which each metric value is checked against the threshold. If any metrics surpass the threshold, replicas are scaled. |
cooldownPeriod | 300 | Integer | The period in seconds to wait before scaling the resources back to the minimum number of replicas. |
minReplicaCount | 1 | Integer | The minimum number of replicas when the resources are scaled down. |
maxReplicaCount | 20 | Integer | The maximum number of replicas when the resources are scaled up. |
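The common parameters in the table map to --set options under the autoscaling key at deployment time. The following sketch composes such a command; the namespace and FQDN values are illustrative, and the placeholders are as described in Deploying a Microservice by Using the Command Line.

```shell
# Sketch: override common autoscaling parameters at deploy time.
# The namespace and FQDN values are illustrative assumptions.
NAMESPACE=a1-zone1-pri
WEBFQDN=presentation.example.com
DEPLOY_CMD="a1helm install <microservice-release-name> assure1/<microservice-name> \
  -n $NAMESPACE \
  --set global.imageRegistry=$WEBFQDN \
  --set autoscaling.minReplicaCount=2 \
  --set autoscaling.maxReplicaCount=10"
# Print the command; run it as the assure1 user.
echo "$DEPLOY_CMD"
```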
Configuring Microservice Redundancy
To support redundancy, you can deploy a redundant pair of microservices on separate clusters. You enable redundancy when deploying the microservice, and you can optionally change the default configurations. You deploy each microservice separately, enabling redundancy on both; deploying one with redundancy enabled does not automatically deploy its redundant pair.
You can configure redundancy for the following microservices:
The following table describes the default redundancy configurations:
Name | Value | Possible Values | Notes |
---|---|---|---|
REDUNDANCY_INIT_DELAY | 20s | Integer + Text (ns, us (or µs), ms, s, m, h) | Used only for the SNMP Poller microservice. At startup, the amount of time to wait for the primary microservice to come up before initiating redundancy. |
REDUNDANCY_POLL_PERIOD | 5 | Integer + Text (ns, us (or µs), ms, s, m, h) | The amount of time between status checks from the secondary microservice to the primary microservice. For most microservices, this is time in seconds. For the SNMP Poller microservice, this value must include a unit similar to REDUNDANCY_INIT_DELAY. The default is 5s. |
REDUNDANCY_FAILOVER_THRESHOLD | 4 | Integer greater than 0 | The number of times the primary microservice must fail checks before the secondary microservice becomes active. |
REDUNDANCY_FALLBACK_THRESHOLD | 1 | Integer greater than 0 | The number of times the primary microservice must succeed checks before the secondary microservice becomes inactive. |
Note:
Microservice redundancy is not the same as server, cluster, or database redundancy. Microservice redundancy refers specifically to redundant pairs of deployed microservices in an active-passive configuration. For microservices that do not support this configuration, you can deploy separate active instances of the same microservice to redundant clusters.
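As a sketch, a redundant pair is created by running one deployment against each cluster with redundancy enabled in both. The namespaces and FQDN below are illustrative, and the configData prefix for the threshold parameter is an assumption; check the documentation for your microservice for the exact parameter path.

```shell
# Sketch: deploy the same microservice to the primary and secondary zoned
# namespaces with redundancy enabled on both. Deploying one does NOT
# automatically deploy its redundant pair.
# The configData.REDUNDANCY_FAILOVER_THRESHOLD path is an assumption.
WEBFQDN=presentation.example.com
PRIMARY_CMD="a1helm install <microservice-name> assure1/<microservice-name> \
  -n a1-zone1-pri --set global.imageRegistry=$WEBFQDN \
  --set redundancy.enabled=true \
  --set configData.REDUNDANCY_FAILOVER_THRESHOLD=4"
SECONDARY_CMD="a1helm install <microservice-name> assure1/<microservice-name> \
  -n a1-zone1-sec --set global.imageRegistry=$WEBFQDN \
  --set redundancy.enabled=true \
  --set configData.REDUNDANCY_FAILOVER_THRESHOLD=4"
# Print the commands; run each as the assure1 user on its cluster.
echo "$PRIMARY_CMD"
echo "$SECONDARY_CMD"
```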
Naming Conventions for Microservice Releases, Helmcharts, and Namespaces
When you deploy the microservice, you specify several names:
- A release name, which identifies the deployment instance.
- The name of the Helm chart, which matches the microservice name.
- The namespace to deploy to.
About Microservice Release Names
Oracle recommends keeping the release name and Helm chart name the same. However, there are scenarios where you might need to deploy multiple instances of the same microservice in the same cluster. In these cases, each instance of the microservice would require a unique release name.
For example:
- Some collector microservices, such as Trap Collector, must be deployed to the specific node where your traps are being sent. If you have traps coming to multiple nodes in the same cluster, you would deploy multiple uniquely named instances of the microservice, each pinned to a different node.
- For poller microservices, such as Netapp Poller or Ciena MCP Poller, you might poll multiple instances of the external system from the same cluster, requiring multiple uniquely named instances of the microservice.
About Microservice Helm Chart Names
You cannot change the Helm chart name and path. This is always assure1/<microservice-name>, where <microservice-name> is the lowercase, hyphenated name of the microservice. For example, assure1/trap-collector.
The documentation for each microservice includes deployment commands with the correct Helm chart name. You can also see the Helm chart name in the Helmcharts UI, at the top of the card for each microservice.
About Microservice Namespaces
The default Unified Assurance namespaces are created automatically when you create a cluster. See Namespaces in Unified Assurance Concepts for details.
You deploy most microservices to the primary zoned namespace for the cluster. In environments with redundant clusters, you can deploy microservices to the zoned namespace on the secondary cluster.
The naming convention for zoned namespaces is a1-zone<N>-<type>, where:
- <N> is the ID of the device zone for the cluster.
- <type> is pri for a primary cluster or sec for a secondary redundant cluster.

For example, the primary zoned namespace for device zone 1 is a1-zone1-pri.
Some microservices, such as Prometheus Metrics Processor, Pulsar, and Vision, are deployed to other namespaces, such as a1-monitoring, a1-messaging, and a1-vision. The documentation for each microservice specifies which type of namespace to use.
Deploying a Microservice by Using the Command Line
When you deploy a microservice by using the command line, you can use the default configuration settings or change them by specifying command options.
Note:
This topic provides the general deployment commands and some examples of changing configuration settings. Although some configuration settings, like log level, appear for every microservice, most settings are unique to each microservice. Some microservices also require additional command options. Review the documentation for each microservice for the relevant commands and settings.
Generally, to set appropriate environment variables and deploy a microservice, you run the following commands as the assure1 user:
export NAMESPACE=<namespace>
export WEBFQDN=<presentation_server_FQDN>
a1helm install <microservice-release-name> assure1/<microservice-name> -n $NAMESPACE --set global.imageRegistry=$WEBFQDN
In the command:
- <namespace> is the namespace you are deploying the microservice in.
- <presentation_server_FQDN> is the fully-qualified domain name (FQDN) of the presentation server to deploy the microservice on.
- <microservice-name> is the name of the microservice you are deploying. This is also the name of the Helm chart for the microservice.
- <microservice-release-name> is the release name for the microservice deployment. In most cases, use the same name as the microservice, but you can use a different value if you need to deploy multiple instances of a microservice in the same cluster.
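Putting it together, the following sketch uses concrete but illustrative values, deploying the Ciena MCP Poller to the zone 1 primary namespace with the release name matching the chart name:

```shell
# Illustrative values only: substitute your own namespace and FQDN.
# The chart name follows the assure1/<lowercase-hyphenated-name> rule.
NAMESPACE=a1-zone1-pri
WEBFQDN=presentation.example.com
INSTALL_CMD="a1helm install ciena-mcp-poller assure1/ciena-mcp-poller \
  -n $NAMESPACE --set global.imageRegistry=$WEBFQDN"
# Print the command; run it as the assure1 user.
echo "$INSTALL_CMD"
```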
In multi-server environments, Kubernetes dynamically handles the network routing to nodes for most microservices. However, the following microservices require you to constrain their pod to a single node:
- CORBA Collector
- Flow Collector
- Syslog Collector
- Trap Collector
- Trap Forwarder
For these, you also set the NODEFQDN environment variable to the FQDN of the target node and add --set nodeSelector."kubernetes\.io/hostname"=$NODEFQDN to the command. The dot in kubernetes.io must be escaped with a backslash so that Helm does not treat it as a key separator.
export NAMESPACE=<namespace>
export WEBFQDN=<presentation_server_FQDN>
export NODEFQDN=<target_node_FQDN>
a1helm install <microservice-name> assure1/<microservice-name> -n $NAMESPACE --set global.imageRegistry=$WEBFQDN --set nodeSelector."kubernetes\.io/hostname"=$NODEFQDN
You can optionally change default configurations when deploying a microservice by adding --set <configuration_parameter>=<parameter_value> to the command. For example:
- To change the log level to WARN, run the following command as the assure1 user:
a1helm install <microservice-name> assure1/<microservice-name> -n $NAMESPACE --set global.imageRegistry=$WEBFQDN --set configData.LOG_LEVEL=WARN
- To enable redundancy, run the following command as the assure1 user:
a1helm install <microservice-name> assure1/<microservice-name> -n $NAMESPACE --set global.imageRegistry=$WEBFQDN --set redundancy.enabled=true
- To disable autoscaling, run the following command as the assure1 user:
a1helm install <microservice-name> assure1/<microservice-name> -n $NAMESPACE --set global.imageRegistry=$WEBFQDN --set autoscaling.enabled=false
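You can also combine several --set options in a single command. For example, the following sketch enables redundancy and raises the log level in one deployment; the placeholders are as above, and the namespace and FQDN values are illustrative.

```shell
# Sketch: combine multiple --set overrides in one a1helm install command.
# The namespace and FQDN values are illustrative assumptions.
NAMESPACE=a1-zone1-pri
WEBFQDN=presentation.example.com
COMBINED_CMD="a1helm install <microservice-name> assure1/<microservice-name> \
  -n $NAMESPACE --set global.imageRegistry=$WEBFQDN \
  --set configData.LOG_LEVEL=WARN \
  --set redundancy.enabled=true"
# Print the command; run it as the assure1 user.
echo "$COMBINED_CMD"
```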
Deploying a Microservice by Using the UI
When you deploy microservices by using the UI, you can use the default configuration settings, or you can change them by editing the configuration file in the UI.
To deploy a microservice by using the UI:
1. Check that the prerequisites for the microservice have been met.
   The documentation for each microservice lists the prerequisites, but at a minimum, a microservice cluster must be set up, and for most, the Apache Pulsar microservice must be deployed. Some poller microservices also require additional authentication configurations, which must be set up using the command line.
2. In a browser, log in to the Unified Assurance UI. Your user group's role must have all microservice package permissions.
3. From the Configuration menu, select Microservices, then select Helmcharts.
4. Click the card for the microservice you want to deploy.
5. Click Deploy.
   The Deploy Settings window appears.
6. From the Cluster and Namespace menus, select a cluster and namespace to deploy the microservice on.
7. In the Values area, set the following parameters:
   - Under global, set imageRegistry to the fully-qualified domain name (FQDN) of the presentation server to deploy the microservice on.
   - For microservices that require the pod to be constrained to a single node, in the nodeSelector brackets, add "kubernetes.io/hostname"=<target_node_FQDN>.
   - Set or change other parameters as needed for your deployment. For example:
     - To enable redundancy, under redundancy, set enabled to true.
     - To change logging levels, under configData, change the value of LOG_LEVEL.
     - To disable autoscaling, under autoscaling, set enabled to false.
8. Click the Changes tab and review your changes.
9. Optionally, update the value in Deploy Release Name. The default is the same name as the Helm chart.
10. Click Start.
    The microservice is deployed and a confirmation dialog appears.
11. Click OK to dismiss the confirmation.
Updating a Microservice
You cannot make configuration updates to deployed microservices. You must undeploy the microservice and redeploy it using new values. You can do this by using the command line or the UI. See Undeploying a Microservice for more information.
Note:
The helm upgrade and a1helm upgrade commands are not supported for making configuration updates to deployed microservices.
Restarting a Microservice Pod
Occasionally, you may need to restart the pod for a microservice. For example, if you make updates to associated rules files, you must restart the microservice for the new rules to take effect.
To restart a pod, run the following command as the assure1 user:
a1k delete pod <pod_name> -n <namespace>
When you delete a pod, Kubernetes restarts it automatically.
If you do not know the pod name or namespace, run the following command as the assure1 user:
a1k get pods --all-namespaces
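For example, the following sketch shows the full restart workflow for a hypothetical Trap Collector pod. The pod name suffix is generated by Kubernetes, so yours will differ, and the namespace is illustrative.

```shell
# Sketch: list pods to find the name, then delete the pod so Kubernetes
# recreates it. The pod name and namespace are illustrative assumptions.
NAMESPACE=a1-zone1-pri
LIST_CMD="a1k get pods -n $NAMESPACE"
RESTART_CMD="a1k delete pod trap-collector-6d9f7c9b8-x2x4q -n $NAMESPACE"
# Print the commands; run them as the assure1 user.
echo "$LIST_CMD"
echo "$RESTART_CMD"
```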
Undeploying a Microservice
You can use the command line or the UI to undeploy a microservice.
Undeploying a Microservice by Using the Command Line
Run the following commands as the assure1 user:
export NAMESPACE=<namespace>
a1helm uninstall <microservice-release-name> -n $NAMESPACE
In the commands, <namespace> is the namespace the microservice is deployed in, and <microservice-release-name> is the release name of the microservice you are undeploying. The release name is generally the same as the microservice name, unless you specified a different release name when deploying the microservice.
Undeploying a Microservice by Using the UI
1. In a browser, log in to the Unified Assurance UI. Your user group's role must have all microservice package permissions.
2. From the Configuration menu, select Microservices, then select Installed.
3. On the card for the microservice you want to undeploy, click Delete.
4. In the Delete Helmchart confirmation dialog, click Yes.
   The microservice is undeployed and a confirmation dialog appears.
5. Click OK to dismiss the confirmation.
Monitoring Microservices
You can monitor microservices in the UI under Workloads. From the Configuration menu, select Microservices, then Workloads. The information is shown using standard Kubernetes workload view types. See Workloads in Unified Assurance User's Guide for more information. Also see the Kubernetes Workloads documentation: https://kubernetes.io/docs/concepts/workloads.
In addition to the workloads views, you can monitor health and performance metrics scraped by Prometheus for all microservices. The Prometheus stack is automatically deployed as a microservice in the a1-monitoring namespace when you create the microservice cluster. The Prometheus microservice scrapes the metrics, optionally sends them to the Prometheus Metrics Processor microservice for filtering, and adds them to the Metrics database. You can then use Unified Assurance's metrics monitoring capabilities and built-in Grafana dashboards to monitor the Prometheus metrics for your microservices.
The following microservices expose additional metrics to Prometheus:
See the documentation for each microservice for information about the metrics they expose to Prometheus.