Ciena MCP Poller

The Ciena MCP Poller microservice polls events, topology, and metrics data from Ciena MCP devices by API and WebSocket, using a Bearer token for authentication. It collects data at regular intervals, normalizes it, and writes it to the topics to which the Graph Sink, Metric Sink, and Event Sink microservices are subscribed.

Topology is polled every 24 hours, metrics are polled every 15 minutes, and events are polled every five minutes.

This microservice is part of the Event, Topology, and Metric microservice pipelines. See Understanding Microservice Pipelines in Unified Assurance Concepts for conceptual information about microservice pipelines.

You can enable redundancy for this microservice when you deploy it. See Configuring Microservice Redundancy for general information.

This microservice provides additional Prometheus monitoring metrics. See Ciena MCP Poller Self-Monitoring Metrics.

Ciena MCP Poller Prerequisites

Before deploying the microservice, confirm that the following prerequisites are met:

  1. A microservice cluster is set up. See Microservice Cluster Setup.

  2. The following microservices are deployed:

  3. You know the Ciena MCP API URLs and WebSocket URL for your system.

  4. You have created a Kubernetes secret containing the Ciena MCP password by running the following command as the assure1 user:

    a1k create secret generic ciena-mcp-credentials --from-literal=password=<base64EncodedPassword> -n <namespace>
    

    You can optionally use a different secret name and file. If you do so, set them in the configData.SECRET_FILE_OVERRIDE and configData.SECRET_NAME_OVERRIDE configuration parameters when you deploy the microservice.
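    The secret expects the password to already be base64-encoded, as indicated by the <base64EncodedPassword> placeholder. As a sketch, assuming a Linux shell and an example password value, you could generate the encoded value before running the a1k command:

    ```shell
    # Base64-encode the Ciena MCP password (example value; replace with your own).
    # The -n flag prevents a trailing newline from being included in the encoding.
    echo -n 'MyPassword' | base64
    ```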

Deploying Ciena MCP Poller

To deploy the microservice, run the following commands:

su - assure1
export NAMESPACE=<namespace>
export WEBFQDN=<WebFQDN>
a1helm install <microservice-release-name> assure1/ciena-mcp-poller -n $NAMESPACE --set global.imageRegistry=$WEBFQDN

In the commands:

You can also use the Unified Assurance UI to deploy microservices. See Deploying a Microservice by Using the UI for more information.

Changing Ciena MCP Poller Configuration Parameters

When running the install command, you can optionally change default configuration parameter values by including them in the command with additional --set arguments. You can add as many additional --set arguments as you need.

For example:
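The following sketch overrides the logging level, a parameter described in the configuration table that follows. The release name and the flags match the deployment commands shown earlier:

```shell
a1helm install ciena-mcp-poller assure1/ciena-mcp-poller -n $NAMESPACE --set global.imageRegistry=$WEBFQDN --set configData.LOG_LEVEL=DEBUG
```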

Default Ciena MCP Poller Configuration

The following table describes the default configuration parameters found in the Helm chart under configData for the microservice.

Name Default Value Possible Values Notes
LOG_LEVEL INFO FATAL, ERROR, WARN, INFO, DEBUG Logging level used by application.
PULSAR_STREAM pulsar+ssl://pulsar-broker.a1-messaging.svc.cluster.local Text, 255 characters Apache Pulsar topic path. Topic at end of path may be any text value.
PULSAR_PORT "6651" Integer The Pulsar port.
PULSAR_CERT_TRUST /certs/a1/BundleCA.crt String The SSL CA Bundle for Pulsar.
PULSAR_CERT_PATH /certs/a1/User-assure1.crt String The assure1 user's SSL certificate for Pulsar.
PULSAR_CERT_KEY /certs/a1/User-assure1.key String The assure1 user's SSL certificate key for Pulsar.
TLS_CA /certs/a1/BundleCA.crt String The TLS CA Bundle.
TLS_CERT /certs/a1/User-assure1.crt String The assure1 user's TLS certificate.
TLS_CERT_KEY /certs/a1/User-assure1.key String The assure1 user's TLS certificate key.
REDUNDANCY_POLL_PERIOD 5 Integer The number of seconds between status checks from the secondary microservice to the primary microservice.
REDUNDANCY_FAILOVER_THRESHOLD 4 Integer The number of times the primary microservice must fail checks before the secondary microservice becomes active.
REDUNDANCY_FALLBACK_THRESHOLD 1 Integer The number of times the primary microservice must succeed checks before the secondary microservice becomes inactive.
STREAM_OUTPUT_METRIC persistent://assure1/metric/sink Text, 255 characters Metric sink topic path.
STREAM_OUTPUT_GRAPH persistent://assure1/graph/sink Text, 255 characters Graph sink topic path.
STREAM_OUTPUT_EVENT persistent://assure1/event/sink Text, 255 characters Event sink topic path.
METRIC_INTERVAL "15" Integer Time in minutes between polls of the metrics data.
METRIC_DELAY_SECONDS "180" Integer The number of seconds to delay metric polling after the scheduled poll time, to allow historical metric data to become available in the API server.
TOPOLOGY_TIMER "15:00" Text in Hours:Minutes format The time of day, in hours and minutes, at which topology data is polled.
METRIC_TYPES "OCH-SPANLOSS,OCH-SPANLOSSMAX,OCH-SPANLOSSMIN" List of metric types or * The list of metric types to be polled. Setting * collects all metric types.
TOPOLOGY_WEBSOCKET_STREAM "true" "true" or "false" Whether live topology transactions are collected by WebSocket streaming (true) or by API polling (false).
STREAM_INPUT "https://username@0.0.0.0:443,https://username@1.0.0.0:443" Comma-separated list of URLs inside quotes The comma-separated URLs of the Ciena MCP servers, each including the username.
STREAM_RETRY_LIMIT 3 Integer Number of times to retry connecting to the stream.
SECRET_NAME_OVERRIDE "" Text, 255 characters Optional - Custom secret name
SECRET_FILE_OVERRIDE "" Text, 255 characters Optional - Custom secret filename

Topology Polling

Ciena MCP provides topology data as a historical transaction log of topology entries. On initial startup, the poller reads the historical transaction log and rebuilds the topology in Unified Assurance. It then switches to live topology streaming, so all Ciena topology changes are reflected in Unified Assurance as they occur.

When WebSocket streaming is set to false, API polling occurs every 24 hours and the changes from the last day appear in Unified Assurance.

Note:

As described in the Ciena documentation, a Ciena MCP system persists authorization tokens for a week by default. If the Ciena system will be down for longer than a week, you must manually delete the previously polled Ciena topology data from Unified Assurance by using the _source Vertex/Edge property in the Graph database, and then restart the Ciena MCP Poller microservice.

Because the Ciena system loses all transaction index tracking in this case, the microservice must rebuild the topology from the transaction log. It does this automatically when it detects the expired token.

Metric Polling

Metrics are collected by API calls at a configurable interval. Because there is some delay before historical metric data becomes available in the API server, polling starts, by default, three minutes after the scheduled polling time. The collected metrics are sent to the Pulsar topic for the Metric Sink.
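The interval, delay, and collected metric types correspond to the METRIC_INTERVAL, METRIC_DELAY_SECONDS, and METRIC_TYPES parameters in the configuration table. As a sketch with hypothetical values, the following command polls every 30 minutes, delays polling by five minutes, and collects all metric types:

```shell
a1helm install ciena-mcp-poller assure1/ciena-mcp-poller -n $NAMESPACE --set global.imageRegistry=$WEBFQDN --set configData.METRIC_INTERVAL="30" --set configData.METRIC_DELAY_SECONDS="300" --set configData.METRIC_TYPES="*"
```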

Event Polling

Events are collected from a WebSocket stream. The collected events are sent to the Pulsar topic for the Event Sink.

Supporting Ciena MCP Server Redundancy

You can configure multiple redundant Ciena MCP servers so that the Ciena MCP Poller microservice can establish a connection with a redundant Ciena MCP server when the primary Ciena MCP server is down.

Note:

This is different from microservice redundancy, where a redundant pair of microservices is deployed.

To support Ciena MCP server redundancy in the microservice, configure the servers as comma-separated URIs in the STREAM_INPUT configuration parameter for the Ciena MCP Poller microservice.

For example, when installing the microservice, you run the following command:

a1helm install ciena-mcp-poller assure1/ciena-mcp-poller -n $NAMESPACE --set global.imageRegistry=$WEBFQDN --set configData.STREAM_INPUT="<primary_uri>,<secondary_uri>"

Ciena MCP Poller Self-Monitoring Metrics

The Ciena MCP Poller microservice exposes the self-monitoring metrics described in the following table to Prometheus.

Metric Name Type Description
processing_time_of_all_metrics Gauge Time taken to poll and process metrics data per cycle, in minutes
number_of_metrics_added_per_cycle Gauge Number of metrics added per polling cycle
processing_time_of_events_in_seconds Gauge Time taken to process events, in seconds
polling_time_of_topology_in_minutes Gauge Time taken to poll, process, and send all topology data, in minutes
topology_total_api_calls Gauge Number of API calls made to collect topology
number_of_devices_processed_per_polling Gauge Number of devices added per topology polling cycle

Note:

Metric names in the database include a prefix that indicates the service that inserted them. The prefix is prom_ for metrics inserted by Prometheus. For example, processing_time_of_all_metrics is stored as prom_processing_time_of_all_metrics in the database.
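If you want to inspect these metrics directly in Prometheus, standard PromQL queries work against the names in the table (the prom_ prefix applies only to database storage). For example, a sketch that averages the metric-processing time over the last hour, assuming default scraping of the microservice:

```
avg_over_time(processing_time_of_all_metrics[1h])
```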