Ciena MCP Poller Microservice¶
Overview¶
The Ciena MCP Poller microservice polls topology, metric, and event data from Ciena MCP via its API and websocket, using a Bearer token for authentication. It collects data at regular intervals, normalizes it, and writes it to topics from which the graph-sink, metric-sink, and event-sink microservices ingest the data. Topology is polled every 24 hours, metrics every 15 minutes, and events every 5 minutes.
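The Bearer-token authentication described above can be sketched as follows. This is a minimal illustration using the Python standard library; the host, token value, and API path are placeholders invented for this example, not the actual Ciena MCP API.

```python
import urllib.request

# Hypothetical MCP host and bearer token -- placeholders only.
MCP_HOST = "https://0.0.0.0:443"
BEARER_TOKEN = "example-token"

def build_poll_request(path: str) -> urllib.request.Request:
    """Build an authenticated GET request for an MCP API path."""
    return urllib.request.Request(
        MCP_HOST + path,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    )

# The path "/topology" is illustrative, not a documented endpoint.
req = build_poll_request("/topology")
```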
Prerequisites¶
- A microservices cluster must be set up. Refer to Microservice Cluster Setup.
- Apache Pulsar must be installed. Refer to Apache Pulsar microservice.
- The following core microservices must be installed:
- Ciena MCP API URLs and Websocket URL.
- A Kubernetes secret must be created to pass the username and password. Refer to the Secret creation section.
Setup¶
su - assure1
export NAMESPACE=a1-zone1-pri
export WEBFQDN=<Primary Presentation Web FQDN>
a1helm install ciena-mcp-poller assure1/ciena-mcp-poller -n $NAMESPACE --set global.imageRegistry=$WEBFQDN
Default Configuration¶
Name | Value | Possible Values | Notes |
---|---|---|---|
LOG_LEVEL | DEBUG | FATAL, ERROR, WARN, INFO, DEBUG | Logging level used by application. |
PULSAR_STREAM | pulsar+ssl://pulsar-broker.a1-messaging.svc.cluster.local | Text, 255 characters | Apache Pulsar topic path. Topic at end of path may be any text value. |
STREAM_OUTPUT_METRIC | persistent://assure1/metric/sink | Text, 255 characters | Metric sink topic path. |
STREAM_OUTPUT_GRAPH | persistent://assure1/graph/sink | Text, 255 characters | Graph sink topic path. |
STREAM_OUTPUT_EVENT | persistent://assure1/event/sink | Text, 255 characters | Event sink topic path. |
METRIC_POLLING_INTERVAL | "15" | Integer | Time in minutes between polls of the metrics data. |
TOPOLOGY_TIMER | "00:00" | Text in Hours:Minutes format | Time in Hours : Minutes to poll topology data. |
METRIC_TYPES | "OCH-SPANLOSS,OCH-SPANLOSSMAX,OCH-SPANLOSSMIN" | List of metrics types or * | List of metrics types to be polled. Setting * collects all metrics. |
TOPOLOGY_WEBSOCKET_STREAM | "true" | "true" or "false" | Whether live topology changes are collected via websocket ("true") or via 24-hour API polling ("false"). |
SECRET_NAME_OVERRIDE | "" | Text, 255 characters | Optional - Custom secret name |
STREAM_INPUT | "https://username@0.0.0.0:443,https://username@1.0.0.0:443" | Comma-separated list of URLs inside quotes | Comma-separated URLs of the Ciena MCP servers, each including the username. |
SECRET_FILE_OVERRIDE | "" | Text, 255 characters | Optional - Custom secret filename |
Configurations can be changed by passing the values to the a1helm install command, prefixed with the configData parent key.
Example of setting the log level to INFO¶
a1helm install ... --set configData.LOG_LEVEL=INFO
Secret creation¶
The password should be a base64 encoded string. Both the secret name and secret file name can be overridden using the config options.
Example of creating a secret¶
a1k create secret generic ciena-mcp-credentials --from-literal=password=<base64EncodedPassword> -n <namespace>
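The `<base64EncodedPassword>` value can be produced in any language; one way, shown here in Python, is below. "MyPassword" is a placeholder for the real MCP password.

```python
import base64

# Encode the plaintext password for use as <base64EncodedPassword>.
# "MyPassword" is a placeholder -- substitute the real password.
encoded = base64.b64encode(b"MyPassword").decode("ascii")
print(encoded)  # TXlQYXNzd29yZA==
```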
Topology Polling¶
Ciena MCP provides topology data as a historical transaction log of topology entries. Upon initial startup, the poller reads the historical transaction log and rebuilds the topology in Unified Assurance, after which it switches to live topology streaming (all Ciena topology changes are reflected live in Unified Assurance).
If websocket streaming is set to false, API polling occurs every 24 hours instead, and changes from the last day are reflected in Unified Assurance.
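The rebuild-from-log step can be pictured with a small sketch. The transaction format below (index, operation, vertex id) is a simplification invented for this example, not the actual Ciena MCP transaction schema.

```python
def rebuild_topology(transactions):
    """Replay a historical transaction log in index order to
    reconstruct the current set of topology vertices."""
    vertices = set()
    for txn in sorted(transactions, key=lambda t: t["index"]):
        if txn["op"] == "add":
            vertices.add(txn["id"])
        elif txn["op"] == "delete":
            vertices.discard(txn["id"])
    return vertices

log = [
    {"index": 1, "op": "add", "id": "node-a"},
    {"index": 2, "op": "add", "id": "node-b"},
    {"index": 3, "op": "delete", "id": "node-a"},
]
print(rebuild_topology(log))  # {'node-b'}
```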
Note
Per the Ciena documentation, a Ciena MCP system persists the authorization token for a week by default.
If the Ciena system is down longer than this period, the previously polled Ciena topology data in Unified Assurance must be deleted manually, using the _source vertex/edge property in the graph database, and the poller microservice must be restarted.
Because the Ciena system loses its transaction index tracking in this case, the topology must be rebuilt from the transaction log; the poller does this automatically upon recognizing the invalidated token.
Metric Polling¶
Metrics are collected via API calls at a configurable interval. Historical metric data becomes available in the API server with some delay, so polling currently starts 3 minutes after the scheduled polling time. The collected metrics are sent to the metric-sink topic via Pulsar.
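The 3-minute offset can be pictured with a small scheduling sketch; the helper below is illustrative, not part of the microservice.

```python
from datetime import datetime, timedelta

def next_poll_time(now: datetime, interval_minutes: int = 15,
                   delay_minutes: int = 3) -> datetime:
    """Return the next poll time: the upcoming interval boundary
    plus a fixed delay to let historical metrics become available."""
    minutes_into_interval = now.minute % interval_minutes
    boundary = now.replace(second=0, microsecond=0) + timedelta(
        minutes=interval_minutes - minutes_into_interval)
    return boundary + timedelta(minutes=delay_minutes)

# At 10:07 the next 15-minute boundary is 10:15, so polling
# starts at 10:18.
print(next_poll_time(datetime(2024, 1, 1, 10, 7)))  # 2024-01-01 10:18:00
```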
Event Polling¶
Events are collected via the websocket stream. The collected events are sent to the event-sink topic via Pulsar.
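A sketch of the normalize-and-forward step described above; the field names and message shape are assumptions for illustration, not the actual event-sink schema.

```python
import json

def normalize_event(raw: dict) -> str:
    """Map a raw websocket event to a flat JSON message for the
    event-sink topic (illustrative field names only)."""
    return json.dumps({
        "device": raw.get("nodeName", "unknown"),
        "severity": raw.get("severity", "INFO"),
        "summary": raw.get("description", ""),
    })

msg = normalize_event({"nodeName": "olt-1", "severity": "MAJOR",
                       "description": "Loss of signal"})
```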
Ciena MCP Redundancy¶
Multiple Ciena MCP servers can be configured to achieve redundancy (not to be confused with microservice redundancy). The microservice can be configured to establish a connection with the redundant Ciena MCP server when the primary one is down. To enable Ciena MCP redundancy, specify two comma-separated URIs in the STREAM_INPUT configuration option.
Example of configuring Ciena MCP redundancy¶
a1helm install ... --set configData.STREAM_INPUT="https://username@0.0.0.0:443,https://username@1.0.0.0:443"
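The comma-separated STREAM_INPUT value can be parsed into an ordered failover list, as sketched below; the parsing shown is illustrative, not the microservice's actual implementation.

```python
from urllib.parse import urlsplit

def parse_stream_input(value: str):
    """Split STREAM_INPUT into (username, base_url) pairs,
    in priority order: primary server first, then the redundant one."""
    servers = []
    for uri in value.split(","):
        parts = urlsplit(uri.strip())
        servers.append((parts.username,
                        f"{parts.scheme}://{parts.hostname}:{parts.port}"))
    return servers

print(parse_stream_input(
    "https://username@0.0.0.0:443,https://username@1.0.0.0:443"))
# [('username', 'https://0.0.0.0:443'), ('username', 'https://1.0.0.0:443')]
```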
Microservice self-metrics¶
The Ciena MCP Poller microservice exposes the following self-metrics to Prometheus.
Metric Name | Type | Description |
---|---|---|
processing_time_of_all_metrics | Gauge | Time taken to poll and process metrics data per cycle in minutes |
number_of_metrics_added_per_cycle | Gauge | Number of metrics added per polling cycle |
processing_time_of_events_in_seconds | Gauge | Time taken to process an event in seconds |
polling_time_of_topology_in_minutes | Gauge | Time taken to poll, process and send all topology data in minutes |
topology_total_api_calls | Gauge | Number of API calls made to collect topology |
number_of_devices_processed_per_polling | Gauge | Number of devices added per topology collection cycle |
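These gauges follow the usual pattern of timing a cycle and setting the latest value. A stdlib-only sketch of that pattern is below; a real exporter would use a Prometheus client library, and the stored values here are placeholders.

```python
import time

# Illustrative stand-in for a Prometheus gauge registry:
# metric name -> latest observed value.
GAUGES = {}

def set_gauge(name: str, value: float) -> None:
    GAUGES[name] = value

start = time.monotonic()
# ... poll and process one metrics cycle here ...
elapsed_minutes = (time.monotonic() - start) / 60
set_gauge("processing_time_of_all_metrics", elapsed_minutes)
set_gauge("number_of_metrics_added_per_cycle", 42)  # placeholder count
```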
Microservice redundancy¶
Redundancy in the Ciena MCP Poller microservice controls which of the two microservices in a redundant pair is considered active to run topology, event and metric polling.
Info
Redundancy is disabled by default.
Example of enabling redundancy¶
a1helm install ... --set redundancy.enabled=true