Migrating the Historical Database

Migrating your Oracle Communications Unified Assurance Historical database servers from Elasticsearch to OpenSearch involves performing the following tasks:

  1. Upgrading and Updating

  2. Migrating Indexes

  3. Manual Migration Tasks

  4. Finalizing the Migration

In a redundant environment, perform all of the tasks for the primary server, and then repeat them for the secondary server.

Upgrading and Updating

The upgrade or update tasks required depend on your starting version. The following table lists the supported migration path for each starting version, including interim steps. Because downtime is required for each major version upgrade and operating system upgrade, schedule a maintenance window.

Current Version | Version 5 Step | Operating System Upgrade | Version 6 Step | Documentation
4.87 (on Linux 6 or 7) | Upgrade to 5.5.24 | Upgrade to Linux 8 | Upgrade to 6.1 | Upgrading from Assure1 to Unified Assurance
5.x (Linux 7) | Update to 5.5.24 | Upgrade to Linux 8 | Upgrade to 6.1 | Upgrading from Assure1 to Unified Assurance
6.0.x (Linux 7) | N/A | Upgrade to Linux 8 | Update to 6.1 | Upgrading Unified Assurance to a New Linux System or Upgrading Unified Assurance on an Existing Linux System
6.0.4 or higher (Linux 8) | N/A | N/A | Update to 6.1 | Updating Unified Assurance

When you have performed all upgrades and updates and have Unified Assurance version 6.1 on your system, review the information in About Automatically Migrating Artifacts.

About Automatically Migrating Artifacts

When you run Package update at the end of updating your Historical database servers to the latest version of Unified Assurance, the command also runs AnalyticsWizard to automatically migrate artifacts from Elasticsearch to OpenSearch.

AnalyticsWizard migrates supported artifacts, such as custom dashboards, from Elasticsearch to their OpenSearch equivalents.

You can monitor or review the migration progress by tailing the following log file:

$A1BASEDIR/logs/MigrateHistoricalObjects.log

For any failures, you can find the Elasticsearch export and OpenSearch import files in the following directory:

$A1BASEDIR/tmp/migration/

Migrating Indexes

After the package update process that automatically migrates artifacts is complete, you can migrate your Historical database indexes.

  1. If you are migrating Flow Analytics indexes, confirm that you have deployed the Flow Collector microservice. You can check this in the UI by selecting the Configuration menu, then Microservices, and then Installed.

    If it is not deployed, deploy it now. See Deploying a Microservice by Using the UI and Flow Collector in Unified Assurance Implementation Guide for information.

  2. In the command line of the Historical database server, run MigrateHistoricalData to either migrate an entire index or migrate in time-based increments:

    • To migrate an entire index:

      $A1BASEDIR/bin/historical/MigrateHistoricalData --Index <index_pattern>
      

      where <index_pattern> is the index pattern to migrate. For Event Analytics, use eventanalytics*. For Flow Analytics, use network-*.

      Caution:

      If you do not specify an index pattern, the utility will iterate through all index patterns, including logs. This requires significant storage space and memory. Oracle does not recommend this.

    • To migrate a time-based increment, starting with the most recent increment and moving back in time:

      $A1BASEDIR/bin/historical/MigrateHistoricalData --Index <index_pattern> --From <from_date> --To <to_date>
      

      where:

      • <from_date> is the date of the oldest document in the time-based increment to migrate, in YYYY-MM-DD format.

      • <to_date> is the date of the newest document in the time-based increment to migrate, in YYYY-MM-DD format.

      If you only set <from_date>, then <to_date> defaults to the current date. If you only set <to_date>, then <from_date> defaults to 0, representing the oldest possible date.

      The time ranges are exclusive. This means that if you set the range to --From 2025-03-10 --To 2025-04-10, indexes from 00:00 on March 11th through 23:59 on April 9th are migrated.

  3. Monitor the migration progress by tailing the following log file:

    $A1BASEDIR/logs/MigrateHistoricalData.log
    
  4. Optionally, if you are migrating data incrementally and want to save disk space, you can pause between increments and delete the successfully migrated data from Elasticsearch.
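The incremental approach in step 2 can be sketched as a small shell loop. This is a hypothetical example, not part of the product tooling: the echo keeps it a dry run that only prints the commands, and the index pattern and number of increments are placeholders to adjust for your environment.

```shell
# Dry-run sketch: migrate eventanalytics-* in one-month increments, newest first.
# "echo" only prints each command; remove it to run the real utility.
MIGRATE="echo $A1BASEDIR/bin/historical/MigrateHistoricalData"
TO=$(date +%Y-%m-%d)                  # newest increment ends today
for i in 1 2 3; do                    # three one-month increments (placeholder)
  FROM=$(date -d "$TO -1 month" +%Y-%m-%d)
  $MIGRATE --Index "eventanalytics-*" --From "$FROM" --To "$TO"
  TO=$FROM                            # step back one month for the next pass
done
```

Running the increments newest-first makes the most recent data available in OpenSearch soonest, and pausing between loop iterations gives you the chance to delete migrated data from Elasticsearch as described in step 4.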

The index migration is complete. Proceed to Manual Migration Tasks.

Manual Migration Tasks

You must manually recreate any of the following Elasticsearch components that apply in your environment:

Recreating Lens or Canvas Dashboards

Unified Assurance included default Kibana dashboards for Elasticsearch. When you update to Unified Assurance 6.1, the same default dashboards are automatically included for OpenSearch.

In Kibana, you could also edit or create new dashboards, and optionally create visualizations using the Kibana Lens or Canvas features. AnalyticsWizard automatically migrates most of your custom dashboards, but Lens or Canvas visualizations are not directly compatible with OpenSearch Dashboards. You must manually recreate Lens or Canvas dashboards in OpenSearch.

OpenSearch Dashboards offers similar drag-and-drop visualization building functionality with VisBuilder. See Building data visualizations in the OpenSearch Dashboards documentation for more information.

Recreating Watches

In Elasticsearch, the Watcher feature sent detected anomalies to the Event database with default watches. You could also create custom watches.

OpenSearch uses two separate components to reproduce watches: notification channels and monitors. Setting up notification channels lets you reuse the same channel (such as an email group or the webhook for the Event database) across multiple monitors.

When you update to Unified Assurance 6.1, the default notification channel and monitor that send detected anomalies to the Event database are set up automatically. Because of the change in functionality, you must recreate any custom watches.

You can reproduce some Elasticsearch Watcher action types, such as webhook and email actions, with OpenSearch notification channels. Other Watcher action types have no OpenSearch equivalent.

OpenSearch supports some additional notification channels. See Notifications in the OpenSearch documentation for details.

To set up Watcher equivalents:

  1. Review your Elasticsearch watch configuration:

    1. Go to the following location:

      https://<WebFQDN>/go/k/app/management/insightsAndAlerting/watcher/watches
      

      The Elasticsearch Stack Management UI opens to the Watcher page.

    2. Select the Edit button next to your custom watch.

    3. Copy the JSON data or make note of the configuration options.

  2. Create an OpenSearch notification channel equivalent to the Elasticsearch watch action:

    1. From the main Unified Assurance navigation menu, select Analytics, then Events, then Administration, and then Management.

    2. Click Notifications.

    3. Click Create channel.

    4. Enter a channel name and description, select the appropriate type, and enter the required configurations.

      See Mapping Watch JSON Properties to OpenSearch Fields for an example of which JSON properties from an Elasticsearch watch map to OpenSearch notification channel fields.

    5. Click Create.

  3. Create an OpenSearch monitor equivalent to the Elasticsearch watch:

    1. From the OpenSearch navigation menu, under OpenSearch Plugins, select Alerting.

    2. Click the Monitors tab.

    3. Click Create monitor.

    4. Enter the required configurations, including actions under Triggers.

      See Mapping Watch JSON Properties to OpenSearch Fields for an example of which JSON properties from an Elasticsearch watch map to OpenSearch monitor fields.

    5. Click Create.

Note:

You can also manage notification channels and monitors by submitting API requests in the OpenSearch administration console. To access the console, from the Analytics menu, select Events, then Administration, then Console. See Notifications API and Alerting API in the OpenSearch documentation for information about the APIs. Be aware that the JSON structure for the API request bodies is different from the Elasticsearch structure.
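As the note mentions, you can also create the notification channel through the Notifications API. The following is a hedged sketch: the channel name, webhook URL, host, and credentials are placeholders, and the request body may need adjusting for your OpenSearch version.

```shell
# Sketch only: write a webhook channel definition, then submit it with the
# Notifications API. All names and URLs below are placeholders.
cat <<'EOF' > /tmp/event-db-channel.json
{
  "config": {
    "name": "event-db-webhook",
    "description": "Webhook channel for the Event database",
    "config_type": "webhook",
    "is_enabled": true,
    "webhook": {
      "url": "https://example.invalid/event-webhook"
    }
  }
}
EOF
# Submit it (placeholder host and credentials):
# curl -k -u admin:<password> -X POST "https://localhost:9200/_plugins/_notifications/configs" \
#   -H 'Content-Type: application/json' -d @/tmp/event-db-channel.json
```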

Mapping Watch JSON Properties to OpenSearch Fields

The following table lists the JSON properties of an advanced Elasticsearch watch for a webhook and their corresponding fields in OpenSearch monitors or notification channels. The major difference is that you configure the webhook details in the notification channel rather than the monitor. This allows you to configure the webhook details once and reuse them in different monitors.

Elasticsearch JSON Field | OpenSearch Feature | OpenSearch Field
trigger.schedule | Monitor | Monitor details: Schedule
input.search.request.body | Monitor | Query
input.search.request.indices | Monitor | Select data: Index
condition | Monitor | Triggers
actions.<webhook_name> | Monitor | Triggers: Actions: Action name
actions.<webhook_name>.webhook | Monitor | Channels (instead of specifying the webhook details in the action, you select the notification channel to use, and configure the details in the notification channel.)
actions.<webhook_name>.webhook.scheme | Notification channel | Type
actions.<webhook_name>.webhook.host | Notification channel | Host
actions.<webhook_name>.webhook.port | Notification channel | Port
actions.<webhook_name>.webhook.method | Notification channel | Method
actions.<webhook_name>.webhook.path | Notification channel | Path
actions.<webhook_name>.webhook.params | Notification channel | Query parameters
actions.<webhook_name>.webhook.headers | Notification channel | Webhook headers
actions.<webhook_name>.webhook.body | Monitor | Triggers: Actions: Message
throttle_period_in_millis | Monitor | Triggers: Actions: Action configuration: Throttling
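To illustrate the monitor side of this mapping, the following is a hedged sketch of a minimal monitor created through the Alerting API. The monitor name, index pattern, schedule, query, and trigger condition are placeholders, as are the host and credentials, and the request body may vary by OpenSearch version.

```shell
# Sketch only: a minimal query-level monitor definition for the Alerting API.
# Every value below is a placeholder to replace with your watch's settings.
cat <<'EOF' > /tmp/custom-monitor.json
{
  "type": "monitor",
  "name": "custom-watch-replacement",
  "enabled": true,
  "schedule": { "period": { "interval": 5, "unit": "MINUTES" } },
  "inputs": [{
    "search": {
      "indices": ["eventanalytics-*"],
      "query": {
        "size": 0,
        "query": { "match_all": {} }
      }
    }
  }],
  "triggers": [{
    "name": "any-hits",
    "severity": "1",
    "condition": {
      "script": {
        "source": "ctx.results[0].hits.total.value > 0",
        "lang": "painless"
      }
    },
    "actions": []
  }]
}
EOF
# Submit it (placeholder host and credentials):
# curl -k -u admin:<password> -X POST "https://localhost:9200/_plugins/_alerting/monitors" \
#   -H 'Content-Type: application/json' -d @/tmp/custom-monitor.json
```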

Recreating Snapshot Management Policies

Because you must add the snapshot repository folder to the OpenSearch configuration file before setting up snapshots, snapshot management policies cannot be migrated automatically.

To recreate your snapshot management policies in OpenSearch:

  1. Review your Elasticsearch Snapshot Lifecycle Management details:

    1. Go to the following location:

      https://<WebFQDN>/go/k/app/management/data/snapshot_restore/repositories
      

      The Elasticsearch Stack Management UI opens to the Repositories page in Snapshot and Restore.

    2. Select your snapshot repository and note the configuration settings, including any specifics about compression, chunk size, snapshot and restore bytes per second, and read-only settings.

    3. From the Elasticsearch Management menu, under Data, select Snapshot and Restore, and then click the Policies tab.

    4. Select a policy and note its settings.

  2. Add the mount directory as an OpenSearch repository path in an override file:

    1. In the command line, switch to the assure1 user:

      su - assure1
      
    2. Create the $A1BASEDIR/etc/opensearch.d/overrides directory, if it does not already exist, and switch to it:

      mkdir -p $A1BASEDIR/etc/opensearch.d/overrides
      cd $A1BASEDIR/etc/opensearch.d/overrides
      
    3. Create the opensearch.override file if it does not already exist, and add the following line to it:

      path.repo: ["/mnt/backups/opensearch"]
      
    4. Switch to the root user and set environment variables:

      su - root
      source <UA_home>/.bashrc
      

      where <UA_home> is the directory where you installed Unified Assurance, typically /opt/assure1.

    5. Run the ConfigHelper application:

      $A1BASEDIR/bin/ConfigHelper merge-restart Opensearch
      

      ConfigHelper merges the override and restarts OpenSearch.

  3. Recreate the repository in OpenSearch:

    1. From the Analytics menu, select Events, then Administration, and then Management.

    2. Click Snapshot Management.

    3. Under Snapshot Management, select Repositories.

    4. Click Create repository.

    5. Recreate the repository based on your Elasticsearch repository. The settings are the same, but in OpenSearch, you define the advanced settings using JSON.

  4. Recreate the policies in OpenSearch:

    1. Under Snapshot Management, select Snapshot policies.

    2. Click Create policy.

    3. Recreate the policies based on your Elasticsearch policies. The settings are the same, though in slightly different locations than when creating them in Elasticsearch. If you have already set up notification channels, you can also configure notifications for snapshot activities.

Note:

You can also manage snapshots by submitting API requests in the OpenSearch administration console. To access the console, from the Analytics menu, select Events, then Administration, then Console. See Snapshot management and Snapshot APIs in the OpenSearch documentation for information about the API.
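As an illustration of the API approach, the repository directory added to path.repo earlier can also be registered with the snapshot API. This is a hedged sketch: the repository name and settings are placeholders to match your Elasticsearch repository, as are the host and credentials.

```shell
# Sketch only: register a file-system snapshot repository whose location
# matches the path.repo override. Name and settings are placeholders.
cat <<'EOF' > /tmp/snapshot-repo.json
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/opensearch",
    "compress": true
  }
}
EOF
# Submit it (placeholder host, credentials, and repository name):
# curl -k -u admin:<password> -X PUT "https://localhost:9200/_snapshot/historical-backups" \
#   -H 'Content-Type: application/json' -d @/tmp/snapshot-repo.json
```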

See Snapshot Management in the OpenSearch documentation for more information about snapshots. See Backup and Restore in Unified Assurance System Administrator's Guide for more information about backing up and restoring Unified Assurance.

Recreating Custom Machine Learning Functionality

Unified Assurance does not provide any machine learning functionality other than anomaly detection. No Elasticsearch machine learning functionality that you may have implemented, other than custom anomaly detection jobs, is migrated automatically. You are responsible for reviewing any custom Elasticsearch machine learning implementation and recreating it in OpenSearch if needed.

OpenSearch uses the ML Commons machine learning framework, which is entirely different from the Elasticsearch model. See Machine learning in the OpenSearch documentation for information.

Updating Custom Unified Assurance Database Queries

When you update to Unified Assurance 6.1, the default Example Historical Schema Query database query for the Historical database is automatically updated to work with OpenSearch.

Because the structure of queries is different in OpenSearch, you must manually update any custom Historical database queries.

To update custom queries:

  1. From the main Unified Assurance navigation menu, select Configuration, then Databases, and then Queries.

  2. Locate your custom Historical database queries. You can filter the list by entering Historical under Schema.

  3. Update the SQL query to match the format required by the Historical database. You may need to experiment and test your queries using the Query Tools console. Use the following general guidelines:

    • Remove quotes from the index names.

    • Wrap date references in DATE_ADD().

    • Replace TODAY() with NOW().

For example, for Elasticsearch, the default Example Historical Schema Query used the following format:

SELECT Node, COUNT(*) AS Total FROM "eventanalytics-*" WHERE LastReported > TODAY() - INTERVAL 5 DAYS GROUP BY Node ORDER BY COUNT(*) DESC

For OpenSearch, this has been updated to:

SELECT Node, COUNT(*) AS Total FROM eventanalytics-* WHERE LastReported > DATE_ADD(NOW(), INTERVAL - 5 DAY) GROUP BY Node ORDER BY COUNT(*) DESC
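You can also test an updated query from the command line by posting it to the OpenSearch SQL endpoint. This is a hedged sketch; the host and credentials are placeholders.

```shell
# Sketch only: submit the updated example query to the SQL endpoint.
cat <<'EOF' > /tmp/historical-query.json
{
  "query": "SELECT Node, COUNT(*) AS Total FROM eventanalytics-* WHERE LastReported > DATE_ADD(NOW(), INTERVAL - 5 DAY) GROUP BY Node ORDER BY COUNT(*) DESC"
}
EOF
# Submit it (placeholder host and credentials):
# curl -k -u admin:<password> -X POST "https://localhost:9200/_plugins/_sql" \
#   -H 'Content-Type: application/json' -d @/tmp/historical-query.json
```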

Finalizing the Migration

To finalize the migration:

  1. Confirm that you are done migrating all data.

  2. Run AnalyticsWizard a final time to remove Elasticsearch.

    Caution:

    You cannot undo this.

    $A1BASEDIR/bin/historical/AnalyticsWizard --Finalize-Update
    
  3. When prompted to delete Elasticsearch stack references and data, type DELETE and press Enter.

    AnalyticsWizard deletes all Elasticsearch components, data, and directories, including Filebeat.

  4. Manually remove the Kibana directories and artifacts by running the following commands:

    rm -rf $A1BASEDIR/distrib/packages/vendorKibana*;
    rm -rf $A1BASEDIR/vendor/kibana;
    rm -rf $A1BASEDIR/distrib/config/vendorKibana;
    rm -rf $A1BASEDIR/distrib/records/vendorKibana;
    rm -rf $A1BASEDIR/etc/kibana.d;
    rm -rf $A1BASEDIR/www/go/k;
    
  5. Update the cluster configuration by running the following command:

    $A1BASEDIR/bin/cluster/clusterctl update-config
    
  6. Optionally, if you performed an in-place upgrade and scaled back your Elasticsearch memory settings before migrating, you can scale up the OpenSearch memory settings. Elasticsearch is no longer consuming extra memory.

    You can adjust the memory settings in the OPENSEARCH_JAVA_OPTS parameter of the $A1BASEDIR/vendor/opensearch/config/custom-env file.
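For example, the heap settings in the custom-env file look similar to the following. The 8 GB values are placeholders only; size the heap for your server (commonly at most half of system RAM).

```shell
# Illustrative only: example heap sizing for OPENSEARCH_JAVA_OPTS.
# Set minimum (-Xms) and maximum (-Xmx) heap to the same value; 8g is a placeholder.
OPENSEARCH_JAVA_OPTS="-Xms8g -Xmx8g"
```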

The migration is complete.

Post-Migration Tasks

After migrating your data:

  1. Set up Observability Analytics, including enabling the webhook, setting up CAPE functionality, and enabling anomaly detection. See Post Install Actions for Observability Analytics in Unified Assurance Concepts.

  2. Complete the remaining post-update tasks, such as redeploying microservices. See Post Update Tasks.