Migrating the Historical Database
Migrating your Oracle Communications Unified Assurance Historical database servers from Elasticsearch to OpenSearch involves performing the following tasks:
In a redundant environment, perform all of the tasks for the primary server, and then repeat them for the secondary server.
Upgrading and Updating
The upgrade or update tasks required depend on your starting version. The following table lists the supported migration path for each starting version, including any interim steps. Because downtime is required for each major version upgrade and each operating system upgrade, you will need to schedule maintenance windows.
Current Version | Version 5 Step | Operating System Upgrade | Version 6 Step | Documentation |
---|---|---|---|---|
4.87 (on Linux 6 or 7) | Upgrade to 5.5.24 | Upgrade to Linux 8 | Upgrade to 6.1 | Upgrading from Assure1 to Unified Assurance |
5.x (Linux 7) | Update to 5.5.24 | Upgrade to Linux 8 | Upgrade to 6.1 | Upgrading from Assure1 to Unified Assurance |
6.0.x (Linux 7) | N/A | Upgrade to Linux 8 | Update to 6.1 | Upgrading Unified Assurance to a New Linux System or Upgrading Unified Assurance on an Existing Linux System |
6.0.4 or higher (Linux 8) | N/A | N/A | Update to 6.1 | Updating Unified Assurance |
-
If you are currently running version 4 or 5 of Unified Assurance (then called Assure1):
-
Upgrade or update to version 5.5.24.
The documentation for upgrading from version 4 to version 5 is available from My Oracle Support. See Getting Earlier Versions of the Documentation in Unified Assurance Installation Guide for information about finding the documentation.
-
Upgrade your operating system to Linux 8.
If you are currently using Linux 6, Oracle recommends installing a new Linux 8 system and synchronizing your Unified Assurance data to it (called a forklift upgrade) rather than attempting an in-place upgrade. If you want to perform an in-place upgrade, you must first upgrade to Linux 7, and then to Linux 8.
If you are currently using Linux 7, you can perform an in-place upgrade using the Leapp utility, or you can perform a forklift upgrade onto a new Linux 8 system.
-
Upgrade to Unified Assurance 6.1.
See Upgrading from Assure1 to Unified Assurance in Unified Assurance Installation Guide for complete information.
-
-
If you are currently running Unified Assurance version 6 on Linux 7:
-
Upgrade your operating system to Linux 8.
-
Update your environment to the latest version of Unified Assurance.
See Upgrading Unified Assurance to a New Linux System or Upgrading Unified Assurance on an Existing Linux System in Unified Assurance Installation Guide for complete information.
-
-
If you are currently running Unified Assurance version 6 on Linux 8, update your environment to the latest version of Unified Assurance.
See Updating Unified Assurance in Unified Assurance Installation Guide for complete information.
When you have performed all upgrades and updates and have Unified Assurance version 6.1 on your system, review the information in About Automatically Migrating Artifacts.
About Automatically Migrating Artifacts
When you run Package update at the end of updating your Historical database servers to the latest version of Unified Assurance, the command also runs AnalyticsWizard to automatically migrate artifacts from Elasticsearch to OpenSearch.
AnalyticsWizard does the following:
-
Sets up Elasticsearch on port 8200 with Kibana on port 4601.
You can access this at the following location:
http://<WebFQDN>/go/k
where <WebFQDN> is the web FQDN of your Unified Assurance presentation server.
-
Sets up OpenSearch on port 9200 with OpenSearch Dashboards on port 5601.
You can access this in the following ways:
-
From the main navigation menu of the Unified Assurance UI by selecting Analytics, then Event Analytics, and then Home.
-
At the following location:
http://<WebFQDN>/go/o
-
-
Migrates the Elasticsearch components listed in Automatically Migrated Artifacts.
You can monitor or review the migration progress by tailing the following log file:
$A1BASEDIR/logs/MigrateHistoricalObjects.log
For any failures, you can find the Elasticsearch export and OpenSearch import files in the following directory:
$A1BASEDIR/tmp/migration/
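For example, from the command line on the Historical database server, you can follow the log and, after any failures, list the export and import files:
tail -f $A1BASEDIR/logs/MigrateHistoricalObjects.log
ls -l $A1BASEDIR/tmp/migration/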
Migrating Indexes
After the package update process that automatically migrates artifacts is complete, you can migrate your Historical database indexes.
-
If you are migrating Flow Analytics indexes, confirm that you have deployed the Flow Collector microservice. You can check this in the UI by selecting the Configuration menu, then Microservices, and then Installed.
If it is not deployed, deploy it now. See Deploying a Microservice by Using the UI and Flow Collector in Unified Assurance Implementation Guide for information.
-
In the command line of the Historical database server, run MigrateHistoricalData to either migrate an entire index or migrate in time-based increments:
-
To migrate an entire index:
$A1BASEDIR/bin/historical/MigrateHistoricalData --Index <index_pattern>
where <index_pattern> is the index pattern to migrate. For Event Analytics, use eventanalytics*. For Flow Analytics, use network-*. For sample commands covering both approaches, see the examples after these steps.
Caution:
If you do not specify an index pattern, the utility iterates through all index patterns, including logs, which requires significant storage space and memory. Oracle does not recommend this.
-
To migrate a time-based increment, starting with the most recent increment and moving back in time:
$A1BASEDIR/bin/historical/MigrateHistoricalData --Index <index_pattern> --From <from_date> --To <to_date>
where:
-
<from_date> is the date of the oldest document in the time-based increment to migrate, in YYYY-MM-DD format.
-
<to_date> is the date of the newest document in the time-based increment to migrate, in YYYY-MM-DD format.
If you only set <from_date>, then <to_date> defaults to the current date. If you only set <to_date>, then <from_date> defaults to 0, representing the oldest possible date.
The time ranges are exclusive. This means that if you set the range to --From 2025-03-10 --To 2025-04-10, documents from 00:00 on March 11 through 23:59 on April 9 will be migrated.
-
-
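For example, the following sketches use the index patterns described above and hypothetical dates; the patterns are quoted so that the shell does not expand the wildcards. To migrate all Event Analytics indexes at once:
$A1BASEDIR/bin/historical/MigrateHistoricalData --Index "eventanalytics*"
To migrate Flow Analytics indexes one month at a time, newest first, remember that the ranges are exclusive: set <from_date> to the last day of the month before the one you want and <to_date> to the first day of the month after it. For example, to migrate April 2025 and then March 2025:
$A1BASEDIR/bin/historical/MigrateHistoricalData --Index "network-*" --From 2025-03-31 --To 2025-05-01
$A1BASEDIR/bin/historical/MigrateHistoricalData --Index "network-*" --From 2025-02-28 --To 2025-04-01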
-
Monitor the migration progress by tailing the following log file:
$A1BASEDIR/logs/MigrateHistoricalData.log
-
Optionally, if you are migrating data incrementally and want to save disk space, you can pause between increments and delete the successfully migrated data from Elasticsearch.
The index migration is complete. Proceed to Manual Migration Tasks.
Manual Migration Tasks
You must manually recreate any of the following Elasticsearch components that apply in your environment:
-
Elasticsearch dashboards created with Kibana Lens or Canvas. Other dashboards have been migrated automatically. See Recreating Lens or Canvas Dashboards.
-
Custom Elasticsearch Watcher watches. These are replaced by OpenSearch notification channels and monitors; the default notification channel and monitor that send detected anomalies to the Event database are set up automatically. See Recreating Watches.
-
Elasticsearch Snapshot Management policies for automated backups. See Recreating Snapshot Management Policies.
-
Custom Unified Assurance database queries. The default sample query is included automatically. See Updating Custom Unified Assurance Database Queries.
Recreating Lens or Canvas Dashboards
Unified Assurance included default Kibana dashboards for Elasticsearch. When you update to Unified Assurance 6.1, the same default dashboards are automatically included for OpenSearch.
In Kibana, you could also edit dashboards or create new ones, optionally creating visualizations using the Kibana Lens or Canvas features. AnalyticsWizard automatically migrates most custom dashboards, but Lens and Canvas visualizations are not directly compatible with OpenSearch Dashboards. You must manually recreate dashboards that use Lens or Canvas in OpenSearch.
OpenSearch Dashboards offers similar drag-and-drop visualization building through VisBuilder. See Building data visualizations in the OpenSearch Dashboards documentation for more information.
Recreating Watches
In Elasticsearch, default watches used the Watcher feature to send detected anomalies to the Event database. You could also create custom watches.
OpenSearch uses two separate components to reproduce watches: notification channels and monitors. Setting up notification channels lets you reuse the same channel (such as an email group or the webhook for the Event database) across multiple monitors.
When you update to Unified Assurance 6.1, the default notification channel and monitor that send detected anomalies to the Event database are set up automatically. Because of the change in functionality, you must recreate any custom watches.
You can reproduce the following Elasticsearch Watcher action types with OpenSearch notification channels:
-
Email: You can create email notification channels and configure email recipient groups.
-
Slack: Configured with a Slack webhook URL.
-
Webhook: Use the custom webhook channel type.
The following Elasticsearch Watcher action types are not supported in OpenSearch:
-
Logging
-
Index
-
PagerDuty
-
Jira
OpenSearch supports some additional notification channels. See Notifications in the OpenSearch documentation for details.
To set up Watcher equivalents:
-
Review your Elasticsearch watch configuration:
-
Go to the following location:
https://<WebFQDN>/go/k/app/management/insightsAndAlerting/watcher/watches
The Elasticsearch Stack Management UI opens to the Watcher page.
-
Select the Edit button next to your custom watch.
-
Copy the JSON data or make note of the configuration options.
-
-
Create an OpenSearch notification channel equivalent to the Elasticsearch watch action:
-
From the main Unified Assurance navigation menu, select Analytics, then Events, then Administration, and then Management.
-
Click Notifications.
-
Click Create channel.
-
Enter a channel name and description, select the appropriate type, and enter the required configurations.
See Mapping Watch JSON Properties to OpenSearch Fields for an example of which JSON properties from an Elasticsearch watch map to OpenSearch notification channel fields.
-
Click Create.
-
-
Create an OpenSearch monitor equivalent to the Elasticsearch watch:
-
From the OpenSearch navigation menu, under OpenSearch Plugins, select Alerting.
-
Click the Monitors tab.
-
Click Create monitor.
-
Enter the required configurations, including actions under Triggers.
See Mapping Watch JSON Properties to OpenSearch Fields for an example of which JSON properties from an Elasticsearch watch map to OpenSearch monitor fields.
-
Click Create.
-
Note:
You can also manage notification channels and monitors by submitting API requests in the OpenSearch administration console. To access the console, from the Analytics menu, select Events, then Administration, then Console. See Notifications API and Alerting API in the OpenSearch documentation for information about the APIs. Be aware that the JSON structure for the API request bodies is different from the Elasticsearch structure.
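For example, the following is a minimal sketch of creating a webhook notification channel through the Notifications API using curl from the Historical database server. It assumes OpenSearch answers on localhost:9200 without additional authentication; the channel name, description, and URL are placeholders to replace with your own values:
curl -s -X POST "http://localhost:9200/_plugins/_notifications/configs" \
  -H "Content-Type: application/json" \
  -d '{
    "config": {
      "name": "example-event-webhook",
      "description": "Hypothetical webhook channel",
      "config_type": "webhook",
      "is_enabled": true,
      "webhook": {
        "url": "https://example.com/event/webhook"
      }
    }
  }'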
Mapping Watch JSON Properties to OpenSearch Fields
The following table lists the JSON properties of an advanced Elasticsearch watch for a webhook and their corresponding fields in OpenSearch monitors or notification channels. The major difference is that you configure the webhook details in the notification channel rather than in the monitor, which lets you configure them once and reuse them in different monitors.
Elasticsearch JSON Field | OpenSearch Feature | OpenSearch Field |
---|---|---|
trigger.schedule | Monitor | Monitor details: Schedule |
input.search.request.body | Monitor | Query |
input.search.request.indices | Monitor | Select data: Index |
condition | Monitor | Triggers |
actions.<webhook_name> | Monitor | Triggers: Actions: Action name |
actions.<webhook_name>.webhook | Monitor | Channels (Instead of specifying the webhook details in the action, you select the notification channel to use and configure the details there.) |
actions.<webhook_name>.webhook.scheme | Notification channel | Type |
actions.<webhook_name>.webhook.host | Notification channel | Host |
actions.<webhook_name>.webhook.port | Notification channel | Port |
actions.<webhook_name>.webhook.method | Notification channel | Method |
actions.<webhook_name>.webhook.path | Notification channel | Path |
actions.<webhook_name>.webhook.params | Notification channel | Query parameters |
actions.<webhook_name>.webhook.headers | Notification channel | Webhook headers |
actions.<webhook_name>.webhook.body | Monitor | Triggers: Actions: Message |
throttle_period_in_millis | Monitor | Triggers: Actions: Action configuration: Throttling |
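To see these properties for one of your own watches, you can also retrieve the watch JSON directly from Elasticsearch, which remains available on port 8200 during the migration. The following is a sketch, assuming you run it on the Historical database server without additional authentication, where <watch_id> is the ID of your watch:
curl -s "http://localhost:8200/_watcher/watch/<watch_id>"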
Recreating Snapshot Management Policies
Because you must add the snapshot repository folder to the OpenSearch configuration file before setting up snapshots, snapshot management policies cannot be migrated automatically.
To recreate your snapshot management policies in OpenSearch:
-
Review your Elasticsearch Snapshot Lifecycle Management details:
-
Go to the following location:
https://<WebFQDN>/go/k/app/management/data/snapshot_restore/repositories
The Elasticsearch Stack Management UI opens to the Repositories page in Snapshot and Restore.
-
Select your snapshot repository and note the configuration settings, including any specifics about compression, chunk size, maximum snapshot and restore bytes per second, and read-only settings.
-
From the Elasticsearch Management menu, under Data, select Snapshot and Restore, and then click the Policies tab.
-
Select a policy and note its settings.
-
-
Add the mount directory as an OpenSearch repository path in an override file:
-
In the command line, switch to the assure1 user:
su - assure1
-
Create the $A1BASEDIR/etc/opensearch.d/overrides directory, if it does not already exist, and switch to it:
mkdir -p $A1BASEDIR/etc/opensearch.d/overrides
cd $A1BASEDIR/etc/opensearch.d/overrides
-
Create the opensearch.override file if it does not already exist, and add the following line to it:
path.repo: ["/mnt/backups/opensearch"]
where /mnt/backups/opensearch is the mount directory for your snapshot backups.
-
Switch to the root user and set environment variables:
su - root
source <UA_home>/.bashrc
where <UA_home> is the directory where you installed Unified Assurance, typically /opt/assure1.
-
Run the ConfigHelper application:
$A1BASEDIR/bin/ConfigHelper merge-restart Opensearch
ConfigHelper merges the override and restarts OpenSearch.
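You can confirm that the repository path was applied with a request like the following, assuming OpenSearch answers on localhost:9200 without additional authentication:
curl -s "http://localhost:9200/_nodes/settings?filter_path=nodes.*.settings.path.repo"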
-
-
Recreate the repository in OpenSearch:
-
From the Analytics menu, select Events, then Administration, and then Management.
-
Click Snapshot Management.
-
Under Snapshot Management, select Repositories.
-
Click Create repository.
-
Recreate the repository based on your Elasticsearch repository. The settings are the same, but in OpenSearch, you define the advanced settings using JSON.
-
-
Recreate the policies in OpenSearch:
-
Under Snapshot Management, select Snapshot policies.
-
Click Create policy.
-
Recreate the policies based on your Elasticsearch policies. The settings are the same, though in slightly different locations than when creating them in Elasticsearch, and, if you have already set up notification channels, you can configure notifications for snapshot activities.
-
Note:
You can also manage snapshots by submitting API requests in the OpenSearch administration console. To access the console, from the Analytics menu, select Events, then Administration, then Console. See Snapshot management and Snapshot APIs in the OpenSearch documentation for information about the API.
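For example, the following is a minimal sketch of registering the file system repository through the Snapshot API using curl, assuming the path.repo override shown earlier and a hypothetical repository name:
curl -s -X PUT "http://localhost:9200/_snapshot/historical-backups" \
  -H "Content-Type: application/json" \
  -d '{"type": "fs", "settings": {"location": "/mnt/backups/opensearch", "compress": true}}'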
See Snapshot Management in the OpenSearch documentation for more information about snapshots. See Backup and Restore in Unified Assurance System Administrator's Guide for more information about backing up and restoring Unified Assurance.
Recreating Custom Machine Learning Functionality
Unified Assurance does not provide any machine learning functionality other than anomaly detection. Apart from custom anomaly detection jobs, no Elasticsearch machine learning functionality that you may have implemented is migrated automatically. You are responsible for reviewing any custom Elasticsearch machine learning implementation and recreating it in OpenSearch if needed.
OpenSearch uses the ML Commons machine learning framework, which is entirely different from the Elasticsearch model. See Machine learning in the OpenSearch documentation for information.
Updating Custom Unified Assurance Database Queries
When you update to Unified Assurance 6.1, the default Example Historical Schema Query database query for the Historical database is automatically updated to work with OpenSearch.
Because the structure of queries is different in OpenSearch, you must manually update any custom Historical database queries.
To update custom queries:
-
From the main Unified Assurance navigation menu, select Configuration, then Databases, and then Queries.
-
Locate your custom Historical database queries. You can filter the list by entering Historical under Schema.
-
Update the SQL query to match the format required by the Historical database. You may need to experiment and test your queries using the Query Tools console. Use the following general guidelines:
-
Remove quotes from the index names.
-
Wrap date references in DATE_ADD().
-
Replace TODAY() with NOW().
-
For example, for Elasticsearch, the default Example Historical Schema Query used the following format:
SELECT Node, COUNT(*) AS Total FROM "eventanalytics-*" WHERE LastReported > TODAY() - INTERVAL 5 DAYS GROUP BY Node ORDER BY COUNT(*) DESC
For OpenSearch, this has been updated to:
SELECT Node, COUNT(*) AS Total FROM eventanalytics-* WHERE LastReported > DATE_ADD(NOW(), INTERVAL - 5 DAY) GROUP BY Node ORDER BY COUNT(*) DESC
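You can also test a rewritten query from the command line through the OpenSearch SQL plugin. The following is a sketch, assuming OpenSearch answers on localhost:9200 without additional authentication:
curl -s -X POST "http://localhost:9200/_plugins/_sql" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT Node, COUNT(*) AS Total FROM eventanalytics-* WHERE LastReported > DATE_ADD(NOW(), INTERVAL - 5 DAY) GROUP BY Node ORDER BY COUNT(*) DESC"}'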
Finalizing the Migration
To finalize the migration:
-
Confirm that you are done migrating all data.
-
Run AnalyticsWizard a final time to remove Elasticsearch.
Caution:
You cannot undo this.
$A1BASEDIR/bin/historical/AnalyticsWizard --Finalize-Update
-
When prompted to delete Elasticsearch stack references and data, type DELETE and press Enter.
AnalyticsWizard deletes all Elasticsearch components, data, and directories, including Filebeat.
-
Manually remove the Kibana directories and artifacts by running the following commands:
rm -rf $A1BASEDIR/distrib/packages/vendorKibana*
rm -rf $A1BASEDIR/vendor/kibana
rm -rf $A1BASEDIR/distrib/config/vendorKibana
rm -rf $A1BASEDIR/distrib/records/vendorKibana
rm -rf $A1BASEDIR/etc/kibana.d
rm -rf $A1BASEDIR/www/go/k
-
Update the cluster configuration by running the following command:
$A1BASEDIR/bin/cluster/clusterctl update-config
-
Optionally, if you performed an in-place upgrade and scaled back your Elasticsearch memory settings before migrating, you can scale up the OpenSearch memory settings, because Elasticsearch is no longer consuming extra memory.
You can adjust the memory settings in the OPENSEARCH_JAVA_OPTS parameter of the $A1BASEDIR/vendor/opensearch/config/custom-env file.
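For example, a hypothetical setting that fixes the OpenSearch heap at 8 GB (choose values appropriate for your hardware):
OPENSEARCH_JAVA_OPTS="-Xms8g -Xmx8g"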
The migration is complete.
Post-Migration Tasks
After migrating your data:
-
Set up Observability Analytics, including enabling the webhook, setting up CAPE functionality, and enabling anomaly detection. See Post Install Actions for Observability Analytics in Unified Assurance Concepts.
-
Complete the remaining post-update tasks, such as redeploying microservices. See Post Update Tasks.