Troubleshooting the Oracle NoSQL Database Migrator

Learn about the general challenges that you may face while using the Oracle NoSQL Database Migrator, and how to resolve them.

Migration has failed. How can I resolve this?

A data migration can fail for multiple underlying reasons. The most common causes are listed below:

Table 1-5 Migration Failure Causes

Error Message: Failed to connect to Oracle NoSQL Database
Meaning: The migrator could not establish a connection with the Oracle NoSQL Database.
Resolution:
  • Check if the values of the storeName and helperHosts attributes in the configuration JSON file are valid and that the hosts are reachable (see the sample configuration after this table).
  • For a secured store, verify that the security file is valid, with the correct user name and password values.

Error Message: Failed to connect to Oracle NoSQL Database Cloud Service
Meaning: The migrator could not establish a connection with the Oracle NoSQL Database Cloud Service.
Resolution:
  • Verify if the endpoint URL or region name specified in the configuration JSON file is correct.
  • Check if the OCI credentials file is available in the path specified in the configuration JSON file.
  • Ensure that the OCI credentials provided in the OCI credentials file are valid.

Error Message: Table not found
Meaning: The table identified for the migration could not be located by the NoSQL Database Migrator.
Resolution:

For the Source:

  • Verify if the table is present in the source database.
  • Ensure that the table is qualified with its namespace in the configuration JSON file, if the table is created in a non-default namespace.
  • Verify if you have the required read/write authorization to access the table.
  • If the source is Oracle NoSQL Database Cloud Service, verify if a valid compartment name is specified in the configuration JSON file, and ensure that you have the required authorization to access the table.

For the Sink:

  • Verify if the table is present in the sink. If it does not exist, you must either create the table manually or use the schemaInfo configuration to create it through the migration.

Error Message: DDL Execution failed
Meaning: The DDL commands provided in the input schema definition file are invalid.
Resolution:
  • Check the syntax of the DDL commands in the schemaPath file.
  • Ensure that there is only one DDL statement per line in the schemaPath file.

Error Message: failed to write record to the sink table with java.lang.IllegalArgumentException
Meaning: The input record does not match the table schema of the sink.
Resolution:
  • Check if the data types and column names of the input records match the sink table schema.
  • If you applied any transformation, check if the transformed records match the sink table schema.

Error Message: Request timeout
Meaning: The operation at the source or sink did not complete within the expected time.
Resolution:
  • Verify the network connection.
  • Check if the NoSQL Database is up and running.
  • Try increasing the requestTimeout value in the configuration JSON file.
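The following is a minimal sketch of the connection-related attributes referenced above, assuming an on-premises source and an Oracle NoSQL Database Cloud Service sink. Only storeName, helperHosts, security, endpoint, compartment, the credentials file path, and requestTimeout are taken from the table above; the type values, table names, and paths are illustrative placeholders, and the exact attribute names and layout supported by your Migrator version may differ, so treat this as a sketch rather than a complete template.

    {
      "source": {
        "type": "nosqldb",
        "table": "myNamespace:myTable",
        "storeName": "kvstore",
        "helperHosts": ["host1:5000", "host2:5000"],
        "security": "/home/user/client.credentials"
      },
      "sink": {
        "type": "nosqldb_cloud",
        "endpoint": "us-ashburn-1",
        "table": "myTable",
        "compartment": "myCompartment",
        "credentials": "/home/user/.oci/config",
        "requestTimeout": 15000
      }
    }

If a connection failure persists, verify the values in this part of the configuration file first, since most of the failures in the table above trace back to these attributes.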

What should I consider before restarting a failed migration?

When a data migration task fails, the sink will be in an intermediate state, containing the data imported up to the point of failure. You can identify the error and failure details from the logs, and restart the migration after diagnosing and correcting the error. A restarted migration starts over and processes all data from the beginning; there is no way to checkpoint and resume the migration from the point of failure. Therefore, the NoSQL Database Migrator overwrites any records that were already migrated to the sink.

Best Practices

The time taken for the data migration depends on multiple factors, such as the volume of data being migrated, network speed, and the current load on the database. In the case of a cloud service, the speed of migration also depends on the provisioned read throughput and write throughput. So, to improve the migration speed, you can:

  • Consider running the migration during off-hours, when the load on the database is lower.
  • Consider locating the VM where the NoSQL Database Migrator runs, the data source, and the data sink in the same OCI region to ensure minimal network latencies.
  • In the case of Oracle NoSQL Database Cloud Service, verify that the storage allocated for the table is sufficient. If the NoSQL Database Migrator is not creating the table, you can increase the write throughput. If the Migrator is creating the table, consider specifying a higher value for the schemaInfo.writeUnits parameter in the sink configuration (see the sample sink configuration after the note below). After the data migration completes, you can lower this value.

    Note:

    There is no limitation on the number of times you can increase the throughput or storage limits. You can decrease the throughput or storage limits only up to four times in a 24-hour period. See Cloud Limits and Sink Configuration Templates.
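As a rough illustration, the following sketch shows where the schemaInfo.writeUnits parameter mentioned above sits in a Cloud Service sink configuration when the Migrator creates the table. Only schemaInfo and writeUnits are taken from this section; schemaPath, readUnits, and storageSize are assumed companion attributes that your Migrator version may name differently, and all values are placeholders to be tuned for your table.

    "sink": {
      "type": "nosqldb_cloud",
      "endpoint": "us-ashburn-1",
      "table": "myTable",
      "compartment": "myCompartment",
      "schemaInfo": {
        "schemaPath": "/home/user/schema.ddl",
        "readUnits": 100,
        "writeUnits": 500,
        "storageSize": 5
      }
    }

Raise writeUnits for the duration of the load and scale it back down after the migration completes, keeping in mind the limit of four decreases in a 24-hour period noted above.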
The Migrator utility is inherently designed to achieve higher migration speed by processing multiple streams in parallel. The following points suggest how to leverage this capability for various migration scenarios:
  • Migrating from Oracle NoSQL Database Cloud Service/on-premises tables to File system/Object Storage sink:

    Set the useMultiFiles and chunkSize parameters in the Migrator configuration. The useMultiFiles parameter makes the Migrator create multiple files/objects at the sink. The chunkSize parameter determines the size of each file during the data export (see the sample configuration after this list).

    For example: to export 2 GB of data, setting the useMultiFiles parameter to true and the chunkSize parameter to 40 MB causes the Migrator utility to write 50 files of 40 MB each.

    Note:

    The Migrator utility can currently process 100 streams in parallel. Therefore, set the chunkSize parameter to an optimal file size value such that the Migrator utility creates a maximum of 100 files during data export.
  • Migrating from a File system/Object Storage to Oracle NoSQL Database Cloud Service/on-premises sink:
    • If your File system/Object Storage has exported data containing multiple files/objects from a previous migration, the Migrator utility automatically processes files in parallel to achieve higher migration speed while importing the data.
    • If you are migrating data from other external File systems/Object Storage, consider splitting data into multiple files/multiple objects at the data source.

    Note:

    • In the case of an Oracle NoSQL Database Cloud Service sink, you must configure sufficient write throughput and table write units percentage to process up to 100 streams during the migration operation.
    • If you have more than 100 source files, the Migrator utility creates a maximum of 100 streams and distributes the files among them during the data import. The files in each stream are migrated sequentially.
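To make the export-side settings concrete, here is a minimal sketch of a file-system sink configured with the useMultiFiles and chunkSize parameters described above. The two parameter names come from this section; the source block, the type and format values, and the paths are illustrative assumptions, and chunkSize is shown in MB as implied by the 40 MB example.

    {
      "source": {
        "type": "nosqldb",
        "table": "myTable",
        "storeName": "kvstore",
        "helperHosts": ["host1:5000"]
      },
      "sink": {
        "type": "file",
        "format": "json",
        "dataPath": "/home/user/export",
        "useMultiFiles": true,
        "chunkSize": 40
      }
    }

With these settings, a 2 GB export produces roughly 50 files of 40 MB each; pick a chunkSize large enough that the total file count stays at or below the 100-stream ceiling noted above.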

I have a long-running migration involving huge datasets. How can I track the progress of the migration?

You can enable additional logging to track the progress of a long-running migration. To control the logging behavior of the Oracle NoSQL Database Migrator, set the desired level of logging in the logging.properties file. This file is provided with the NoSQL Database Migrator package and is available in the directory where the Migrator was unpacked.

The available log levels, in order of increasing verbosity, are OFF, SEVERE, WARNING, INFO, FINE, and ALL. Setting the log level to OFF turns off all logging, whereas setting it to ALL provides the full log information. The default log level is WARNING. All logging output goes to the console by default. See the comments in the logging.properties file for details about each log level.
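As a hedged sketch, entries along the following lines in logging.properties raise the verbosity from the default WARNING to INFO so that progress messages appear on the console. The property names follow the standard java.util.logging format common to Java tooling, but the exact entries shipped in your Migrator's logging.properties may differ; adjust the file you find in the unpacked directory rather than copying this verbatim.

    # Send log output to the console (the Migrator's default behavior).
    handlers=java.util.logging.ConsoleHandler

    # Raise the global level from the default WARNING to INFO to see
    # progress messages; use FINE or ALL for even more detail.
    .level=INFO

    # The handler must be at least as verbose as the loggers it serves.
    java.util.logging.ConsoleHandler.level=INFO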