Migrate to Autonomous AI Database
Oracle Zero Downtime Migration (ZDM) can be used to facilitate complex database migrations with maximum availability and minimal disruption. It uses the best practices of Oracle’s Maximum Availability Architecture (MAA) during the migration process. This topic outlines the use of Oracle Zero Downtime Migration for migrations to Autonomous AI Database. This capability supports moving databases across different platforms and versions, allowing deployment in multicloud environments.
Target Environment: Autonomous AI Database
Autonomous AI Database is a database service that automatically manages tasks such as provisioning, security, patching, and tuning. This automation reduces operational costs and helps prevent human error. When you deploy Autonomous AI Database on Oracle AI Database@Google Cloud, it uses a native connection with low latency to Google Cloud applications. This setup helps maintain high performance and makes it easier to integrate with other Google Cloud services.
Table 1-1 Key Benefits of Using Oracle Zero Downtime Migration for Autonomous AI Database Migrations

- Operational Simplicity: Oracle Zero Downtime Migration provides an end-to-end, automated, and streamlined orchestration process, often described as a single-button experience, to reduce the complexity of fleet-wide migrations.
- Minimal Risk: The tool aligns with MAA best practices and includes intelligent pre-validation checks to reduce the risk of migration failures.
- High Flexibility: Oracle Zero Downtime Migration's logical migration paths support moving databases between both identical and different database versions and platforms, enabling modernization during the migration.
- Integrated Performance: You can benefit from the automated management of Autonomous AI Database combined with the low-latency network performance provided by the Oracle AI Database@Google Cloud infrastructure.
- Cost-Effectiveness: Oracle Zero Downtime Migration is available at no cost, allowing organizations to leverage its advanced automation without additional licensing expense.

Supported Migration Workflows
Oracle Zero Downtime Migration supports two workflows designed to move databases between different source and target environments.
- Logical Online Migration
- Downtime Profile: Minimal (Near Zero).
- Compatibility: Supports migrations between the same or different database versions and platforms.
- Process: This uses Oracle Data Pump export and import to create the target database. Google Cloud Managed NFS Server provides an NFS file share to store the Oracle Data Pump dump files. Oracle GoldenGate keeps the source and target databases in sync to achieve a minimal downtime migration.
- Logical Offline Migration
- Downtime Profile: Offline (Requires a planned application maintenance window).
- Compatibility: Supports migrations between the same or different database versions and platforms.
- Process: This uses Oracle Data Pump export and import to create the target database. Google Cloud Managed NFS Server provides an NFS file share to store the Oracle Data Pump dump files.
Note
Logical migration methods are recommended for migrating to Autonomous AI Database because they support moving data across different platforms and database versions, which is often needed when using a managed cloud service.
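As an illustrative sketch only (the host name, database name, SSH key path, and response-file path below are placeholder assumptions for your environment), a ZDM logical migration is typically first evaluated, then executed, with the zdmcli client and a response file that sets parameters such as MIGRATION_METHOD=ONLINE_LOGICAL:

```
# Dry-run evaluation of the migration job; drop -eval to perform the migration.
# All names and paths are placeholders.
$ZDM_HOME/bin/zdmcli migrate database \
  -sourcedb srcdb \
  -sourcenode srcdbhost.example.com \
  -srcauth zdmauth \
  -srcarg1 user:opc \
  -srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
  -rsp /home/zdmuser/logical_online.rsp \
  -eval
```

Running with -eval first lets the pre-validation checks described above catch configuration problems before any data movement begins.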
For more information on Oracle Zero Downtime Migration, see the following resources:
- Logical Online Migration
In multicloud setups, you need to transfer data between different cloud providers or maintain it in one environment while querying or accessing it from another. Oracle Database's DBMS_CLOUD PL/SQL package simplifies this by enabling direct access to object storage services such as Oracle Cloud Infrastructure (OCI) Object Storage and Google Cloud Storage and allowing data to be queried via external tables or loaded directly into database tables.
This document provides a step-by-step guide on accessing and importing data from Google Cloud Storage into an Oracle Database.
Solution Overview
This solution demonstrates how a Google Cloud Storage (GCS) bucket can be used as a landing zone for backups and files.
The Oracle DBMS_CLOUD package lets you:
- Query data directly from Google Cloud Storage buckets (Parquet, CSV, JSON, and so on) as external tables, without moving the data.
- Join that data in real time with Oracle tables for unified analytics.
- Bulk import files into Oracle Database efficiently with parallel loading.
- Minimize data duplication and costs in multicloud setups.
- Support hybrid scenarios, such as blending ERP data with GCS-stored logs, IoT streams, or ML features, for faster insights and reporting.

The architecture illustrates a workflow in which the source is an Oracle Database running on Linux or Oracle Autonomous AI Database, and the target is an Oracle Database environment running on Google Cloud infrastructure. Google Cloud Storage is used by both the source and target database systems, enabling RMAN backups or export dumps to be written once and restored directly, without additional file transfer steps.
By leveraging Google Cloud Storage as shared storage, migration workflows benefit from a managed service that simplifies data movement across environments.

Prerequisites
- Oracle Database Environment
Source: Oracle Database platform, such as Oracle Autonomous AI Database, Oracle RAC, or a standalone Oracle Database instance on Linux.
Target: Oracle Autonomous AI Database running on Oracle AI Database@Google Cloud.
- Google Cloud Storage Bucket
- A bucket containing the data file(s), such as an Oracle Data Pump dump file.
- Network Connectivity
- For Oracle Database, internet connectivity or PrivateLink using Oracle Interconnect for Google Cloud is supported.
- For Oracle AI Database@Google Cloud, internal connectivity within the same cloud region is supported by default.
- Credentials
- A Google Cloud Access Key and Secret (HMAC key) with bucket access permissions.
- Configuration
- Create Google Cloud Storage Access Credentials
- In the Google Cloud Console, navigate to Cloud Storage.
- Select Settings from the left menu, and then select the Interoperability tab.

- Scroll down to the Access keys for your user account section, select the Create a key button to generate an Access key and Secret. Take note of the Access key and Secret as these are required to authenticate Oracle Database access.
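HMAC keys can also be created from the command line with the gsutil tool; the following is a sketch in which the service account email is an assumption (service-account HMAC keys are generally recommended over user-account keys for production use):

```
# Prints the new access key ID and secret for the given service account
gsutil hmac create my-sa@my-project.iam.gserviceaccount.com
```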

- Determine Object URL
- In the Google Cloud Console, navigate to Cloud Storage.
- From the left menu, select Buckets, and then select the target bucket that you are using.

- From the Bucket details page, locate the object, for example, the Oracle Data Pump export file that you want to import.
- Construct the public object URL in the form:

      https://<bucket_name>.storage.googleapis.com/<object_name>

  For example:
  - Bucket name: exadbtest
  - Object name: emp.dmp
  - Object URL: https://exadbtest.storage.googleapis.com/emp.dmp
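The URL construction above can be sketched in shell, using the example bucket and object names from this topic:

```shell
# Build the public object URL from the bucket and object names
BUCKET=exadbtest
OBJECT=emp.dmp
URL="https://${BUCKET}.storage.googleapis.com/${OBJECT}"
echo "$URL"   # prints https://exadbtest.storage.googleapis.com/emp.dmp
```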
- Oracle Database Setup
- Create Oracle Credential
- Using the DBMS_CLOUD.CREATE_CREDENTIAL PL/SQL procedure, create a credential in the database that stores your GCS access key and secret:

      BEGIN
        DBMS_CLOUD.CREATE_CREDENTIAL(
          credential_name => '<Credential_Name>',
          username        => '<GCS_Access_Key>',
          password        => '<GCS_Secret>'
        );
      END;
      /

  For example:

      BEGIN
        DBMS_CLOUD.CREATE_CREDENTIAL(
          credential_name => 'GCP_CRED',
          username        => 'GOOGXXXXXXXXXXXX',
          password        => 'xs9njBZ+hykXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        );
      END;
      /

- After running the PL/SQL procedure, the credential is stored and ready for use by DBMS_CLOUD. For more information about the parameters, see CREATE_CREDENTIAL Procedure.
- Verify Access to Google Cloud Storage
- To ensure that the database can reach your Google Cloud Storage bucket, run the following:

      SELECT object_name
        FROM dbms_cloud.list_objects(
               credential_name => 'GCP_CRED',
               uri             => 'https://exadbtest.storage.googleapis.com'
             );

- The output should list the object(s), confirming successful access. For example:

      OBJECT_NAME
      --------------------------------------------------------------------------------
      emp.dmp
      employee.csv
- Loading Data
After creating the necessary credential for Google Cloud Storage, you can load data into, or access data in, Oracle Database using any of the following methods:
- DBMS_CLOUD.COPY_DATA procedure
- Oracle Data Pump Import (impdp)
- DBMS_CLOUD.CREATE_EXTERNAL_TABLE
- DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
- The Data Studio Load tool (UI in Autonomous AI Database)
- DBMS_CLOUD.CREATE_HYBRID_PART_TABLE for hybrid partitioned tables
This step demonstrates DBMS_CLOUD.COPY_DATA as well as Oracle Data Pump Import (impdp).
- Using DBMS_CLOUD.COPY_DATA Procedure
- Create a table in your database where you want to load the data.

      CREATE TABLE emp (id NUMBER, name VARCHAR2(64));

- Import data from the Google Cloud Storage bucket to your Autonomous AI Database.
  - Specify the table name and the GCP credential name followed by the Google Cloud Storage object URL.
  - Use the DBMS_CLOUD.COPY_DATA procedure to load data from your Google Cloud Storage bucket into a table. The file_uri_list parameter specifies the path to your files in Google Cloud Storage.

        BEGIN
          DBMS_CLOUD.COPY_DATA(
            table_name      => 'YOUR_TARGET_TABLE',
            credential_name => 'GCP_CRED',
            file_uri_list   => 'https://exadbtest.storage.googleapis.com/employee.csv', -- Or a list of files
            format          => json_object('type' value 'CSV', 'skipheaders' value '1') -- Specify file format options
          );
        END;
        /

    For more information about the parameters, see COPY_DATA Procedure.
- Once you have successfully imported data from Google Cloud Storage to your Autonomous AI Database, run this statement to verify the data in your table.

      SELECT * FROM emp;
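Among the methods listed earlier, DBMS_CLOUD.CREATE_EXTERNAL_TABLE lets you query a file in place without loading it. The following is a minimal sketch using the same bucket and credential; the external table and column names are illustrative assumptions:

```sql
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'EMP_EXT',  -- illustrative external table name
    credential_name => 'GCP_CRED',
    file_uri_list   => 'https://exadbtest.storage.googleapis.com/employee.csv',
    format          => json_object('type' value 'CSV', 'skipheaders' value '1'),
    column_list     => 'id NUMBER, name VARCHAR2(64)'  -- must match the file layout
  );
END;
/

-- Query the file in place; no data is copied into the database
SELECT * FROM emp_ext;
```

This approach suits the "query without moving data" scenarios described in the Solution Overview, since the data stays in the bucket.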
- Import Data Using Data Pump
The import command can be executed either on the VM cluster or on an external virtual machine with authorized SQL*Plus access to the target database.
- Create a parameter file imp_gcp.par with the following contents:

      directory=DATA_PUMP_DIR
      credential=GCP_CRED
      schemas=emp
      remap_tablespace=USERS:DATA
      dumpfile=https://exadbtest.storage.googleapis.com/emp.dmp

- Run the import from the VM cluster or client. Ensure that the Oracle client is installed and that you can connect to the Autonomous AI Database:

      impdp userid=ADMIN@demo_db parfile=imp_gcp.par

- A successful run loads the data into your Oracle Database. Example of a successful completion message:

      Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed.
In multicloud environments, where you use resources from multiple cloud vendors, you often need to migrate data across environments or keep it in one cloud environment while accessing it from another. The Oracle Database DBMS_CLOUD PL/SQL package allows you to attach and access data on Google Cloud Filestore file shares, or import it into an Oracle Database. This document provides a step-by-step guide on accessing and importing data from Google Cloud Filestore file shares into an Oracle Database.
Key Advantages for Migration and Shared Filesystem Use Cases
- Scalability and Managed Service: Predefined performance tiers and capacity scaling (up to 100 TiB) eliminate the need to deploy and manage custom NFS servers.
- Oracle Database Integration: Supports NFSv3 and NFSv4.1, enabling direct read/write access for Oracle Data Pump, RMAN backups, and database file staging.
- Cost Efficiency: A pay-only-for-provisioned-capacity model is suitable for temporary migration workloads and avoids self-managed infrastructure overhead.
- High Availability and Reliability: Fully managed with zonal or regional redundancy (up to a 99.99% regional SLA with Filestore Regional), offloading patching, failover, and maintenance.
- Security: Encryption at rest and in transit, VPC-based access controls, firewall rules, and export policies protect data during migration.
Solution Overview
This solution demonstrates how a Google Cloud Filestore file share can be used as a landing zone for RMAN backups, simplifying and standardizing the Oracle Database migration process.
The architecture illustrates a workflow in which the source is an Oracle Database running on Linux or Oracle Autonomous AI Database, and the target is an Oracle Database environment running on Google Cloud infrastructure. The Filestore NFS file share is mounted and used by both the source and target database systems, enabling RMAN backups to be written once and restored directly, without additional file transfer steps. By leveraging Google Cloud Filestore as shared storage, migration workflows benefit from a managed NFS service that supports RMAN backup, restore, and recovery operations across environments.
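As a sketch of the "write once, restore directly" flow (the mount path /mnt/filestore and the format strings are assumptions), the source database can write its RMAN backup straight to the shared mount, and the target can later restore from the same path:

```
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK FORMAT '/mnt/filestore/backups/%U';
  BACKUP DATABASE PLUS ARCHIVELOG;
  BACKUP CURRENT CONTROLFILE FORMAT '/mnt/filestore/backups/ctl_%U';
}
```

Because both hosts mount the same Filestore share, no scp or object-store staging step is needed between backup and restore.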

Prerequisites
The following prerequisites are required to complete this solution:
- Source database running on a supported Oracle Database platform, such as Oracle Autonomous AI Database, Oracle Real Application Clusters, or a standalone Oracle Database instance on Linux.
- Target Oracle Autonomous AI Database running on Oracle AI Database@Google Cloud.
- Google Cloud Filestore file share created and available, with a mounted NFS export on the source and target database hosts.
- Network connectivity established between the source environment and Google Cloud, with appropriate routing and firewall rules to allow NFS and database traffic.
Setting Up Google Cloud Filestore
- From Google Cloud console, navigate to Filestore.
- From the left menu, select Instances.
- Select the Create Instance button and then complete the following substeps:
- Enter a unique name in the Instance ID field for the Google Cloud Filestore instance. The ID can contain only lowercase letters, numbers, and hyphens, and must start with a letter.
- The Description field is optional.
- From the Configure service tier section, select an appropriate service tier based on your migration workload requirements. From the Instance type section, choose your instance type. Instance type affects capacity, performance, scalability, durability, and cost.
- Choose your Storage type based on your requirements.

- After choosing your storage type, enter the capacity value in the Capacity field.
For better performance and lower networking cost, place your instance in the same region as the VMs that will connect to it. This choice is permanent.
- Select the Capacity range that suits your file system.
- Configure your performance based on your workload and scale.
- Select the Region and Zone where the Google Cloud Filestore instance will be created. This must align with the network location of the database hosts that will access the file share.
- From the Set up connections section, select the network and address range that clients will use to access your instance.
- From the VPC network dropdown list, select your VPC.
- From the Allocated IP range section, choose the Use an automatically allocated IP range (recommended) option. This IP range will be used to create subnets.
- From the Configure your file share section, enter your file share name in the field. This choice is permanent. You can use lowercase letters, numbers, and underscores; the file share name must start with a letter. Choose your Access control.

- The Create labels and Tags sections are optional.
- Review your information and select the Create button.

- Once the creation is complete, you can review it from the Instances list.

- After the Google Cloud Filestore instance is created, the NFS file share can be mounted on the source and target Oracle Database hosts and used as a shared landing zone for migration artifacts such as RMAN backups, Oracle Data Pump exports, and transportable tablespace files.
- From Google Cloud console, navigate to Filestore and select the instance you created previously.
- Select the Overview tab and then scroll down to the NFS mount point section. This section provides the mount command that is required to mount the Google Cloud Filestore file share on the Linux client VM.
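For reference, the mount typically looks like the following sketch; the IP address, share name, and mount options are assumptions, so use the exact command shown on your instance's Overview tab:

```
# Create a mount point and mount the Filestore share over NFSv3
sudo mkdir -p /mnt/filestore
sudo mount -o hard,nfsvers=3 10.0.0.2:/nfsshare /mnt/filestore
```

Run the same steps on both the source and target database hosts so they share the landing zone.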

DNS Setup in OCI
To resolve the NFS server name, create an A‑record in OCI DNS:
- From the Autonomous Database details page in Google Cloud Console, click Manage in OCI.
- Navigate to the virtual cloud network in the Network section.
- On the network details page, click DNS Resolver.
- On the private resolver details page, click the default private view.
- Create a new zone (e.g., nfsmount.gcp).
- Add a record (e.g., nfs.nfsmount.gcp) pointing to the actual IP address of the Google Cloud Filestore NFS server.
- Publish the changes.
- Update the Network Security Group (NSG) in OCI to allow traffic from the VPC where the NFS server resides.
- Once complete, the FQDN from Oracle Database will resolve to the Google Filestore NFS endpoint.
Grant Network ACLs in Autonomous AI Database
- Set the ROUTE_OUTBOUND_CONNECTIONS database property to PRIVATE_ENDPOINT to enforce that all outgoing connections to a target host use the private endpoint's egress rules.

      ALTER DATABASE PROPERTY SET ROUTE_OUTBOUND_CONNECTIONS = 'PRIVATE_ENDPOINT';

- Use the DBMS_NETWORK_ACL_ADMIN package to grant the required connect and resolve privileges to your database user or ADMIN user for the FQDN of the Google Cloud Filestore instance.

      BEGIN
        DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
          host => 'nfs.nfsmount.gcp',  -- Your Filestore FQDN
          ace  => xs$ace_type(
                    privilege_list => xs$name_list('connect', 'resolve'),
                    principal_name => 'YOUR_DB_USER',  -- Or 'ADMIN'
                    principal_type => xs_acl.ptype_db
                  )
        );
      END;
      /

- Once the ACLs are set, create a directory object in your Autonomous AI Database instance that points to the NFS mount.

      CREATE OR REPLACE DIRECTORY NFS_MOUNT AS 'nfs_mount';

- Attach the NFS file system using the following PL/SQL block:

      BEGIN
        DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM(
          file_system_name     => 'GCP_NFS',
          file_system_location => 'nfs.nfsmount.gcp:/nfsshare',
          directory_name       => 'NFS_MOUNT',
          description          => 'Attach GCP NFS',
          params               => JSON_OBJECT('nfs_version' VALUE 3)
        );
      END;
      /

- This procedure links the file system GCP_NFS to the database directory object NFS_MOUNT at the specified NFS path.
Access the Files
- Run the following SQL statement to verify that you can access the files under the directory.

      SELECT object_name FROM DBMS_CLOUD.LIST_FILES('NFS_MOUNT');

- Once the directory object is created and associated with the Google Cloud Filestore NFS mount, you can use it in PL/SQL (for example, UTL_FILE), Oracle SQL*Loader, Data Pump, or for creating external tables to read from or write to files on the Google Cloud Filestore share, subject to database user privileges on the directory object.
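For instance, a hedged sketch of reading the first line of a text file on the attached share with UTL_FILE; the file name employee.csv is an assumption carried over from the earlier examples:

```sql
DECLARE
  l_file UTL_FILE.FILE_TYPE;
  l_line VARCHAR2(32767);
BEGIN
  -- NFS_MOUNT is the directory object associated with the attached file system
  l_file := UTL_FILE.FOPEN('NFS_MOUNT', 'employee.csv', 'r');
  UTL_FILE.GET_LINE(l_file, l_line);
  DBMS_OUTPUT.PUT_LINE(l_line);  -- show the first line of the file
  UTL_FILE.FCLOSE(l_file);
END;
/
```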
For more information, see the following resources: