Creating a Globally Distributed Exadata Database on Exascale Infrastructure Resource
A Globally Distributed Exadata Database on Exascale Infrastructure (Distributed ExaDB-XS) resource contains the connectivity and configuration details of the shards and shard catalog databases.
You create the resource in the Globally Distributed Exadata Database on Exascale Infrastructure list page. If you need help finding the list page, see Listing Globally Distributed Exadata Database on Exascale Infrastructure Resources.
- On the Globally Distributed Exadata Database on Exascale Infrastructure list page, select Create Distributed ExaDB-XS.
- Provide the following basic information.
Compartment: Select a compartment to host the Distributed ExaDB-XS resource.
Display name: Enter a user-friendly description or other information that helps you easily identify the Distributed ExaDB-XS.
Avoid entering confidential information.
You can modify this name after resource creation.
Database name prefix: This prefix is applied to all of the database names in the configuration for ease of identification.
Database version: Oracle Database release 23ai is supported at this time.
- Configure the shards in Shards configuration. You can configure them using the map view or list view.
- On the map view, select the region where you want the database shards to be deployed, then select Configure Shards to enter the settings.
- In the list view, the settings are presented in the Create Globally Distributed Exadata Database on Exascale Infrastructure page.
Enter the following information:
Data distribution:
- Automated: Data is automatically distributed across shards using partitioning by consistent hash. The partitioning algorithm evenly and randomly distributes data across shards.
- User managed: Not currently supported.
Shards in region (Shard count in list view): Enter the number of shards you want to deploy in the selected region.
You can configure up to 10 shards in the Distributed ExaDB-XS, and then add more later if needed.
Replication type: Raft replication creates replication units consisting of sets of chunks and distributes them automatically among the shards to handle chunk assignment, chunk movement, workload distribution, and balancing upon scaling.
Note that Raft replication is supported only with Automated data distribution.
Replication factor: Select a replication factor that suits your topology.
The replication factor must be less than the shard count.
In Raft replication, the replication factor is the number of replicas in a replication unit. This number includes the primary (leader) member of the unit and its replicas (followers).
Shards list: Select + Add Shard to add shards to the list.
Note: It is recommended that you use one VM cluster per database (shard or catalog).
Select Edit in the Actions menu to configure a shard's region placement and VM cluster selection.
Select a VM cluster name in the list to go to its details page.
Additionally, in the list view, you can create more clusters as needed. Select Create VM cluster to go to the Exadata Database Service on Exascale Infrastructure VM Cluster page.
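The Automated data distribution and Raft replication behavior described above can be pictured with a small sketch. This is an illustrative model only, not Oracle's internal algorithm: the chunk counts, shard counts, and hash-to-chunk mapping below are hypothetical assumptions chosen to show how consistent-hash partitioning spreads keys evenly across chunks, and chunks across shards.

```python
import hashlib

# Hypothetical sizes for illustration only (not Oracle defaults).
NUM_CHUNKS = 120
NUM_SHARDS = 3

def chunk_for_key(sharding_key: str) -> int:
    """Map a sharding key to a chunk via a stable hash.
    A stable hash means the same key always lands in the same chunk."""
    digest = hashlib.sha256(sharding_key.encode()).hexdigest()
    return int(digest, 16) % NUM_CHUNKS

def shard_for_chunk(chunk_id: int) -> int:
    """Spread chunks evenly across shards (simple round-robin
    placement for illustration)."""
    return chunk_id % NUM_SHARDS

key = "customer-10042"
chunk = chunk_for_key(key)
print(f"key {key!r} -> chunk {chunk} -> shard {shard_for_chunk(chunk)}")
```

Because the key-to-chunk mapping is stable and chunks are distributed evenly, rebalancing after adding a shard only requires moving chunks, never re-hashing individual rows.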
- Configure the shard catalog in Catalog configuration.
You can choose to use the same configuration that is applied to the shards, or slide the Same as Shard's configuration toggle and make selections that apply only to the catalog database.
Note that the Raft replication type does not apply to the catalog. To configure data protection for the catalog, disable Same as Shard's configuration and configure an Oracle Data Guard standby database.
Select Edit in the Actions menu to configure the catalog settings described below.
Region: Select the region to host the catalog.
If Data Guard is enabled, this is the primary catalog database region.
VM cluster: Choose a VM cluster to host the catalog database.
Note: It is recommended that you use one VM cluster per database (shard, catalog, or catalog standby).
Data Guard: If enabled, an Oracle Data Guard standby database is instantiated for the catalog in the selected Data Guard region.
Data Guard region: Select the region where you have a VM cluster to host the catalog's Data Guard standby database.
Data Guard VM cluster: Select a VM cluster available in the selected Data Guard region.
Note: It is recommended that you use one VM cluster per database (shard, catalog, or catalog standby).
- Configure the remaining settings.
Create administrator credentials: Create the ADMIN user that will be able to access all of the shard databases and catalog databases in the configuration.
Encryption key: Select the vault and master encryption key that were configured in Task 5: Configure Security Resources.
Select private endpoint: Select the private endpoint that was created for this Distributed ExaDB-XS in Common Network Resources.
Select character sets: Select the Character set and National character set that will be used in all of the shard and shard catalog databases.
AL32UTF8 is the recommended default character set, and AL16UTF16 is the recommended default national character set.
Select ports: Enter the GSM listener port, ONS port (local), and ONS port (remote).
Note: The ONS port (remote) number must be unique to each Globally Distributed Database. Do not reuse a port number used in another Globally Distributed Database unless a delete operation on the original is fully processed.
Advanced options: Shard configuration - Chunks: Under Advanced options you can optionally configure the number of chunks per shard.
Advanced options: Shard configuration - Replication unit: Displays the number of Raft replication units that will be created. A distributed database with Raft replication contains multiple replication units. A replication unit (RU) is a set of chunks that have the same replication topology. Each RU consists of a leader and its replicas, which are placed on different shards.
Advanced options: Configure database backups: Under Advanced options you can enable and schedule automated database backups.
See Exadata Database Service on Exascale Infrastructure documentation for information about the settings.
Advanced options: Tags: Under Advanced options you can add tags to the Distributed ExaDB-XS resource. Tags can also be added after creation.
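The replication unit concept above can be made concrete with a short placement sketch. This is a hypothetical illustration, not Oracle's placement algorithm: the shard count, RU count, and rotation scheme are assumptions chosen to show the invariant the docs describe, namely that each RU has one leader plus replication-factor-minus-one followers, all on different shards.

```python
# Illustrative sketch (not Oracle's algorithm): place each replication
# unit's replicas on REPLICATION_FACTOR distinct shards, rotating the
# starting shard so leaders end up balanced across the topology.
NUM_SHARDS = 5
REPLICATION_FACTOR = 3   # one leader + two followers per RU
NUM_RUS = 10             # hypothetical number of replication units

def place_replication_units(num_rus: int, num_shards: int, rf: int):
    """Return a placement list: each RU gets rf distinct shards,
    with the first shard acting as the RU's leader."""
    placements = []
    for ru in range(num_rus):
        members = [(ru + i) % num_shards for i in range(rf)]
        placements.append({"ru": ru,
                           "leader": members[0],
                           "followers": members[1:]})
    return placements

for p in place_replication_units(NUM_RUS, NUM_SHARDS, REPLICATION_FACTOR):
    print(p)
```

With this rotation, losing any single shard removes at most one replica from each affected RU, so every RU still has a majority and can elect a new leader, which is the availability property Raft replication provides.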
- Select Validate to let Distributed ExaDB-XS run validation tests against the configuration.
- Once any validation errors are addressed and validation succeeds, select Create.
The Distributed ExaDB-XS display name then appears in the list while the creation operation runs.
Creation can take a while, because several tasks are performed as part of the create operation, including host procurement, installing software, and generating certificates for the shard directors (GSMs).
You can monitor the operation status in the State column and select the Distributed ExaDB-XS display name to track progress in the Work requests tab.
When the status of all of the shards on the Shards tab is Available, Distributed ExaDB-XS creation is complete and successful.
Caution:
After creating a Distributed ExaDB-XS, do not move any of its vaults or keys; otherwise, the Distributed ExaDB-XS will stop working.