5 Using Global Data Services (Architectures, Use Cases, Application Development)

5.1 Distributed Databases

In environments where databases are distributed for data residency, global scalability, performance, and availability, GDS acts as an intelligent traffic director and workload coordinator:

  • Shard Director and Shard Catalog: GDS acts as a "shard-aware" proxy, intercepting client requests and routing them to the appropriate shard containing the relevant data.
  • Data Locality: GDS ensures that requests are directed to the shard where the data resides, minimizing data movement and optimizing performance.
  • Scalability and Parallelism: GDS facilitates scaling out the database by adding more shards, and it can leverage parallel query capabilities to improve query performance.
  • Simplified Application Development: GDS hides the complexity of the sharding scheme from applications, allowing them to interact with the database as a single logical unit.

Distributed Databases Example Use Cases

  • Global E-commerce Platforms: Stores product data across multiple regions to provide low-latency access to users worldwide.
  • Social Media Networks: Distributes user profiles, posts, and media across geographically dispersed nodes, improving scalability and response time.
  • Financial Services: Manages transactions and customer data across multiple regions while ensuring consistency and high availability.
  • Content Delivery Networks (CDNs): Distributes large datasets, like videos and images, across various data centers to optimize content delivery to end-users.
  • IoT Applications: Processes real-time data from sensors distributed globally, requiring local processing and centralized analytics.

5.1.1 Using Global Data Services with Oracle Sharding

Oracle Sharding enables distribution and replication of data across a pool of Oracle databases that share no hardware or software. The pool of databases is presented to the application as a single logical database. Applications elastically scale (data, transactions, and users) to any level, on any platform, simply by adding databases (shards) to the pool. Scaling up to 1000 shards is supported.
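
To make this concrete, the following is a minimal GDSCTL deployment sketch, assuming hypothetical host and service names, a single shardgroup, and default system-managed sharding; consult the sharding documentation for the complete procedure.

GDSCTL> create shardcatalog -database cathost:1521/shardcat -region region1
GDSCTL> add gsm -gsm gsm1 -catalog cathost:1521/shardcat -region region1
GDSCTL> start gsm -gsm gsm1
GDSCTL> add shardgroup -shardgroup sg1 -deploy_as primary -region region1
GDSCTL> add shard -connect shardhost1:1521/sh1 -shardgroup sg1
GDSCTL> add shard -connect shardhost2:1521/sh2 -shardgroup sg1
GDSCTL> deploy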

Oracle Sharding provides superior run-time performance and simpler life-cycle management compared to home-grown deployments that use a similar approach to scalability. It also provides the advantages of an enterprise DBMS, including relational schema, SQL, and other programmatic interfaces, support for complex data types, online schema changes, multi-core scalability, advanced security, compression, high-availability, ACID properties, consistent reads, developer agility with JSON, and much more.

Oracle Globally Distributed Database is built on the Oracle Database Global Data Services feature, so to plan your topology you must understand the Global Data Services architecture and management.

Oracle Globally Distributed Database enables you to deploy a global database, where a single logical database can be distributed over multiple geographies. This makes it possible to satisfy data privacy regulatory requirements (Data Sovereignty) and allows particular data to be stored close to its consumers (Data Proximity).

Data sovereignty generally refers to how data is governed by regulations specific to the region in which it originated. These types of regulations can specify where data is stored, how it is accessed, how it is processed, and the life-cycle of the data. With the exponential growth of data crossing borders and public cloud regions, more than 100 countries have now passed regulations concerning where data is stored and how it is transferred. Personally identifiable information (PII), in particular, is increasingly subject to the laws and governance structures of the nation in which it is collected. Data transfers to other countries are often restricted or allowed based on whether that country offers similar levels of data protection, and whether that nation collaborates in forensic investigations.

5.2 Using True Cache with Global Data Services

Oracle True Cache is an in-memory, consistent, and automatically managed SQL cache for Oracle Database. True Cache improves application response time while reducing the load on the database. Automatic cache management and consistency simplify application development, reducing development effort and cost.

You can deploy Oracle True Cache with Oracle Database Global Data Services (GDS) to manage workload routing, dynamic load balancing, and service failover across multiple True Caches and other database replicas.

Global services are functionally similar to the local database application services that are provided by single instance or Oracle Real Application Clusters (Oracle RAC) databases. The main difference between global services and local services is that global services span the instances of multiple databases, whereas local services span the instances of a single database.

This diagram illustrates a basic True Cache configuration with GDS in a single region:

Figure 5-1 True Cache Integration with GDS


In this example, there is one region with two True Caches and one primary database. True Cache reads from the primary database to warm up the cache or when there's a cache miss. After a block is cached, it's updated automatically through redo apply from the primary database. Applications are configured so that frequent reads go to the True Caches, while writes and infrequent reads go to the primary database.

The GDS configuration includes the following components:

  • The GDS catalog stores configuration data for the GDS configuration. In this example, the GDS catalog is hosted outside the primary database.
  • Global service managers (GSMs) provide service-level load balancing, failover, and centralized management of services in the GDS configuration. For high availability, the best practice is to include two GSMs in each region.

  • The GDS pool provides a common set of global services to the primary database and True Caches. A region can contain multiple GDS pools, and these pools can span multiple regions. In this example, the SALES pool provides the SALES global service for the primary database and the SALES_TC global service for the True Caches.
  • An Oracle Notification Service (ONS) server is located with each GSM. All ONS servers in a region are interconnected in an ONS server network. The GSMs create runtime load balancing advisories and publish them to the Oracle universal connection pool (UCP) JDBC connection pool through the ONS server network. Applications request a connection from the pool and the requests are sent to True Caches or the primary database depending on the application configuration and the GDS configuration.

True Cache provides the following integrations with Oracle Global Data Services (GDS).

  • GDS provides the -role TRUE_CACHE option for global services (see the sketch following this list).
  • The True Cache application programming model using JDBC setReadOnly() supports global services.
  • GDS provides load balancing and service failover between multiple True Caches.
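
For example, a minimal sketch of adding a True Cache global service alongside the primary's global service, reusing the SALES pool and service names from Figure 5-1 (options other than -role TRUE_CACHE are illustrative):

GDSCTL> add service -service sales_tc -gdspool sales -preferred_all -role TRUE_CACHE
GDSCTL> start service -service sales_tc -gdspool sales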

Deploying True Cache with Oracle GDS has the following restrictions.

  • When adding True Cache services in GDSCTL, the -failover_primary option requires the patch for bug 36740927.
  • If the application uses the JDBC programming model, both the primary database service name and the True Cache service name must be fully qualified with the domain name (for example, sales.example.com and sales_tc.example.com). This is because GDS has a default domain name that can differ from the database's domain name parameter. This also limits the fully qualified service name to a maximum of 64 characters.
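
The following minimal Java sketch illustrates the setReadOnly() programming model with a fully qualified service name; the URL, credentials, and table are hypothetical, and the data source is left unpooled for brevity.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import oracle.jdbc.pool.OracleDataSource;

public class TrueCacheReadDemo {
  public static void main(String[] args) throws Exception {
    OracleDataSource ods = new OracleDataSource();
    // Fully qualified primary service name, per the restriction above.
    ods.setURL("jdbc:oracle:thin:@//gsm1.example.com:1522/sales.example.com");
    ods.setUser("app_user");
    ods.setPassword(System.getenv("APP_PASSWORD"));

    try (Connection conn = ods.getConnection()) {
      // Route subsequent reads toward the True Cache service (sales_tc.example.com).
      conn.setReadOnly(true);
      try (Statement st = conn.createStatement();
           ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders")) {
        while (rs.next()) {
          System.out.println("orders: " + rs.getLong(1));
        }
      }
      // Writes and infrequent reads go back to the primary service.
      conn.setReadOnly(false);
    }
  }
}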

For more detailed instructions regarding the deployment of Oracle True Cache with Oracle GDS, see Deploying Oracle True Cache with Oracle Global Data Services.

5.3 Supported Replication Technologies and Implementation Architectures

In environments where databases are replicated for high availability, disaster recovery, or read scaling, Oracle GDS provides:

  • Global Service Management: GDS creates a global service abstraction, masking the complexity of the underlying replicated databases from applications.
  • Workload Routing: GDS intelligently routes client requests to the appropriate database instances based on factors such as database role (read-write, read-only), region, replication lag, and resource capacity.
  • Load Balancing: GDS dynamically balances workloads, taking into account load conditions and network latency.
  • Failover and Switchover: GDS automates data service failover to standby databases in case of failures, ensuring continuous application availability.
  • Centralized Management: The GDS catalog provides a central point for managing and monitoring the configuration and health of global services and their associated databases.

Replicated Databases Use Cases

  • Disaster Recovery Systems: Replicates a primary database to a standby database for failover in case of a primary system failure.
  • Content Management Systems (CMS): Read-heavy websites (such as content delivery, ecommerce, airline reservation systems) that replicate data for faster access across multiple regions.
  • Business Intelligence (BI) and Analytics: Replicated databases handle complex, read-heavy queries on a replica while offloading the primary database.
  • Customer Relationship Management (CRM) Systems: Replicates customer data for distributed teams, ensuring high availability and quick access to client information.
  • Mobile Applications: Ensures low-latency read access to globally distributed user bases by replicating databases in multiple regions.

5.3.1 Using Oracle Active Data Guard with Global Data Services

Configure sessions to move in a rolling manner across an Oracle Active Data Guard reader farm.

Prerequisites

You must have the following in place for this procedure to work correctly.

  • Oracle Active Data Guard configuration using Oracle Database (release 19c or later recommended).

  • Global Data Services (GDS) configuration using global service manager (release 19c or later recommended).

  • A GDS service has been created to run on all Active Data Guard databases in the configuration.

    For example:

    GDSCTL> add service -service sales_sb -preferred_all -gdspool sales
 -role physical_standby -notification TRUE
    GDSCTL> modify service -gdspool sales -service sales_sb -database mts -add_instances
     -preferred mts1,mts2
    GDSCTL> modify service -gdspool sales -service sales_sb -database stm -add_instances
     -preferred stm1,stm2
    GDSCTL> start service -service sales_sb -gdspool sales
  1. Check the current status of the services and related instances to ensure that services can be moved successfully.

    Note that the service should only be available on the source standby database at this point.

    GDSCTL> services
    Service "sales_sb.sales.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
       Instance "sales%1", name: "mts1", db: "mts", region: "slc", status: ready.
       Instance "sales%2", name: "mts2", db: "mts", region: "slc", status: ready.
  2. Stop services normally (without the FORCE option) on the source database where connections are to be removed.
    • This step quiesces the FAN-aware connection pools using FAN.
    • New connections are directed to other instances offering the service, and idle sessions using the service are disconnected from the pool.
    • Existing connections can continue until their work is complete and they are returned to the connection pool.
    GDSCTL> stop service -service sales_sb -database mts -gdspool sales

    Allow an agreed upon time for the sessions to disconnect and relocate, then continue with the next steps.

    Note:

    If you are performing a rolling upgrade of an Active Data Guard reader farm and the services are not running on other Active Data Guard reader nodes, you can complete the service stop on this database before performing the GDSCTL stop service described in this step.
  3. Disconnect long-running sessions after the current query is completed.

    Preferably, long-running queries have been scheduled to stop or are queued before the window when connections are to be moved. This step handles long-running sessions that are still running and now need to be stopped (killed) abruptly.
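
    For example, a hedged query to identify sessions still active on the service (the 300-second threshold is illustrative):

    SQL> SELECT sid, serial#, username, status, last_call_et
     FROM v$session
     WHERE service_name = 'sales_sb'
     AND last_call_et > 300;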

  4. Log on to the instance that you intend to shut down.
  5. Check V$SESSION to see if any sessions are still connected to the service.
    SQL> SELECT service_name, count(1) FROM v$session
     GROUP BY service_name ORDER BY 2;
  6. Run the DBMS_SERVICE.DISCONNECT_SESSION procedure for the service you stopped earlier.

    For example:

    SQL> exec
     dbms_service.disconnect_session('oltp_work',DBMS_SERVICE.POST_TRANSACTION);
  7. Check V$SESSION again to ensure that sessions have logged off from the service.
    SQL> SELECT service_name, count(1) FROM v$session
     GROUP BY service_name ORDER BY 2;
  8. Start the GDS service on the target database and allow sessions to connect.
    GDSCTL> start service -service sales_sb -database stm -gdspool sales
  9. Log on to the target database and check V$SESSION to see sessions connected to the service.
    SQL> SELECT service_name, count(1) FROM v$session
     GROUP BY service_name ORDER BY 2;

5.3.2 Using Oracle GoldenGate with Global Data Services

The following Oracle GoldenGate role transition example topology consists of two databases: GG replica1 and GG replica2. Oracle GoldenGate is set up with uni-directional replication, with Extract running initially on GG replica1 and Replicat running initially on GG replica2. The generic steps still apply for bi-directional GoldenGate replicas or downstream mining GoldenGate replicas.

Prerequisites

You must have the following in place for this procedure to work correctly.

  • Oracle GoldenGate configuration that uses Oracle Database (19c or higher recommended)

  • GoldenGate processes should connect to the source and target databases using a dedicated TNS alias, not the GDS service name. Using the GDS service causes the database connections to terminate prematurely, with possible data loss.

  • A heartbeat table has been implemented in the GoldenGate source and target databases to track replication latency and to verify the Replicat applied SCN. The GoldenGate automatic heartbeat table feature should be enabled. Refer to the Oracle GoldenGate Administration Guide for details on the automatic heartbeat table: https://docs.oracle.com/en/middleware/goldengate/core/19.1/gclir/add-heartbeattable.html.
  • Global Data Services (GDS) configuration using global service manager (19c or higher recommended)

  • A GDS service has been created to run on all databases in the GoldenGate configuration.

    For example:

    GDSCTL> add service -service sales_sb -preferred_all -gdspool sales
    GDSCTL> modify service -gdspool sales -service sales_sb -database mts
     -add_instances -preferred mts1,mts2
    GDSCTL> modify service -gdspool sales -service sales_sb -database stm
     -add_instances -preferred stm1,stm2
    GDSCTL> start service -service sales_sb -gdspool sales

Note:

If you are using the lag tolerance option, specify the lag limit for the global service in seconds. Options for add service or modify service are -lag {lag_value | ANY}.
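
For example, a hedged illustration that caps acceptable lag for the global service at 30 seconds:

GDSCTL> modify service -gdspool sales -service sales_sb -lag 30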
  1. Check the current status of the services and related instances to ensure that they can be moved successfully.

    At this point, the service should only be available on the source database.

    GDSCTL> services
    Service "sales_sb.sales.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
       Instance "sales%1", name: "mts1", db: "mts", region: "slc", status: ready.
       Instance "sales%2", name: "mts2", db: "mts", region: "slc", status: ready.
  2. Stop services normally (without the FORCE option) on the source database where connections are to be removed.
    • This step quiesces the FAN-aware connection pools using FAN.
    • New connections are directed to other instances offering the service, and idle sessions using the service are disconnected from the pool.
    • Existing connections can continue until their work is complete and they are returned to the connection pool.
    GDSCTL> stop service -service sales_sb -database mts -gdspool sales

    Allow an agreed upon time for the sessions to disconnect and relocate, then continue with the next steps. The time to allow for sessions to drain depends on the workload and user transactions for your application.

  3. Disconnect long-running sessions after the current transaction is completed.

    Preferably, long-running batch jobs are scheduled to be stopped or queued before the maintenance window. This step handles long-running sessions that are still running and must be stopped abruptly (e.g., killed). Check with the application developers if these long-running batch jobs are idempotent and recoverable before disconnecting long-running sessions.

  4. Log on to the instance that you intend to shut down, and check V$SESSION to see if any sessions are still connected to the service.
    SQL> SELECT service_name, count(1) FROM v$session
     GROUP BY service_name ORDER BY 2;
  5. Run the DBMS_SERVICE.DISCONNECT_SESSION procedure for the service you stopped earlier.

    For example:

    SQL> exec
     dbms_service.disconnect_session('oltp_work',DBMS_SERVICE.POST_TRANSACTION);
  6. Check V$SESSION again to ensure sessions have logged off from the service.
    SQL> SELECT service_name, count(1) FROM v$session
     GROUP BY service_name ORDER BY 2;
  7. When all sessions associated with the GDS service have been disconnected, verify that all data from the GoldenGate source database has been replicated to the target database.
    • Record the current database SCN from the GoldenGate SOURCE database.

      SQL> SELECT current_scn FROM v$database;
    • On the GoldenGate TARGET database, continue to monitor the Replicat applied SCN using the following query.

      SQL> SELECT lwm_position FROM v$gg_apply_coordinator;
    • When the target LWM_POSITION SCN is greater than the CURRENT_SCN recorded in the first step, it is safe to assume that all transactions have been replicated from the source to the target database. The users can now be switched over to the GoldenGate target database.

The above steps allow for a graceful switchover. However, if this is a failover event where the source database is unavailable, you can estimate the data loss using the steps below.

  1. When using the automatic heartbeat table, use the following query to determine the replication latency.

    SQL> col Lag(secs) format 999.9
    SQL> col "Seconds since heartbeat" format 999.9
    SQL> col "GG Path" format a32
    SQL> col TARGET format a12
    SQL> col SOURCE format a12
    SQL> set lines 140
    SQL> select remote_database "SOURCE", local_database "TARGET", incoming_path "GG Path",
     incoming_lag "Lag(secs)", incoming_heartbeat_age "Seconds since heartbeat" from gg_lag;
    
    SOURCE       TARGET       GG Path                          Lag(secs) Seconds since heartbeat
    ------------ ------------ -------------------------------- --------- -----------------------
    MTS          GDST         EXT_1A ==> DPUMP_1A ==> REP_1A         7.3                     9.0

    The above example shows a possible 7.3 seconds of data loss between the source and target databases.

  2. Start the GDS service on the target database and allow sessions to connect.

    Note that if the application workload can accept a certain level of data lag, it is possible to perform this step much earlier than step two listed above.

    GDSCTL> start service -service sales_sb -database stm -gdspool sales
  3. Log on to the target database and check V$SESSION to see sessions connected to the service.

    SQL> SELECT service_name, count(1) FROM v$session
     GROUP BY service_name ORDER BY 2;

5.3.3 Using RAFT Replication with Global Data Services

Raft replication is a consensus-based replication protocol that facilitates automatic configuration of replication across all shards in a distributed database. Raft replication seamlessly integrates with applications, providing transparency in its operation. In case of shard host failures or dynamic changes in the distributed database's composition, Raft replication automatically reconfigures replication settings. The system takes a declarative approach to configuring the replication factor, ensuring that a specified number of replicas is consistently available.

When Raft replication is enabled, a distributed database contains multiple replication units. A replication unit (RU) is a set of chunks that have the same replication topology. Each RU has multiple replicas placed on different shards. The Raft consensus protocol is used to maintain consistency between the replicas in case of failures, network partitioning, message loss, or delay.

Swift failover is a key attribute of Raft replication, enabling all nodes to remain active even in the event of a node failure. Notably, this feature incorporates an automatic sub-second failover mechanism, reinforcing both data integrity and operational continuity. Such capabilities make this feature well-suited for organizations seeking a highly available and scalable database system.

Oracle GDS provides GDSCTL commands and options to enable and manage Raft replication in a system-managed sharded database.

Enabling Raft Replication

You enable Raft replication when you configure the shard catalog, by specifying the native replication option in the create shardcatalog command. For example: GDSCTL> create shardcatalog ... -repl native
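
A slightly fuller sketch, with a hypothetical catalog connect string and explicit system-managed sharding:

GDSCTL> create shardcatalog -database cathost:1521/shardcat -sharding system -repl native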

Note:

For more information about configuring and deploying Raft replication, see Raft Replication Configuration and Management.

5.3.4 One GDS Infrastructure for Many Replicated Configurations

The following is an architecture example in which databases might be replicated for high availability, disaster recovery, or read scaling.

Figure 5-2 GDS Replicated Configurations Architecture Example


5.4 Summary: Replicated Databases vs Distributed Databases

Figure 5-3 Replicated Databases vs Distributed Databases


5.5 Application Development Considerations

5.5.1 Application Workload Suitability for Global Data Services

Global Data Services (GDS) is best for replication-aware application workloads; it is designed to work in replicated environments. Applications that are suitable for GDS adoption possess any of the following characteristics:

  • The application can separate its work into read-only, read-mostly, and read-write services to use Oracle Active Data Guard or Oracle GoldenGate replicas (see the sketch following this list). GDS does not distinguish between read-only, read-write, and read-mostly transactions. Application connectivity must be updated to separate read-only or read-mostly services from read-write services, and the GDS administrator can then configure the global services on the appropriate databases. For example, a reporting application can work directly with a read-only service at an Oracle Active Data Guard standby database.

  • Administrators should be aware of and avoid or resolve multi-master update conflicts to take advantage of Oracle GoldenGate replicas. For example, an internet directory application with built-in conflict resolution enables the read-write workload to be distributed across multiple databases, each open read-write and synchronized using Oracle GoldenGate multi-master replication.

  • Ideally, the application is tolerant of replication lag. For example, a web-based package tracking application that allows customers to track the status of their shipments using a read-only replica, where the replica does not lag the source transactional system by more than 10 seconds.
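
As a hedged sketch of the first characteristic above, read-write and read-only work can be mapped to separate global services by database role (pool and service names are illustrative):

GDSCTL> add service -service sales_rw -gdspool sales -preferred_all -role primary
GDSCTL> add service -service sales_ro -gdspool sales -preferred_all -role physical_standby
GDSCTL> start service -service sales_rw -gdspool sales
GDSCTL> start service -service sales_ro -gdspool sales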

5.5.2 Using FAN ONS with Global Data Services

Fast Application Notification (FAN) uses the Oracle Notification Service (ONS) for event propagation to all Oracle Database clients, including JDBC, Tuxedo, and listener clients. ONS is installed as part of Oracle Global Data Services, Oracle Grid Infrastructure on a cluster, in an Oracle Data Guard installation, and when Oracle WebLogic is installed. ONS propagates FAN events to all other ONS daemons it is registered with. No steps are needed to configure or enable FAN on the database server side, with one exception: OCI FAN and ODP FAN require that notification be set to TRUE for the service by GDSCTL. With FAN auto-configuration at the client, ONS jar files must be on the CLASSPATH or in the ORACLE_HOME, depending on your client.
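
For example, a minimal sketch that sets notification for a global service (the pool and service names reuse earlier examples):

GDSCTL> modify service -gdspool sales -service sales -notification TRUE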

General Best Practices for Configuring FCF Clients

Follow these best practices before progressing to driver-specific instructions.

  • Use a dynamic database service. Using FAN requires that the application connects to the database using a dynamic global database service. This is a service created using GDSCTL.

  • Do not connect using the database service or PDB service. These services are for administration only and are not supported for FAN. The TNSnames entry or URL must use the service name syntax and follow best practices by specifying a dynamic database service name. Refer to the examples later in this document.

  • Use the Oracle Notification Service when you use FAN with JDBC thin, Oracle Database OCI, or ODP.Net clients. FAN is received over ONS. Accordingly, in the Oracle Database, ONS FAN auto-configuration is introduced so that FCF clients can discover the server-side ONS networks and self-configure. FAN is automatically enabled when ONS libraries or jars are present.

  • Enabling FAN on most FCF clients is still necessary. FAN auto-configuration removes the need to list the global service managers that an FCF client needs.

  • Listing server hosts is incompatible with location transparency and causes issues with updating clients when the server configuration changes. Clients already use a TNS address string or URL to locate the global service manager listeners.

  • FAN auto-configuration uses the TNS addresses to locate the global service manager listeners and then asks each server database for the ONS server-side addresses. When there is more than one global service manager, FAN auto-configuration contacts each one and obtains an ONS configuration for each.

  • The ONS network is discovered from the URL when using the Oracle Database. An ONS node group is automatically obtained for each address list when LOAD_BALANCE is off across the address lists.

  • By default, the FCF client maintains three hosts for redundancy in each node group in the ONS configuration.

  • Each node group corresponds to a GDS data center. For example, if there is a primary database and several Oracle Data Guard standbys, by default three ONS connections are maintained to each node group. The node groups are discovered when using FAN auto-configuration.

    With node groups defined by FAN auto-configuration and the default connection count, no additional ONS endpoints are required. If you want more endpoints, increase maxconnections; the setting applies to each node group. Increasing the value to 4, as in the following example, maintains four ONS connections at each node group. Increasing this value consumes more sockets.

    oracle.ons.maxconnections=4
  • If the client is to connect to multiple clusters and receive FAN events from them, for example in Oracle RAC with a Data Guard event, then multiple ONS node groups are needed. FAN auto-configuration creates these node groups using the URL or TNS name. If automatic configuration of ONS (Auto-ONS) is not used, specify the node groups in the Oracle Grid Infrastructure or oraaccess.xml configuration files.

5.5.3 Client Side Configuration

As a best practice, multiple global service managers should be deployed for high availability. Clients should be configured with multiple connection endpoints, where these endpoints are global service managers rather than local, remote, or single client access name (SCAN) listeners. For OCI and ODP.NET clients, use the following TNS name structure.

(DESCRIPTION=(CONNECT_TIMEOUT=90)(RETRY_COUNT=30)(RETRY_DELAY=3) (TRANSPORT_CONNECT_TIMEOUT=3)
  (ADDRESS_LIST =
   (LOAD_BALANCE=on)
   (ADDRESS=(PROTOCOL=TCP)(HOST=GSM1)(PORT=1522))
   (ADDRESS=(PROTOCOL=TCP)(HOST=GSM2)(PORT=1522))
   (ADDRESS=(PROTOCOL=TCP)(HOST=GSM3)(PORT=1522)))
  (ADDRESS_LIST=
   (LOAD_BALANCE=on)
   (ADDRESS=(PROTOCOL=TCP)(HOST=GSM2)(PORT=1522)))
 (CONNECT_DATA=(SERVICE_NAME=sales)))

Always use dynamic global database services created with GDSCTL to connect to the database. Do not use the database service or PDB service; these are for administration only, not for application use, and they do not provide FAN and many other features because they are available at mount. For JDBC, use the latest client driver, aligned with the latest or an older RDBMS.

Use one DESCRIPTION in the TNS names entry or URL. Using more than one DESCRIPTION causes long connection delays when RETRY_COUNT and RETRY_DELAY are used. Set CONNECT_TIMEOUT=90 or higher to prevent logon storms for OCI and ODP clients.

5.5.4 Configuring FAN for Java Clients Using Universal Connection Pool

The best way to take advantage of FCF with the Oracle Database JDBC thin driver is to use the Universal Connection Pool (UCP) or WebLogic Server Active GridLink.

Setting the pool property FastConnectionFailoverEnabled on the Universal Connection Pool enables Fast Connection Failover (FCF). Active GridLink always has FCF enabled by default. Third-party application servers, including IBM WebSphere and Apache Tomcat, support UCP as a connection pool replacement.

For more information about embedding UCP with other web servers, see the related technical briefs.

Follow these configuration steps to enable Fast Connection Failover.

  1. The connection URL must use the service name syntax and follow best practices by specifying a dynamic database service name and the JDBC URL structure (above and below).

    All other URL formats are not highly available. The URL may use JDBC thin or JDBC OCI.

  2. If wallet authentication has not been established, remote ONS configuration is needed.

    Set the pool property setONSConfiguration in a property file as shown in the following example. The property file specified must contain an ons.nodes property and, optionally, properties for oracle.ons.walletfile and oracle.ons.walletpassword. An example of an ons.properties file is shown here.
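
    Host names, the ONS port, and the wallet path in the following sketch are hypothetical.

    ons.nodes=gsm1.example.com:6200,gsm2.example.com:6200
    oracle.ons.walletfile=/app/oracle/ons_wallet
    oracle.ons.walletpassword=<wallet_password>

    The pool is then configured to reference this file: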

    PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
    pds.setConnectionPoolName("FCFSamplePool");
    pds.setFastConnectionFailoverEnabled(true);
    pds.setONSConfiguration("propertiesfile=/usr/ons/ons.properties");
    pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
    pds.setURL("jdbc:oracle:thin:@(DESCRIPTION=(CONNECT_TIMEOUT=4)(RETRY_COUNT=30)(RETRY_DELAY=3)" +
      "(ADDRESS_LIST=(LOAD_BALANCE=on)" +
      "(ADDRESS=(PROTOCOL=TCP)(HOST=GSM1)(PORT=1522)))" +
      "(ADDRESS_LIST=(LOAD_BALANCE=on)" +
      "(ADDRESS=(PROTOCOL=TCP)(HOST=GSM2)(PORT=1522)))" +
      "(CONNECT_DATA=(SERVICE_NAME=service_name)))");
  3. Ensure the pool property setFastConnectionFailoverEnabled=true is set.
  4. The CLASSPATH must contain ons.jar, ucp.jar, and the JDBC driver jar file. For example, ojdbc8.jar.
  5. If you are using JDBC thin with Oracle Database, Application Continuity can be configured to fail over connections after a FAN event is received.
  6. If the database needs different ONS endpoints than those that are autoconfigured, the ONS endpoints can be set explicitly.

    In a situation where multiple clusters exist with auto-ons enabled, auto-ons would generate node lists with the following guidelines:

    For every active node list, oracle.ons.maxconnections is set to 3 by default, so there is no need to set this explicitly. With two node groups, for example, the ONS client tries to maintain six connections in total.
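
    The following hedged illustration shows the shape of such a generated configuration, assuming two clusters of three nodes each (the group numbering and host names are assumptions of this sketch):

    oracle.ons.nodes.0001=rac1-node1:6200,rac1-node2:6200,rac1-node3:6200
    oracle.ons.nodes.0002=rac2-node1:6200,rac2-node2:6200,rac2-node3:6200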

5.5.5 Configuring FAN for OCI Clients

OCI clients embed FAN at the driver level so that all clients can use it regardless of the pooling solution. Ideally, both the server and the client would use release 19c or later.

Configuration for SQL*Plus and PHP

  1. Set notification for the service.
  2. For PHP clients only, add oci8.events=On to php.ini.

    Important: If oraaccess.xml is present with events=false, or if events is not specified, FAN is disabled. To maintain FAN with SQL*Plus and PHP when oraaccess.xml is in use, set events=true.

  3. On the client side, enable FAN in oraaccess.xml.

Configuration for OCI Clients

  1. Tell OCI where to find ONS Listeners.

    The client installation comes with ONS linked into the client library. Using auto-configuration, the ONS endpoints are discovered from the TNS address. This automatic method is the recommended approach. As with ODP.NET, manual ONS configuration is also supported using oraaccess.xml.

  2. Enable FAN high availability events for the OCI connections.

    To enable FAN, edit oraaccess.xml to specify the global parameter events. This file is located in $ORACLE_HOME/network/admin. See Step 3: Ensure That FAN Is Used for more information.

  3. Enable FAN on the server for all OCI clients.

    It is still necessary to enable FAN on the database server for all OCI clients (including SQL*Plus).

5.5.6 Controlling Logon Storms

Small connection pools are strongly recommended, but controlling logon storms can be done with tuning when you have many connections.

Oracle MAA recommends the following tuning on servers that host Global Service Managers.

  1. Increase the Listen backlog at the OS level.

    To have the new value take effect without rebooting the server, perform the following as root.

    echo 6000 > /proc/sys/net/core/somaxconn
  2. To persist the value across reboots, add this setting to /etc/sysctl.conf.

    net.core.somaxconn=6000
  3. Increase queuesize for the global service manager listener.

    Update listener.ora in the Oracle home that the listeners are running from to increase the QUEUESIZE parameter:

    TCP.QUEUESIZE=6000