Case 3: Capturing from Multiple Sources with Mixed Real-time and Archive Mining Mode

The following example captures changes from databases DBMS1, DBMS2, and DBMS3 by deploying an Extract at a downstream mining database, DBMSCAP.

Note:

This example assumes that you created the necessary standby redo log files as shown in Prepare the Downstream Mining Database to Receive Online Redo Logs.
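
If the standby redo logs are not yet in place, the following is a minimal sketch of adding them at DBMSCAP; the group numbers, file paths, and sizes shown are assumptions, and each log should be at least as large as the largest online redo log of the source that ships to it:

    -- Run at DBMSCAP; group numbers, paths, and sizes are illustrative only.
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
        ('/home/orcl/srl/slog4.rdo') SIZE 500M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
        ('/home/orcl/srl/slog5.rdo') SIZE 500M;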

It assumes the following users:

  • User GGADM1 in DBMS1 whose credentials Extract will use to fetch data and metadata from DBMS1. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS1.

  • User GGADM2 in DBMS2 whose credentials Extract will use to fetch data and metadata from DBMS2. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS2.

  • User GGADM3 in DBMS3 whose credentials Extract will use to fetch data and metadata from DBMS3. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS3.

  • User GGADMCAP in DBMSCAP whose credentials Extract will use to retrieve logical change records from the logmining server at the downstream mining database. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the downstream mining database DBMSCAP. An example of the grant call appears after this list.
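
The grant itself takes the following form; this is a sketch for GGADM1 at DBMS1, and the same call is repeated for GGADM2, GGADM3, and GGADMCAP at their respective databases:

    -- Run as SYSDBA at each database; the grantee name varies per database.
    EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADM1');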

This procedure also assumes that the downstream mining database is configured in archive log mode.

In this example, the redo sent by DBMS3 is mined in real-time mode, whereas the redo sent by DBMS1 and DBMS2 is mined in archive mining mode.

Prepare the Mining Database to Archive its Local Redo

To prepare the mining database to archive its local redo:

  1. The downstream mining database must be in archive log mode. You can enable it by issuing the following SQL statements:
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
  2. At the downstream mining database, set LOG_ARCHIVE_DEST_1 to archive local redo:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/home/arc_dest/local
    VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';
  3. Enable LOG_ARCHIVE_DEST_1:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE;
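
After enabling both parameters, the following queries (a sketch, assuming a SYSDBA connection at DBMSCAP) confirm that the database is in archive log mode and that the local destination is valid:

    -- Confirm archive log mode at DBMSCAP.
    SELECT log_mode FROM v$database;

    -- Confirm the local archive destination is enabled and error free.
    SELECT dest_name, status, destination
    FROM   v$archive_dest
    WHERE  dest_name = 'LOG_ARCHIVE_DEST_1';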

Prepare the Mining Database to Accept Redo from the Source Databases

Because redo data is being accepted in the standby redo logs of the downstream mining database, the appropriate number of correctly sized standby redo logs must exist. If you did not configure the standby logs, see Create the Standby Redo Log Files.

  1. At the downstream mining database, set the second archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter as shown in the following example. This destination is needed to archive the standby redo logs.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_dbms3
    VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';
  2. Enable the LOG_ARCHIVE_DEST_STATE_2 initialization parameter that corresponds with the LOG_ARCHIVE_DEST_2 parameter as shown in the following example:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
  3. Set DG_CONFIG at the downstream mining database to accept redo data from all of the source databases.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1, dbms2, dbms3, dbmscap)';
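
To verify that the setting took effect, this sketch queries the parameter back:

    -- Confirm that all source databases are listed in DG_CONFIG.
    SELECT value FROM v$parameter WHERE name = 'log_archive_config';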

Prepare the First Source Database to Send Redo to the Mining Database

To prepare the first source database to send redo to the mining database:

  1. Make certain that the DBMS1 source database is running with the required compatibility.
    select name, value from v$parameter where name = 'compatible'; 
  2. Set DG_CONFIG at DBMS1 source database:
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1, dbmscap)';
  3. Set up redo transport at DBMS1 source database. The TEMPLATE clause is mandatory if you want to send redo data directly to foreign archived logs at the downstream mining database.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC
    OPTIONAL NOREGISTER TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
  4. Enable the downstream destination:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
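
After enabling the destination, a log switch exercises redo transport; the following sketch, run at DBMS1, checks the downstream destination for errors (the same check applies at DBMS2 and DBMS3 once their destinations are enabled):

    -- Force a log switch so redo is shipped to DBMSCAP.
    ALTER SYSTEM SWITCH LOGFILE;

    -- Check the downstream destination for transport errors.
    SELECT dest_id, status, error
    FROM   v$archive_dest_status
    WHERE  dest_id = 2;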

Prepare the Second Source Database to Send Redo to the Mining Database

To prepare the second source database to send redo to the mining database:

  1. Make sure that the DBMS2 source database is running with the required compatibility.
    select name, value from v$parameter where name = 'compatible';
  2. Set DG_CONFIG at DBMS2 source database:
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms2, dbmscap)';
  3. Set up redo transport at DBMS2 source database. The TEMPLATE clause is mandatory if you want to send redo data directly to foreign archived logs at the downstream mining database.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC
    OPTIONAL NOREGISTER TEMPLATE=/usr/orcl/arc_dest/dbms2/dbms2_arch_%t_%s_%r.log
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
  4. Enable the downstream destination:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

Prepare the Third Source Database to Send Redo to the Mining Database

To prepare the third source database to send redo to the mining database:

  1. Make sure that the DBMS3 source database is running with the required compatibility.
    select name, value from v$parameter where name = 'compatible';

    NAME        VALUE
    ----------  ---------------------
    compatible  11.1.0.0.0

    The minimum compatibility setting required for capture is 11.1.0.0.0.

  2. Set DG_CONFIG at DBMS3 source database:
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms3, dbmscap)';
  3. Set up redo transport at DBMS3 source database. Because DBMS3 is the source that will send its online redo logs to the standby redo logs at the downstream mining database, do not specify a TEMPLATE clause.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC
    OPTIONAL NOREGISTER
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
  4. Enable the downstream destination:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
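
Because DBMS3 ships its online redo to standby redo logs rather than to foreign archived logs, you can confirm receipt at DBMSCAP with a sketch like the following:

    -- Run at DBMSCAP; an ACTIVE group indicates redo arriving from DBMS3.
    SELECT group#, thread#, sequence#, status
    FROM   v$standby_log;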

Set up Extracts at Downstream Mining Database

These steps set up Extract at the downstream mining database to capture from the archived logs sent by DBMS1 and DBMS2 and from the online redo logs sent by DBMS3.

Set up Extract (ext1) to Capture Changes from Archived Logs Sent by DBMS1

Perform the following steps on the DBMSCAP downstream mining database:

  1. Register Extract with DBMSCAP for the DBMS1 source database. In the credential store, the alias name of ggadm1 is linked to a user connect string of ggadm1@dbms1. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    DBLOGIN USERIDALIAS ggadm1
    MININGDBLOGIN USERIDALIAS ggadmcap
    REGISTER EXTRACT ext1 DATABASE
  2. Add Extract at the mining database DBMSCAP:
    ADD EXTRACT ext1 INTEGRATED TRANLOG, BEGIN NOW
  3. Edit the Extract parameter file ext1.prm. In the credential store, the alias name of ggadm1 is linked to a user connect string of ggadm1@dbms1. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap:
    USERIDALIAS ggadm1
    TRANLOGOPTIONS MININGUSERALIAS ggadmcap
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
  4. Start Extract:
    START EXTRACT ext1
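
Once started, a quick status check with standard GGSCI commands confirms that the Extract is running and shows its checkpoints; the same check applies to ext2 and ext3 below:

    INFO EXTRACT ext1, DETAIL
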
Set up Extract (ext2) to Capture Changes from Archived Logs Sent by DBMS2

Perform the following steps on the DBMSCAP downstream mining database:

  1. Register Extract with the mining database for source database DBMS2. In the credential store, the alias name of ggadm2 is linked to a user connect string of ggadm2@dbms2. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    DBLOGIN USERIDALIAS ggadm2 
    MININGDBLOGIN USERIDALIAS ggadmcap
    REGISTER EXTRACT ext2 DATABASE
  2. Create Extract at the mining database:
    ADD EXTRACT ext2 INTEGRATED TRANLOG, BEGIN NOW
  3. Edit the Extract parameter file ext2.prm. In the credential store, the alias name of ggadm2 is linked to a user connect string of ggadm2@dbms2. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap:
    USERIDALIAS ggadm2
    TRANLOGOPTIONS MININGUSERALIAS ggadmcap
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
  4. Start Extract:
    START EXTRACT ext2

Set up Extract (ext3) to Capture Changes in Real-time Mode from Online Logs Sent by DBMS3

Perform the following steps on the DBMSCAP downstream mining database:

  1. Register Extract with the mining database for source database DBMS3. In the credential store, the alias name of ggadm3 is linked to a user connect string of ggadm3@dbms3. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
    DBLOGIN USERIDALIAS ggadm3
    MININGDBLOGIN USERIDALIAS ggadmcap
    REGISTER EXTRACT ext3 DATABASE
  2. Create Extract at the mining database:
    ADD EXTRACT ext3 INTEGRATED TRANLOG, BEGIN NOW
  3. Edit the Extract parameter file ext3.prm. In the credential store, the alias name of ggadm3 is linked to a user connect string of ggadm3@dbms3. The alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap:
    USERIDALIAS ggadm3
    TRANLOGOPTIONS MININGUSERALIAS ggadmcap
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
  4. Start Extract:
    START EXTRACT ext3
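
To confirm that ext3 is mining in real-time mode, you can check the capture state at the mining database. This is a sketch, assuming your Oracle release exposes the V$GOLDENGATE_CAPTURE view:

    -- Run at DBMSCAP; shows each capture process and its current state.
    SELECT capture_name, state FROM v$goldengate_capture;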

Note:

You can create multiple Extracts running in real-time integrated capture mode in the downstream mining database, as long as they all capture data from the same source database, such as DBMS3 in the preceding example.