Case 3: Capturing from Multiple Sources with Mixed Real-time and Archive Mining Mode
The following example captures changes from databases DBMS1, DBMS2, and DBMS3 by deploying an Extract at a downstream mining database, DBMSCAP.
Note:
This example assumes that you created the necessary standby redo log files as shown in Prepare the Downstream Mining Database to Receive Online Redo Logs. It assumes the following users:
- User GGADM1 in DBMS1, whose credentials Extract will use to fetch data and metadata from DBMS1. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS1.
- User GGADM2 in DBMS2, whose credentials Extract will use to fetch data and metadata from DBMS2. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS2.
- User GGADM3 in DBMS3, whose credentials Extract will use to fetch data and metadata from DBMS3. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at DBMS3.
- User GGADMCAP in DBMSCAP, whose credentials Extract will use to retrieve logical change records from the logmining server at the downstream mining database. It is assumed that the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE() procedure was called to grant appropriate privileges to this user at the downstream mining database DBMSCAP.
This procedure also assumes that the downstream mining database is configured in archive log mode. In this example, the redo sent by DBMS3 is mined in real-time mode, whereas the redo data sent by DBMS1 and DBMS2 is mined in archive mining mode.
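For reference, each of those grants can be issued with a call of the following form. This is a minimal sketch only, run as a suitably privileged user (such as SYSDBA) at each database with the matching grantee:

-- Sketch: grant GoldenGate admin privileges to the capture user at DBMS1.
-- Repeat with GGADM2 at DBMS2, GGADM3 at DBMS3, and GGADMCAP at DBMSCAP.
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADM1');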
Prepare the Mining Database to Archive its Local Redo
To prepare the mining database to archive its local redo:
- The downstream mining database must be in archive log mode. You can do this by issuing the following statements:
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
- At the downstream mining database, set LOG_ARCHIVE_DEST_1 to archive local redo:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/home/arc_dest/local VALID_FOR=(ONLINE_LOGFILE, PRIMARY_ROLE)'
- Enable LOG_ARCHIVE_DEST_1:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE
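Before continuing, you can verify that the database is now archiving its redo. This quick check is not part of the original procedure:

SELECT LOG_MODE FROM V$DATABASE;
-- Expected value: ARCHIVELOG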
Prepare the Mining Database to Accept Redo from the Source Databases
Because redo data is being accepted in the standby redo logs of the downstream mining database, the appropriate number of correctly sized standby redo logs must exist. If you did not configure the standby logs, see Create the Standby Redo Log Files.
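If you still need to create them, a single standby redo log group can be added with a statement of the following form. This is a sketch only: the group number, file path, and size are placeholders and must match your online redo log configuration at the source:

ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/home/arc_dest/srl/slog4.rdo') SIZE 500M;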
- At the downstream mining database, set the second archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter as shown in the following example. This destination is used to archive the standby redo logs:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_dbms3 VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'
- Enable the LOG_ARCHIVE_DEST_STATE_2 initialization parameter that corresponds with the LOG_ARCHIVE_DEST_2 parameter as shown in the following example:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE
- Set DG_CONFIG at the downstream mining database to accept redo data from all of the source databases:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1, dbms2, dbms3, dbmscap)'
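To confirm that the DG_CONFIG setting took effect, you can query the parameter. This verification step is not part of the original procedure:

SELECT VALUE FROM V$PARAMETER WHERE NAME = 'log_archive_config';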
Prepare the First Source Database to Send Redo to the Mining Database
To prepare the first source database to send redo to the mining database:
- Make certain that the DBMS1 source database is running with the required compatibility:
select name, value from v$parameter where name = 'compatible';
- Set DG_CONFIG at the DBMS1 source database:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1, dbmscap)';
- Set up redo transport at the DBMS1 source database. The TEMPLATE clause is mandatory if you want to send redo data directly to foreign archived logs at the downstream mining database:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
- Enable the downstream destination:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
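After enabling the destination, you can check at DBMS1 that redo transport to DBMSCAP is healthy; the same check applies after the analogous steps at DBMS2 and DBMS3 below. This query is a verification sketch, not part of the original procedure:

SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;
-- STATUS should be VALID and ERROR should be empty once transport is working.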
Prepare the Second Source Database to Send Redo to the Mining Database
To prepare the second source database to send redo to the mining database:
- Make sure that the DBMS2 source database is running with the required compatibility:
select name, value from v$parameter where name = 'compatible';
- Set DG_CONFIG at the DBMS2 source database:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms2, dbmscap)';
- Set up redo transport at the DBMS2 source database. The TEMPLATE clause is mandatory if you want to send redo data directly to foreign archived logs at the downstream mining database:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER TEMPLATE=/usr/orcl/arc_dest/dbms2/dbms2_arch_%t_%s_%r.log VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
- Enable the downstream destination:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
Prepare the Third Source Database to Send Redo to the Mining Database
To prepare the third source database to send redo to the mining database:
- Make sure that the DBMS3 source database is running with the required compatibility:
select name, value from v$parameter where name = 'compatible';

NAME        VALUE
----------  ---------------------
compatible  11.1.0.0.0

The minimum compatibility setting required for capture is 11.1.0.0.0.
- Set DG_CONFIG at the DBMS3 source database:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms3, dbmscap)';
- Set up redo transport at the DBMS3 source database. Because DBMS3 is the source that will send its online redo logs to the standby redo logs at the downstream mining database, do not specify a TEMPLATE clause:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC OPTIONAL NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
- Enable the downstream destination:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
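Once DBMS3 begins shipping redo, you can confirm at DBMSCAP that the standby redo logs are receiving it. This query is a verification sketch, not part of the original procedure:

SELECT GROUP#, THREAD#, SEQUENCE#, STATUS FROM V$STANDBY_LOG;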
Set up Extracts at Downstream Mining Database
These steps set up Extract at the downstream mining database to capture from the archived logs sent by DBMS1 and DBMS2 and, in real-time mode, from the online redo logs sent by DBMS3.
Set up Extract (ext1) to Capture Changes from Archived Logs Sent by DBMS1
Perform the following steps on the DBMSCAP downstream mining database:
- Register Extract with DBMSCAP for the DBMS1 source database. In the credential store, the alias name of ggadm1 is linked to a user connect string of ggadm1@dbms1, and the alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
DBLOGIN USERIDALIAS ggadm1
MININGDBLOGIN USERIDALIAS ggadmcap
REGISTER EXTRACT ext1 DATABASE
- Add Extract at the mining database DBMSCAP:
ADD EXTRACT ext1, INTEGRATED TRANLOG, BEGIN NOW
- Edit the Extract parameter file ext1.prm. In the credential store, the alias name of ggadm1 is linked to a user connect string of ggadm1@dbms1, and the alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap:
USERIDALIAS ggadm1
TRANLOGOPTIONS MININGUSERALIAS ggadmcap
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
- Start Extract:
START EXTRACT ext1
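To confirm that ext1 registered and started cleanly, you can check its status from GGSCI; the same check applies to ext2 and ext3 after the steps below:

INFO EXTRACT ext1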
Set up Extract (ext2) to Capture Changes from Archived Logs Sent by DBMS2
Perform the following steps on the DBMSCAP downstream mining database:
- Register Extract with the mining database for source database DBMS2. In the credential store, the alias name of ggadm2 is linked to a user connect string of ggadm2@dbms2, and the alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
DBLOGIN USERIDALIAS ggadm2
MININGDBLOGIN USERIDALIAS ggadmcap
REGISTER EXTRACT ext2 DATABASE
- Create Extract at the mining database:
ADD EXTRACT ext2, INTEGRATED TRANLOG, BEGIN NOW
- Edit the Extract parameter file ext2.prm. In the credential store, the alias name of ggadm2 is linked to a user connect string of ggadm2@dbms2, and the alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap:
USERIDALIAS ggadm2
TRANLOGOPTIONS MININGUSERALIAS ggadmcap
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
- Start Extract:
START EXTRACT ext2
Set up Extract (ext3) to Capture Changes in Real-time Mode from Online Logs Sent by DBMS3
Perform the following steps on the DBMSCAP downstream mining database:
- Register Extract with the mining database for source database DBMS3. In the credential store, the alias name of ggadm3 is linked to a user connect string of ggadm3@dbms3, and the alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap.
DBLOGIN USERIDALIAS ggadm3
MININGDBLOGIN USERIDALIAS ggadmcap
REGISTER EXTRACT ext3 DATABASE
- Create Extract at the mining database:
ADD EXTRACT ext3, INTEGRATED TRANLOG, BEGIN NOW
- Edit the Extract parameter file ext3.prm. In the credential store, the alias name of ggadm3 is linked to a user connect string of ggadm3@dbms3, and the alias name of ggadmcap is linked to a user connect string of ggadmcap@dbmscap. Because ext3 mines the redo from DBMS3 in real-time mode, set downstream_real_time_mine to Y:
USERIDALIAS ggadm3
TRANLOGOPTIONS MININGUSERALIAS ggadmcap
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
- Start Extract:
START EXTRACT ext3
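With all three Extracts running, you can confirm their capture sessions at the downstream mining database DBMSCAP. This query is a verification sketch, not part of the original procedure:

SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;
-- One capture session should be listed per registered Extract.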
Note:
You can create multiple Extracts running in real-time integrated capture mode in the downstream mining database, as long as they all capture data from the same source database, as with DBMS3 in the preceding example.