TRANLOGOPTIONS
Valid For
Extract
Description
Use the TRANLOGOPTIONS
parameter to control the way that Extract interacts with the transaction log or with the API that passes transaction data, depending on the database or capture mode. You can use multiple TRANLOGOPTIONS
statements in the same parameter file, or you can specify multiple options within the same TRANLOGOPTIONS
statement, if permissible for those options.
Use a given TRANLOGOPTIONS
option only for the database or databases for which it is intended.
Default
None
Syntax
TRANLOGOPTIONS {
[ALLOWTABLECOMPRESSION]
[ALTLOGDEST path | REMOTE]
[ALWAYSONREADONLYROUTING]
[ASYNCTRANSPROCESSING buffer_size]
[BUFSIZE size]
[CHECKPOINTRETENTIONTIME days]
[DB2APIRETRY retry_count]
[DB2ZV11COMPATIBILITYMODE]
[DBLOGREADERBUFSIZE size]
[DICTIONARY_CACHE_SIZE value]
[DLFAILOVER_TIMEOUT seconds]
[DISABLESOFTEOFDELAY]
[ENABLE_AUTO_CAPTURE | DISABLE_AUTO_CAPTURE]
[ENABLE_PROCEDURAL_REPLICATION Y]
[EXCLUDETAG [tag | NULL] | EXCLUDETAG +]
[EXCLUDETRANS transaction]
[EXCLUDEUSER user]
[EXCLUDEUSERID Oracle_uid]
[FAILOVERTARGETDESTID n]
[FETCHPARTIALJSON]
[FETCHPARTIALLOB]
[FETCHPARTIALXML]
[FILTERTABLE table]
[FORCEFETCHLOB]
[GETCTASDML | NOGETCTASDML]
[HANDLEDLFAILOVER [STANDBY_WARNING value | STANDBY_ABEND value]]
[IFILOCKSECONDS seconds]
[IGNOREDATACAPTURECHANGES | NOIGNOREDATACAPTURECHANGES]
[INCLUDEAUX (AUX_specification)]
[INCLUDEREGIONID | INCLUDEREGIONIDWITHOFFSET]
[INCLUDETAG tag]
[LOB_CHUNK_SIZE size]
[MAXAUTOCMTTRANSSIZE (range, default)]
[MININGUSER {/ | user} [, MININGPASSWORD password]
  [algorithm ENCRYPTKEY {key_name | DEFAULT}] [SYSDBA]]
[MININGUSERALIAS alias [DOMAIN domain]]
[MIXEDENDIAN [ON | OFF]]
[MANAGECDCCLEANUP | NOMANAGECDCCLEANUP]
[MANAGESECONDARYTRUNCATIONPOINT | NOMANAGESECONDARYTRUNCATIONPOINT]
[PERFORMANCEPROFILE HIGH | MEDIUM | LOW_RES]
[QUERYTIMEOUT seconds]
[QUERYRETRYCOUNT seconds]
[READQUEUESIZE size]
[READTIMEOUT milliseconds]
[REDO_TRANSPORT_LAG_THRESHOLD seconds]
[REDO_TRANSPORT_LAG_TIMEOUT value]
[REQUIRELONGDATACAPTURECHANGES | NOREQUIRELONGDATACAPTURECHANGES]
[SOURCE_OS_TIMEZONE timezone]
[SKIPUNKNOWNEVENT]
[SUPPRESSNOOOPUPDATES]
[TRACKSCHEMACHANGES]
[TRANCOUNT integer]
[TSLOOKUPBEGINLRI | TSLOOKUPENDLRI]
[VALIDATEINLINESFLOB]
[USE_ROOT_CONTAINER_TIMEZONE]
[USENATIVEOBJSUPPORT | NOUSENATIVEOBJSUPPORT]
[VERSIONCHECK DYNAMIC | IMMEDIATE]
}
-
ALWAYSONREADONLYROUTING
- Valid for SQL Server
The
ALWAYSONREADONLYROUTING
parameter allows Extract for SQL Server to route its read-only processing to an available read-intent Secondary when connected to an Always On availability group listener.
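For example, a minimal illustrative parameter file entry that enables this routing for ALWAYSONREADONLYROUTING:
TRANLOGOPTIONS ALWAYSONREADONLYROUTING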
-
ALTLOGDEST
path | REMOTE
-
Valid for MySQL.
Specifies the location of the MySQL log index file. Extract looks for the log files in this location instead of the database default location.
ALTLOGDEST
can be used when the database configuration does not include the full path name to the logs or when there are multiple MySQL installations on the machine. Extract reads the log index file to find the binary log file that it needs to read. When ALTLOGDEST is used, Extract assumes that the logs and the index are in the same location.
Supply the full path name to the directory.
On Windows, enclose the path within double quotes if the path contains any spaces, such as in the following example.
TRANLOGOPTIONS ALTLOGDEST "C:\Program Files\MySQL\MySQL Server 5.7\log\binlog.index"
On Linux system:
TRANLOGOPTIONS ALTLOGDEST "/mnt/rdbms/mysql/data/logs/binlog.index"
When capturing against a remote MySQL database, use the
REMOTE
option instead of the index file path. For remote capture, specify the following in the Extract parameter file:
TRANLOGOPTIONS ALTLOGDEST REMOTE
For more information on using the
REMOTE
option, see Setting Logging Parameters. -
ASYNCTRANSPROCESSING
buffer_size
-
Valid for Extract in integrated capture mode for Oracle.
Controls whether integrated capture runs in asynchronous or synchronous processing mode, and controls the buffer size when Extract is in asynchronous mode. The minimum is 1 and the maximum is 1024; the default is 300.
-
ASYNCTRANSPROCESSING
buffer_size
-
In asynchronous transaction processing mode, there are two threads of control:
-
One thread groups logical change records (LCR) into transactions, does object-level filtering, and does partial rollback processing,
-
The other thread formats committed transactions, performs any user-specified transformations, and writes to the trail file.
The transaction buffer is the buffer between these two threads and is used to transfer work from one thread to the other. The default transaction buffer size is 300 committed transactions, but is adjusted downward by the Oracle GoldenGate memory manager if its cache memory is close to being exhausted.
-
-
NOASYNCTRANSPROCESSING
-
Disables asynchronous processing and causes Extract to operate in synchronous mode. In this mode, one thread performs all capture work.
-
-
BUFSIZE
size
-
Valid for DB2 LUW and DB2 z/OS. Valid for DB2 for i from Oracle GoldenGate 19c and higher. Valid for Oracle database from Oracle GoldenGate 21c and higher.
Controls the maximum size, in bytes, of the buffers that are allocated to contain the data that is read from the transaction log.
High values increase capture speed but cause Extract to consume more memory. Low values reduce memory usage but increase I/O because Extract must store data that exceeds the cache size to disk.
For Oracle database, the DDL operation record size is limited by Oracle GoldenGate internal record capture buffer size. The DDL size can be up to the Oracle RDBMS size limit. Although Oracle database 21c allows creating DDL greater than 10MB, the maximum internal record capture buffer size is limited to 10MB.
The default buffer size is determined by the source of the redo data. The following are the valid ranges and default sizes, in bytes:
DB2 LUW:
-
Minimum: 8,192
-
Maximum: 10,000,000
-
Default: 204,800
-
The preceding values must be in multiples of the 4096 page size. Extract will truncate to a multiple if a given value does not meet this requirement.
DB2 z/OS and DB2 for i:
-
Minimum: 36KB (36864)
-
Maximum: 32MB (33554432)
-
Default: 2MB (2097152)
-
The preceding values must be in multiples of the 4096 page size. Extract will truncate to a multiple if a given value does not meet this requirement.
-
Each Extract uses a fixed 32 bytes of ECSA on the DB2 z/OS system that the Extract connects to. This doesn't apply to DB2 for i.
-
-
CHECKPOINTRETENTIONTIME
days
-
Valid for Extract in integrated mode only for Oracle.
Controls the number of days that Extract retains checkpoints before they are purged. Partial days can be specified using decimal values. For example, 8.25 specifies 8 days and 6 hours. When the checkpoint of an Extract in integrated capture mode is purged, LogMiner data dictionary information for the archived redo log file that corresponds to the checkpoint is purged, and the
first_scn
value of the capture process is reset to the SCN value corresponding to the first change in the next archived redo log file. The default is seven days and the minimum is 0.00001. -
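For example, the following illustrative entry uses the decimal form described above to retain CHECKPOINTRETENTIONTIME checkpoints for 8 days and 6 hours:
TRANLOGOPTIONS CHECKPOINTRETENTIONTIME 8.25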
DB2APIRETRY number of retries
- If Extract receives an error from the DB2 log reading API
db2ReadLog(),
then for certain errors the API call is retried. Use DB2APIRETRY to change the number of retries. The default number of retries is 3. The SQL code for which the API call is retried is SQLCODE -30108
. -
DB2ZV11COMPATIBILITYMODE
-
Valid for Extract for DB2 z/OS.
When using Oracle GoldenGate to extract from DB2 z/OS version 11 in some compatibility modes, the Extract process may not programmatically determine the actual database version and an OGG-00551 or OGG-00804 error occurs. Use this option in your Extract parameter file to manually set the correct database version.
-
DICTIONARY_CACHE_SIZE value
-
Use this option to tune dictionary cache size from Extract. The default value is 5000. If
PERFORMANCEPROFILE
is set to HIGH, then the default value is 10000. -
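For example, the following illustrative entry explicitly sets the DICTIONARY_CACHE_SIZE to 10000:
TRANLOGOPTIONS DICTIONARY_CACHE_SIZE 10000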
DLFAILOVER_TIMEOUT seconds
-
Valid for Extract in integrated mode for Oracle.
Provides a configurable timeout in seconds to allow for standby database reinstatement post-role transition. It is used in conjunction with
HANDLEDLFAILOVER
to allow Integrated Extract to start up immediately after a role transition. At the end of the timeout period, if the standby database is still not available, then Extract will terminate.
The default is 300 seconds. You can also use centiseconds or milliseconds.
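For example, an illustrative entry that combines DLFAILOVER_TIMEOUT with HANDLEDLFAILOVER and extends the timeout beyond the 300-second default:
TRANLOGOPTIONS HANDLEDLFAILOVER
TRANLOGOPTIONS DLFAILOVER_TIMEOUT 600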
-
DISABLESOFTEOFDELAY
-
Valid for Extract only in integrated mode for Oracle and DB2 LUW.
Use
DISABLESOFTEOFDELAY
in the Extract parameter file to set the wait time that takes effect when an EOF status is reported with no records to return. -
[EXCLUDETAG
[tag
| NULL] | [EXCLUDETAG +]
-
Use EXCLUDETAG tag to direct the Extract process to ignore the individual records that are tagged with the specified redo tag. Compared with older versions, the new trail file contains tag tokens, which do not introduce problems for older trail readers.
Use EXCLUDETAG + to direct the Extract process to ignore the individual records that are tagged with any redo tag.
EXCLUDETAG is used to exclude changes that were tagged earlier, either by Replicat using the DBOPTIONS SET TAG option or within the Oracle database session using the dbms_xstream.set_tag procedure. - Example
- The following are examples of how to use tag specifiers with EXCLUDETAG.
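The tag values shown are illustrative:
TRANLOGOPTIONS EXCLUDETAG 00
TRANLOGOPTIONS EXCLUDETAG 0952
TRANLOGOPTIONS EXCLUDETAG +
The first two entries exclude records tagged with the specified redo tags; the last entry excludes records tagged with any redo tag.
-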
EXCLUDETRANS
transaction
-
Valid for Integrated Extract for Oracle.
Specifies the transaction name of the Replicat database user or any other user so that those transactions are not captured by Extract. Use for bi-directional processing to prevent data looping between the databases.
For more information about bidirectional synchronization, see Configuring Bi-Directional Replication in Oracle GoldenGate Microservices Documentation.
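For example, an illustrative EXCLUDETRANS entry with a hypothetical transaction name:
TRANLOGOPTIONS EXCLUDETRANS ggs_trans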
-
EXCLUDEUSER
user
-
Valid for Db2 LUW, Db2 for z/OS, Db2 for i, and Oracle.
Specifies the name of the Replicat database user, or of any other user, to be used as a filter that identifies transactions that will be subject to the rules of the
GETREPLICATES
or IGNOREREPLICATES parameter. Typically, this option is used to identify Replicat transactions in a bi-directional or cascading processing configuration, for the purpose of excluding or capturing them. However, it can be used to identify transactions by any other user, such as those of a specific business application.
You can use EXCLUDEUSER and EXCLUDEUSERID in the same parameter file. Do not use wildcards in either parameter.
The user name must be valid. Oracle GoldenGate queries the database to get the associated user ID and maps the numeric identifier back to the user name. For this reason, if the specified user is dropped and recreated while name resolution is set to the default of DYNAMICRESOLUTION, EXCLUDEUSER remains valid. If the same transaction is performed when name resolution is set to NODYNAMICRESOLUTION, EXCLUDEUSER becomes invalid, and Extract must be stopped and then started to make EXCLUDEUSER take effect, see DYNAMICRESOLUTION.
-
DB2 z/OS considerations: In DB2 for z/OS, the user is always the primary authorization ID of the transaction, which is typically that of the original RACF user who logged on, but also could be a different authorization ID if changed by a transaction processor or by DB2 exits.
-
Oracle considerations: For an Oracle database, multiple
EXCLUDEUSER
statements can be used. All specified users are considered the same as the Replicat user, in the sense that they are subject to the rules of GETREPLICATES or IGNOREREPLICATES. Unlike for the other supported databases, you must include the IGNOREAPPLOPS parameter for EXCLUDEUSER to operate correctly. EXCLUDEUSER is not supported for multitenant source databases.
-
- Example
- The following Oracle example filters for two users (one by name and one by user
ID). The transactions generated by these users will be handled according to the
GETREPLICATES
or IGNOREREPLICATES rules, and a new transaction buffer size is specified.
TRANLOGOPTIONS EXCLUDEUSER ggsrep, EXCLUDEUSERID 90, BUFSIZE 100000
-
EXCLUDEUSERID
Database_uid
-
Valid for Extract for Oracle.
Specifies the database user ID (
uid
) of the Replicat database user, or of any other user, to be used as a filter that identifies transactions that will be subject to the rules of the GETREPLICATES or IGNOREREPLICATES parameter. This parameter is not valid for multitenant Extracts; use tagging and EXCLUDETAG instead. Usage is the same as that of EXCLUDEUSER. Oracle_uid is a non-negative integer with a maximum value of 2147483638. There are several system views that can be queried to get the user ID. The simplest one is the ALL_USERS view. Oracle GoldenGate does not validate the user ID. If the user that is associated with the specified user ID is dropped and recreated, a new user ID is assigned; therefore, EXCLUDEUSERID becomes invalid for that user. -
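For example, the following illustrative query (the user name is hypothetical) returns the user ID to specify with EXCLUDEUSERID:
SELECT user_id FROM all_users WHERE username = 'GGSREP';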
FAILOVERTARGETDESTID
n
-
Valid for Extract for Oracle.
When using Oracle GoldenGate Extract processes in an Oracle Data Guard configuration, the GoldenGate Extract process must remain behind the redo that has been applied to the Oracle Data Guard standby database. The
FAILOVERTARGETDESTID
parameter is used to identify the LOG_ARCHIVE_DEST_n initialization parameter that points to the standby, that is, the failover target which Extract must remain behind. This parameter is used in combination with HANDLEDLFAILOVER to control whether Extract will throttle its writing of trail data based on the apply progress of the Oracle Data Guard standby database. Note that FAILOVERTARGETDESTID is not needed if the Data Guard configuration has Fast Start Failover (FSFO) enabled. The minimum value is 0, the maximum is 32, and the default is 0. - Example
-
To determine the correct value for the
TRANLOGOPTIONS FAILOVERTARGETDESTID
Extract parameter, connect to the database from which Extract is extracting data, and issue the following command.
SQL> show parameters log_archive_dest

NAME                 TYPE    VALUE
-----------------------------------------------------------------------------
log_archive_dest_1   string  location=USE_DB_RECOVERY_FILE_DEST, valid_for=(ALL_LOGFILES, ALL_ROLES)
log_archive_dest_2   string  service="ggs2d", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="GGS2D" net_timeout=30, valid_for=(online_logfile,all_roles)
-
FETCHPARTIALJSON
-
Valid for Extract for MySQL
Use this option in the Extract parameter file to directly fetch data from the table, if there are partial updates to the
JSON
datatype columns of a table.
Note:
Processing JSON column data updates depends on the value of the MySQL server variable binlog_row_value_options, which needs to be set to PARTIAL_JSON, and the Extract parameter file must include the FETCHPARTIALJSON parameter. -
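For example, an illustrative setup for FETCHPARTIALJSON: set the server variable on the MySQL source, then add the option to the Extract parameter file.
SET GLOBAL binlog_row_value_options = 'PARTIAL_JSON';
TRANLOGOPTIONS FETCHPARTIALJSON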
FETCHPARTIALLOB
-
Valid for Extract in integrated capture mode for Oracle.
Use this option when replicating to a heterogeneous target or in other conditions where the full LOB image is required. It causes Extract to fetch the full LOB object, instead of using the partial change object from the redo record. By default, the database logmining server sends Extract a whole or partial LOB, depending on whether all or part of the source LOB was updated. To ensure the correct snapshot of the LOB, the Oracle Flashback feature must be enabled for the table and Extract must be configured to use it. The Extract
FETCHOPTIONS
parameter controls fetching and must be set to USESNAPSHOT (the default in the absence of NOUSESNAPSHOT
). Without a Flashback snapshot, Extract fetches the LOB from the table, which may be a different image from the point in time when the redo record was generated. -
FETCHPARTIALXML
-
Valid for Extract in integrated capture mode for Oracle.
Use this option when replicating to a heterogeneous target or in other conditions where the full LOB image is required. It causes Extract to fetch the full XML document, instead of using the partial change image from the redo record. By default, the database logmining server sends Extract a whole or partial XML document, depending on whether all or part of the source XML was updated. To ensure the correct snapshot of the XML, the Oracle Flashback feature must be enabled for the table and Extract must be configured to use it. The Extract
FETCHOPTIONS
parameter controls fetching and must be set to USESNAPSHOT (the default in the absence of NOUSESNAPSHOT
). Without a Flashback snapshot, Extract fetches the XML document from the table, which may be a different image from the point in time when the redo record was generated. -
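For example, an illustrative Extract configuration that requests full images for both LOB and XML columns, together with the snapshot-based fetching described above:
FETCHOPTIONS USESNAPSHOT
TRANLOGOPTIONS FETCHPARTIALLOB
TRANLOGOPTIONS FETCHPARTIALXML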
FILTERTABLE
table
-
Valid for Extract for MySQL, PostgreSQL, and SQL Server.
Use this option to identify a source transaction for filtering. If a source transaction includes any operation for the specified
FILTERTABLE
, then that transaction is identified as a replicated transaction. Transaction filtering is based on the GETREPLICATES/IGNOREREPLICATES and GETAPPLOPS/IGNOREAPPLOPS parameters. The default is IGNOREREPLICATES, so all replicated transactions are ignored by default for the filter table.
This option may be used to avoid data looping in a bidirectional configuration of Oracle GoldenGate by specifying FILTERTABLE as the fully qualified name of the checkpoint table used by the target Replicat. When a Replicat uses a checkpoint table, it writes a recovery record in the checkpoint table at the end of each transaction that it applies. Because all transactions applied by the Replicat contain an update to the checkpoint table, Extract ignores the entire transaction applied by the Replicat, which prevents data looping. For PostgreSQL and SQL Server, ensure that TRANDATA has been added for the checkpoint table.
If using a parallel Replicat in a bidirectional replication for MySQL and PostgreSQL, then multiple filter tables are supported using the TRANLOGOPTIONS FILTERTABLE option. Multiple filter tables allow TRANLOGOPTIONS FILTERTABLE to be specified multiple times with different table names or wildcards. You can include single or multiple TRANLOGOPTIONS FILTERTABLE entries in the Extract parameter file. In the following example, multiple TRANLOGOPTIONS FILTERTABLE entries are included in the Extract parameter file with explicit object names and wildcards.
TRANLOGOPTIONS FILTERTABLE ggs.chkpt2
TRANLOGOPTIONS FILTERTABLE ggs.chkpt_RABC_*
For information about creating a checkpoint table, see Creating a Checkpoint Table in Administering Oracle GoldenGate. To specify object names and wildcards correctly, see Using Wildcards in Database Object Names in Administering Oracle GoldenGate.
Specify the fully qualified name of the filter table. It is used to avoid looping in the bi-directional scenario.
- Example
-
TRANLOGOPTIONS FILTERTABLE ggschema.repcheckpointtable
-
FORCEFETCHLOB
-
Valid for Extract in classic and integrated capture modes for Oracle.
Overrides the default behavior of capturing LOB data from the redo log. Causes LOBs to be fetched from the database by default.
Caution:
If a value gets deleted before the fetch occurs, Extract writes a null to the trail. If a value gets updated before a fetch, Extract writes the updated value. To prevent these inaccuracies, try to keep Extract latency low. The Oracle GoldenGate documentation provides guidelines for tuning process performance. Also, see Interactions Between Fetches from a Table and DDL in Using Oracle GoldenGate for Oracle Database for instructions on setting fetch options. -
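For example (illustrative):
TRANLOGOPTIONS FORCEFETCHLOB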
GETCTASDML | NOGETCTASDML
-
Enables Create Table As Select (CTAS) functionality. When
GETCTASDML
is enabled, CTAS DMLs are sent from LogMiner and replicated on the target. This option is enabled by default. Execution of the CTAS DDL is suppressed on the target. This parameter cannot be enabled while using the DDL metadata trigger. Trail files produced with the CTAS functionality enabled cannot be consumed by a Replicat version lower than 12.1.2.1.0.
Use GETCTASDML to allow CTAS to replay the inserts of the CTAS, thus preserving OIDs during replication. This parameter is only supported with Integrated Dictionary, and any downstream Replicat must be 12.1.2.1 or greater to consume the trail; otherwise, there may be divergence. -
HANDLEDLFAILOVER [ STANDBY_WARNING value | STANDBY_ABEND value ]
-
Valid for Extract for Oracle
STANDBY_WARNING and STANDBY_ABEND are valid for Oracle Database 21c and higher.
When using this for data loss in a Data Guard configuration without Fast Start Failover (FSFO), you must set the
FAILOVERTARGETDESTID
Extract parameter to identify the archive log destination ID where the standby can be connected.
Extract is considered to be in a stalled state when Extract queries the standby database apply SCN information (SELECT applied_scn FROM v$archive_dest WHERE dest_id=n) and this SCN is less than the Extract processing LCR SCN. In this case, Extract does not process the LCR and waits until the applied_scn becomes greater than or equal to the Extract processing LCR SCN.
-
STANDBY_WARNING value
- The amount of time before a warning message is written to the Extract report file, if Extract is stalled. The default is 60 seconds.
-
STANDBY_ABEND value
- The amount of time before Extract abends, if Extract is stalled. The default is 30 minutes.
If both STANDBY_WARNING and STANDBY_ABEND are specified, STANDBY_ABEND should always be greater than STANDBY_WARNING. -
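For example (illustrative), enabling the HANDLEDLFAILOVER throttling behavior with the default warning and abend thresholds:
TRANLOGOPTIONS HANDLEDLFAILOVER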
-
IFILOCKSECONDS seconds
-
Valid for DB2 z/OS
Sets the interval in seconds, for which the Extract holds the implicit locks held in the database by the calls to IFCID 0306. The locks can affect the ability to perform certain database operations such as
REORGS
. The default value is 20 seconds, with minimum and maximum values of 1 second and 300 seconds, respectively.
Note:
If the
IFILOCKSECONDS
parameter is set for a longer duration, other database operations such as REORGS can be impacted due to internal locking caused by the Extract IFI calls. Therefore, if any lock contention occurs in relation to an Extract, either set the lock timeout for the operation to a duration longer than the value of the IFILOCKSECONDS
parameter, or shut down the Extract. -
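For example, the following illustrative entry raises the IFILOCKSECONDS interval within the documented 1 to 300 second range:
TRANLOGOPTIONS IFILOCKSECONDS 30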
IGNOREDATACAPTURECHANGES | NOIGNOREDATACAPTURECHANGES
-
Valid for DB2 LUW
Controls whether or not Extract captures tables for which
DATA CAPTURE CHANGES
is not set. IGNOREDATACAPTURECHANGES ignores tables for which DATA CAPTURE CHANGES is not set. Use if tables were specified with a wildcard to ensure that processing continues for tables that do have change capture set. A warning is issued to the error log for tables that were skipped. The default is NOIGNOREDATACAPTURECHANGES
. -
INCLUDEREGIONID | INCLUDEREGIONIDWITHOFFSET
-
Valid for Extract in integrated mode for Oracle only.
These options support the Oracle data type
TIMESTAMP WITH TIME ZONE
specified as TZR (which represents the time zone region, such as US/Pacific). By default, Extract abends on TIMESTAMP WITH TIME ZONE if it includes a time zone region. These options enable you to handle this timestamp based on the target database type.
When Extract detects that the source data type is
TIMESTAMP
and there is a region ID mapping token, Replicat applies the timestamp as follows:-
A
TIMESTAMP WITH TIME ZONE
with TZR
is applied if the target Oracle version supports it. -
A timestamp with a UTC offset is applied to a heterogeneous database, or to an earlier version of Oracle that does not support
TIMESTAMP WITH TIME ZONE
with TZR
.
-
INCLUDEREGIONID
-
Valid for Oracle Integrated Extract only.
The
INCLUDEREGIONID
is deprecated for Oracle GoldenGate 19c (19.1.0). From Oracle GoldenGate 19c (19.1.0) onward, TIMESTAMP WITH TIME ZONE with region ID data is included by default, including initial load.
Use when replicating from an Oracle source to an Oracle target of the same version or later. When INCLUDEREGIONID is specified, Extract adds a column index and the two-byte TMZ value as a time-zone mapping token and outputs it to the trail in the UTC format of YYYY-MM-DD HH:MI.SS.FFFFFF +00:00
. -
INCLUDEREGIONIDWITHOFFSET
-
Valid for Oracle Integrated Extract only.
Use this option to convert the region ID to an hour and minutes offset value (for example, +06:00). If the option is not specified, then the timestamp is always written to the trail file in UTC and the time zone is always +00:00.
If you need to preserve the time zone value in hour and minutes instead of UTC, then this option can be used.
In the following cases, the option is forced on to preserve the TIMEZONE value as an hour and minutes offset:-
Old trail file format because Replicat does not support region ID.
-
XML
,TEXT
, andSQL
format because they don't support region ID.
-
-
-
INCLUDETAG tag
-
Valid for integrated Extract.
Use
INCLUDETAG
tag
to include specific changes in trail files. The tag value can be up to 2000 hexadecimal digits (0-9, A-F).
Note:
FFFF and + (plus symbol) are not supported for tag usage.
To avoid conflicts, don't use INCLUDETAG in conjunction with EXCLUDETAG.
Example:
tranlogoptions includetag 00
-
LOB_CHUNK_SIZE
-
Valid for SQL Server, PostgreSQL.
If you have huge LOB data sizes, then you can adjust the
LOB_CHUNK_SIZE
from the default of 4000 bytes to a higher value, up to 65535 bytes, so that the fetch size is increased, reducing the trips needed to fetch the entire LOB.
Example:
TRANLOGOPTIONS LOB_CHUNK_SIZE 8000
(PostgreSQL) Specifies the size of chunk for the
LOB
(CLOB
/BLOB
) data that will be used to push inCOM
. It's unit is in bytes. The minimum and maximumlob_chunk_size
values lies between 4000 to 65535 bytes. -
INTEGRATEDPARAMS (
parameter
value
[, ...])
-
Valid for Extract in integrated capture mode for Oracle Standard or Enterprise Edition 12c or later.
Passes parameters and values to the Oracle Database logmining server when Extract is in integrated capture mode. The input must be in the form of
parameter value
, as in:
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
Valid
parameter
specifications and their values are the following:-
max_sga_size
-
Specifies the amount of SGA memory that is used by the database logmining server. Can be a positive integer in megabytes. The default is 1 GB if
streams_pool_size
is greater than 1 GB; otherwise, it is 75% of streams_pool_size
. -
parallelism
-
Specifies the number of processes supporting the database logmining server. Can be a positive integer. The default is 2.
-
downstream_real_time_mine
-
Specifies whether or not integrated capture mines a downstream mining database in real-time mode. A value of
Y
specifies real-time capture and requires standby redo logs to be configured at the downstream mining database. A value of N specifies capture from archived logs shipped to the downstream mining database. The default is N
. -
enable_procedural_replication
-
Enables procedural replication at capture. Procedural replication is disabled by default. A value of
Y
enables procedural replication. Once this option is turned on for an Extract, it remains on. The parameter value cannot be toggled back.
-
-
ENABLE_AUTO_CAPTURE | DISABLE_AUTO_CAPTURE
-
Set this option to enable auto capture mode, which delivers LCRs of tables enabled for automatic capture. This option can be set when the source database's Oracle binary version is 21c or higher.
-
MAXAUTOCMTTRANSSIZE (range, default)
-
Valid for DB2 for i only
Provides the range of the maximum autocommitted transaction size.
DB2 for i autocommitted records (journal entry has CCID equal to 0) do not have a commit record in the journal and therefore Oracle GoldenGate must create an implicit transaction to include these records in the trail. The default allows for a single record to be included in a single transaction, which maintains the accuracy of the indicated IO Time for each record because the IO time is based on the commit for the transaction.
This parameter sets the maximum number of records that will be included in an implicitly created transaction, but the number could be less if any other type of entry is seen in the journal before the maximum is reached. This behavior avoids issues with overlap for checkpoints on records that belong to explicitly committed records.
Setting the value for this parameter to 1 (the default) will provide an accurate IO time for each record in the trail for records that are autocommitted (have a CCID of 0 in the journal entry), at the potential expense of throughput for the Extract. The value of this parameter also affects the maximum potential size of a cached transaction for these records in memory. Setting it to a lower value causes the transaction memory to be lower if the Extract is able to store the maximum number of entries per implicit transaction. By definition, there can only be one such implicit transaction in memory at any given time, because any other transaction records will cause an immediate commit to the trail of any records in an implicit transaction already in memory.
The default range is between 1-10000 and the default value is 1.
-
MININGUSER {/ |
user
} [, MININGPASSWORD
password]
-
[
algorithm
ENCRYPTKEY {
key_name
| DEFAULT}] [SYSDBA]]
-
Valid for Extract in integrated capture mode for Oracle.
Specifies login credentials for Extract to log in to a downstream Oracle mining database to interact with the logmining server. Can be used instead of the
MININGUSERALIAS
option if an Oracle GoldenGate credential store is not being used.
This user must:
-
Have the privileges granted in
dbms_goldengate_auth.grant_admin_privilege
. -
Be the user that issues the
MININGDBLOGIN
or MININGDBLOGINALIAS and REGISTER EXTRACT or UNREGISTER EXTRACT commands for the Extract group that is associated with this MININGUSERALIAS.
-
Not be changed while Extract is in integrated capture mode.
-
/
-
Directs Oracle GoldenGate to use an operating-system login for Oracle, not a database user login. Use this argument only if the database allows authentication at the operating-system level. Bypassing database-level authentication eliminates the need to update Oracle GoldenGate parameter files if application passwords frequently change.
To use this option, the correct user name must exist in the database, in relation to the value of the Oracle
OS_AUTHENT_PREFIX
initialization parameter. The value set with OS_AUTHENT_PREFIX is concatenated to the beginning of a user's operating system account name and then compared to the database name. Those two names must match.
When OS_AUTHENT_PREFIX is set to ' ' (a null string), the user name must be created with IDENTIFIED EXTERNALLY. For example, if the OS user name is ogg, you would use the following to create the database user:
CREATE USER ogg IDENTIFIED EXTERNALLY;
When
OS_AUTHENT_PREFIX
is set to OPS$ or another string, the user name must be created in the format of:
OS_AUTHENT_PREFIX_value OS_user_name
For example, if the OS user name is
ogg
, you would use the following to create the database user:
CREATE USER ops$ogg IDENTIFIED BY oggpassword;
-
user
-
Specifies the name of the mining database user or a SQL*Net connect string.
-
password
-
The user's password. Use when database authentication is required to specify the password for the database user. If the password was encrypted by means of the
ENCRYPT PASSWORD
command, supply the encrypted password; otherwise, use the clear-text password. If the password is case-sensitive, type it that way. If either the user ID or password changes, the change must be made in the Oracle GoldenGate parameter files, including the re-encryption of the password if necessary. -
algorithm
-
Specifies the encryption algorithm that was used to encrypt the password with
ENCRYPT PASSWORD
. Can be one of:
AES128
AES192
AES256
BLOWFISH
-
ENCRYPTKEY {
key_name
| DEFAULT}
-
Specifies the encryption key that was specified with
ENCRYPT PASSWORD
.-
ENCRYPTKEY
key_name
specifies the logical name of a user-created encryption key in the ENCKEYS lookup file. Use if ENCRYPT PASSWORD was used with the KEYNAME
key_name
option. -
ENCRYPTKEY DEFAULT
directs Oracle GoldenGate to use a random key. Use if ENCRYPT PASSWORD was used with the KEYNAME DEFAULT
option.
-
-
SYSDBA
-
Specifies that the user logs in as
sysdba
.
-
-
MININGUSERALIAS
alias
-
Valid for Extract in integrated capture mode for Oracle.
Specifies the alias for the login credentials that Extract uses to log in to a downstream Oracle mining database to interact with the logmining server. Can be used instead of
MININGUSER
if an Oracle GoldenGate credential store is being used.
This alias must be:
-
Associated with a database user login credential that is stored in the local Oracle GoldenGate credential store. This user must have the privileges granted in
dbms_goldengate_auth.grant_admin_privilege
. -
The user that issues the
MININGDBLOGIN
or MININGDBLOGINALIAS and REGISTER EXTRACT or UNREGISTER EXTRACT commands for the Extract group that is associated with this MININGUSERALIAS.
This alias and user must not be changed while Extract is in integrated capture mode.
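For example, an illustrative MININGUSERALIAS entry using a hypothetical credential-store alias and domain:
TRANLOGOPTIONS MININGUSERALIAS ogg_mine DOMAIN OracleGoldenGate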
-
-
MIXEDENDIAN [ON|OFF]
- Valid for DB2
LUW with Oracle GoldenGate primary Extract
Oracle GoldenGate Extract for DB2 LUW supports cross-endian capture where the database and Oracle GoldenGate are running on different byte order servers. Detection of byte order is automatic for DB2 LUW database version 10.5 and higher. If you need to disable auto-detection on DB2 LUW 10.5, then you can override it by specifying this parameter. For version 10.1, the parameter must be used in the Extract parameter file for the cross-endian capture. By default, the value is set to
OFF
for version 10.1.
Syntax:
TRANLOGOPTIONS MIXEDENDIAN [ON|OFF]
ON
: If this is set, then the Extract assumes that the database and Oracle GoldenGate are running on servers with a different byte order and necessary byte reversal conversion is performed.
OFF
: If this is set, then the Extract assumes that the database and Oracle GoldenGate are running on servers with the same byte order and no byte order reversal conversion is performed. -
MANAGECDCCLEANUP | NOMANAGECDCCLEANUP
-
Valid for SQL Server.
MANAGECDCCLEANUP
is the default and recommended setting that instructs the Extract to validate the existence of the Oracle GoldenGate CDC Cleanup job or Purge Change Data task, depending on the architecture and version of Oracle GoldenGate.For all Oracle GoldenGate classic architecture versions, and for microservices versions prior to Oracle GoldenGate 21.4, use the
ogg_cdc_cleanup_setup.bat/sh
program to install the Oracle GoldenGate CDC Cleanup job and associated tables and stored procedures.For Oracle GoldenGate microservices 21.4 and later installations, create a Purge Change Data task from the Tasks page from the Configuration section of the Administration Service of the WebUI. The Purge Change Data task creates the required stored procedures and associated tables, and handles the purge function within the Oracle GoldenGate and not through the SQL Server Agent job.
The
NOMANAGECDCCLEANUP
option instructs Extract not to check for the existence of the Oracle GoldenGate CDC Cleanup job or Purge Change Data task. This is not a recommended option for production environments but can be used for testing an Extract without having to create the Oracle GoldenGate CDC Cleanup job or task.
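For example, an illustrative test-only entry that disables the check:
TRANLOGOPTIONS NOMANAGECDCCLEANUP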
-
PERFORMANCEPROFILE HIGH|MEDIUM|LOW_RES
- Valid for Extract in Integrated Capture mode.
-
QUERYTIMEOUT seconds
-
Valid for SQL Server.
Specifies how long queries to SQL Server will wait for results before reporting a timeout error message. This option takes an integer value to represent the number of seconds. The default query timeout value is 300 seconds (5 minutes). The minimum value is 0 seconds (infinite timeout). The maximum is 2147483645 seconds.
The following example instructs SQL Server to wait 60 seconds for results before timing out.
TRANLOGOPTIONS QUERYTIMEOUT 60
-
QUERYRETRYCOUNT
seconds
-
Valid for Extract for SQL Server and MySQL.
Specifies how many times to retry calls to the CDC stored procedure used by Extract, in case of a result set timeout.
QUERYRETRYCOUNT
can be specified to retry multiple times. If all of the retry attempts fail, Extract abends with the normal connection timeout error message.For SQL Server, the default is one retry attempt, after which the process abends.The minimum setting (0) is infinite, maximum is 1000, and default is 1.
For MySQL, the minimum and default setting is 50 and maximum is 1000. There is no infinite value. Any attempt to set the
QUERYRETRYCOUNT
to less than the minimum will be ignored with no error or warning.
The following example causes Extract to attempt its CDC stored procedure call 4 times:
TRANLOGOPTIONS QUERYRETRYCOUNT 4
The following example causes Extract to attempt its CDC stored procedure call 100 times:
TRANLOGOPTIONS QUERYRETRYCOUNT 100
-
READQUEUESIZE
size
-
Valid for MySQL.
Specifies the internal queue size, in bytes, for transaction data. It can be increased to improve performance. Valid values are integers from 3 through 1500. The default is 256 bytes; start with the default and evaluate performance before adjusting upward.
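For example, the following illustrative entry doubles the default READQUEUESIZE:
TRANLOGOPTIONS READQUEUESIZE 512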
-
REDO_TRANSPORT_LAG_THRESHOLD seconds
- Valid for
Integrated Extract in Downstream Mining Mode.
Monitors the network latency between a source database and target database when redo logs are shipped. If the latency exceeds the specified threshold then a warning appears in the report file and a subsequent information message appears when the lag drops to the normal level.
The default threshold value is 30 seconds. The minimum threshold value that can be specified is 15 seconds.
For more information, see Configuring Redo Transport from Source to Downstream Mining Database.
-
REDO_TRANSPORT_LAG_TIMEOUT value
- Valid for Integrated Extract in Downstream Mining Mode.
The value provided for this parameter option is the time period, in seconds, for which Extract waits for redo from each thread. If all the threads have waited for the timeout and have not received any redo, then Extract abends.
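For example, illustrative entries that warn when redo transport lag exceeds 60 seconds and abend Extract if no redo is received from any thread for 600 seconds:
TRANLOGOPTIONS REDO_TRANSPORT_LAG_THRESHOLD 60
TRANLOGOPTIONS REDO_TRANSPORT_LAG_TIMEOUT 600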
-
REQUIRELONGDATACAPTURECHANGES | NOREQUIRELONGDATACAPTURECHANGES
-
Valid for DB2 LUW.
Controls the response of Extract when
DATA CAPTURE
is set to NONE or to CHANGES without INCLUDE LONGVAR COLUMNS and the parameter file includes any of the following Oracle GoldenGate parameters that require the presence of before images for some or all column values: GETBEFOREUPDATES, NOCOMPRESSUPDATES, and NOCOMPRESSDELETES. Both of those DATA CAPTURE settings prevent the logging of before values for LONGVAR
columns. If those columns are not available to Extract, it can affect the integrity of the target data. -
SOURCE_OS_TIMEZONE timezone
-
Valid for Extract in integrated capture mode for Oracle.
Specifies the system time zone of the source database. The system time zone of a database is usually given by the default time zone of its operating system, and can also be overridden by setting the
TZ
environment variable when the database is started. You should specify this option only if the source database and the Extract process use different system time zones, for example, in a downstream capture deployment where the source database and the Extract process run on different servers in different time zones.
You can specify the value of this option in a time zone region name or a UTC offset form, and you must use the same form used by the source database. For example, if the source database uses a region name form like America/New_York, then you must specify America/New_York, US/Eastern, or EST5EDT. Alternatively, if the source database uses a UTC offset form like -05:00, then you must use the syntax
(GMT)[+|-]hh[:mm]
. For example, GMT-05:00 or -5. -
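For example, illustrative entries showing the region name form and the UTC offset form described above (use the form that matches the source database):
TRANLOGOPTIONS SOURCE_OS_TIMEZONE America/New_York
TRANLOGOPTIONS SOURCE_OS_TIMEZONE GMT-05:00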
SKIPUNKNOWNEVENT
-
Valid for MySQL.
You can use this parameter in the Extract parameter file to enable skipping any unhandled or unknown event in the MySQL binary log. If this parameter is specified, then the Oracle GoldenGate for MySQL Extract continues processing without any error on finding an event that is not handled by the current Extract process.
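For example (illustrative):
TRANLOGOPTIONS SKIPUNKNOWNEVENT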
-
SUPPRESSNOOOPUPDATES
-
Valid for Extract on Oracle Database 12c (12.2) and later.
You can control whether no-op updates are filtered or not in Integrated Extract. The default is no suppression.
-
TRACKSCHEMACHANGES
- Valid for Db2 z/OS and MySQL
-
TRANCOUNT
- Valid for SQL Server.
Allows adjustment of the number of transactions processed per call by Extract to pull data from the SQL Server change data capture staging tables. Based on your transaction workload, adjusting this value may improve capture rate throughput. The minimum value is 1, maximum is 100, and the default is 10.
Example:
TRANLOGOPTIONS TRANCOUNT 20
This example instructs Extract to fetch 20 transactions at a time from change data capture enabled tables.
-
[TSLOOKUPBEGINLRI | TSLOOKUPENDLRI]
-
Valid for DB2 LUW v 10.1 and later.
When you specify an LRI range using these parameters, Extract looks for the timestamp specified in the
ADD
or ALTER EXTRACT command within this range. This helps Extract to optimize the lookup process for a particular timestamp in the database transaction log. The TSLOOKUPBEGINLRI parameter is mandatory while TSLOOKUPENDLRI is optional. Specifying only TSLOOKUPENDLRI without TSLOOKUPBEGINLRI is invalid. For example:
TRANLOGOPTIONS TSLOOKUPBEGINLRI 75200.666197, TSLOOKUPENDLRI 75207.666216
TRANLOGOPTIONS TSLOOKUPBEGINLRI 75200.666197
If the provided timestamp falls between the given LRI ranges or the provided timestamp falls after the
TSLOOKUPBEGINLRI
LRI timestamp, then Extract starts from a record with a timestamp equal to or nearest less than the provided timestamp.
If the provided timestamp falls before the TSLOOKUPBEGINLRI LRI timestamp, Extract is started from the specified TSLOOKUPBEGINLRI LRI. If the provided timestamp falls after the TSLOOKUPENDLRI timestamp, then Extract abends. If you only specify TSLOOKUPENDLRI
, then an informational message is displayed and Extract starts from a record with timestamp equal or nearest less than the provided timestamp. -
USENATIVEOBJSUPPORT
| NOUSENATIVEOBJSUPPORT
-
Valid for Extract in integrated capture mode for Oracle.
Integrated Capture adds redo-based capture for User Defined Type (UDT) and
ANYDATA
data types. It is enabled by default and can only be enabled if the source database version is 12.1.0.1 or greater and the source database compatibility is 12.0.0.0.0 or greater. Replicat from Oracle GoldenGate release 12.1.2.1.0 must be used. To use Native Support, all of your Oracle databases and Oracle GoldenGate instances must be release 12.1.0.1 or greater to be compatible.
If redo-based capture is enabled but a UDT contains an unsupported attribute, Integrated Capture retries to capture the UDT using fetch. For limitations of support for capture, see XML Data Types in Using Oracle GoldenGate for Oracle Database. If you create object tables by using a
CREATE TABLE AS SELECT
(CTAS) statement, Integrated Capture must be configured to capture DML from the CTAS operation in order to fully support object tables. For CTAS use information, see How Oracle GoldenGate Handles Derived Object Names in Using Oracle GoldenGate for Oracle Database.
The default is
USENATIVEOBJSUPPORT
if supported. -
USE_ROOT_CONTAINER_TIMEZONE
-
Valid for Oracle integrated Extract only.
This parameter is for a CDB environment. Each PDB in a CDB can use a different database time zone. If the database time zone is available, Extract tries to get the time zone of a PDB from Integrated Dictionary. The time zone extraction requires a patch on the mining database. If the patch is not available, Extract sends a query to the PDB to get the time zone. If the database patch or a connection to the PDB is not available, and this parameter is specified, Extract assumes that the PDB database time zone is the same as the root container database time zone.
-
VERSIONCHECK DYNAMIC | IMMEDIATE
-
This is valid for SQL Server.
Use this option when you want Extract to validate CDC object versions of common stored procedures, such as OracleCDCExtract and OracleGGCreateProcs, at startup.
DYNAMIC
(default) identifies the CDC object versions of tables specified in the parameter file once per table while records are processed.
IMMEDIATE
identifies CDC object version issues upfront, instead of while records are processed. Databases with large numbers of tables configured for capture require longer startup validation.
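For example, the following illustrative entry forces the upfront validation at startup:
TRANLOGOPTIONS VERSIONCHECK IMMEDIATE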