7 Configure
- Configuring Oracle GoldenGate for Distributed Applications and Analytics
- Logging
Logging is essential to troubleshooting Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) integrations with GG for DAA targets.
- Configuring Logging
7.1 Configuring Oracle GoldenGate for Distributed Applications and Analytics
This topic describes how to configure GG for DAA handlers.
- Running with Replicat
Review this topic before configuring a Replicat process in Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA).
- About Schema Evolution and Metadata Change Events
- About Configuration Property CDATA[] Wrapping
- Using Regular Expression Search and Replace
You can perform powerful search and replace operations on both schema data (catalog names, schema names, table names, and column names) and column value data, which are configured separately. Regular expressions (regex) are characters that customize a search string through pattern matching.
- Scaling Oracle GoldenGate for Distributed Applications and Analytics Delivery
- Coordinated Apply Support
- Configuring Cluster High Availability
Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) doesn't have built-in high availability functionality. You need to use standard cluster software's high availability capability to provide it.
- Using Identities in Oracle GoldenGate Credential Store
The Oracle GoldenGate credential store manages user IDs and their encrypted passwords (together known as credentials) that are used by Oracle GoldenGate processes to interact with the local database. The credential store eliminates the need to specify user names and clear-text passwords in the Oracle GoldenGate parameter files.
Parent topic: Configure
7.1.1 Running with Replicat
Review this topic before configuring a Replicat process in Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA).
This topic explains how to run the Java Adapter with the Oracle GoldenGate Replicat process.
7.1.1.1 Replicat Grouping
The Replicat process provides the configuration property GROUPTRANSOPS to control transaction grouping. By default, the Replicat process groups 1000 source transactions into a single target transaction. To turn off transaction grouping, set the GROUPTRANSOPS Replicat property to 1.
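As a sketch, a minimal Replicat parameter file that disables transaction grouping might look like the following (the group name myrep, the properties file path, and the schema names are illustrative placeholders):
REPLICAT myrep
TARGETDB LIBFILE libggjava.so SET property=dirprm/myrep.properties
GROUPTRANSOPS 1
MAP source_schema.*, TARGET target_schema.*;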
Parent topic: Running with Replicat
7.1.1.2 About Replicat Checkpointing
In addition to the Replicat checkpoint file (*.cpr), an additional checkpoint file, dirchk/group.cpj, is created that contains information similar to CHECKPOINTTABLE in Replicat for the database.
Parent topic: Running with Replicat
7.1.1.3 About Initial Load Support
Replicat can already read trail files that come from both the online capture and initial load processes that write to a set of trail files. In addition, Replicat can also be configured to support the delivery of the special run initial load process using the RMTTASK specification in the Extract parameter file. For more details about configuring the direct load, see Initial Load Extract.
Note:
The SOURCEDB or DBLOGIN parameter specifications vary depending on your source database.
Parent topic: Running with Replicat
7.1.1.4 About the Unsupported Replicat Features
The following Replicat features are not supported in this release:
- BATCHSQL
- SQLEXEC
- Stored procedures
- Conflict detection and resolution (CDR)
Parent topic: Running with Replicat
7.1.1.5 How the Mapping Functionality Works
The Oracle GoldenGate Replicat process supports mapping functionality to custom target schemas. You must use the Metadata Provider functionality to define a target schema or schemas, and then use the standard Replicat mapping syntax in the Replicat configuration file to define the mapping.
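For example, assuming a target schema has been defined through the Metadata Provider, a Replicat mapping entry might look like the following (the schema, table, and column names here are illustrative placeholders):
MAP SOURCESCHEMA.TCUSTMER, TARGET TARGETSCHEMA.TCUSTMER, COLMAP (USEDEFAULTS, CUST_NAME2 = CUST_NAME);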
Parent topic: Running with Replicat
7.1.2 About Schema Evolution and Metadata Change Events
The metadata in trail feature allows seamless runtime handling of metadata change events by Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA), including schema evolution and schema propagation to GG for DAA target applications. NO_OBJECTDEFS is a sub-parameter of the Extract and Replicat EXTTRAIL and RMTTRAIL parameters that lets you suppress the metadata in trail feature and revert to using a static metadata definition.
The Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) Handlers and Formatters provide functionality to take action when a metadata change event is encountered. The ability to take action in the case of metadata change events depends on the metadata change events being available in the source trail file. Oracle GoldenGate supports metadata in trail and the propagation of DDL data from a source Oracle Database. If the source trail file does not have metadata in trail and DDL data (metadata change events), then it is not possible for GG for DAA to provide any metadata change event handling.
7.1.3 About Configuration Property CDATA[] Wrapping
The Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) Handlers and Formatters support the configuration of many parameters in the Java properties file whose values may be interpreted as white space. The configuration handling of the Java Adapter trims white space from configuration values in the Java configuration file. This trimming behavior may be desirable for some configuration values and undesirable for others. Alternatively, you can wrap white space values inside special syntax to preserve the white space for selected configuration variables. GG for DAA borrows the XML syntax of CDATA[] to preserve white space. Values that would be considered white space can be wrapped inside CDATA[].
The following is an example attempting to set a new-line delimiter for the Delimited Text Formatter:
gg.handler.{name}.format.lineDelimiter=\n
This configuration will not be successful. The new-line character is interpreted as white space and is trimmed from the configuration value. Therefore, the gg.handler setting effectively results in the line delimiter being set to an empty string.
To preserve the configuration of the new-line character, wrap the character in the CDATA[] wrapper as follows:
gg.handler.{name}.format.lineDelimiter=CDATA[\n]
Configuring the property with the CDATA[] wrapping preserves the white space, and the line delimiter will then be a new-line character.
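The trim-unless-wrapped behavior can be sketched in plain Java. This is not the actual GG for DAA implementation, only an illustration of the resolution rule described above (the class and method names are hypothetical):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CdataUnwrapDemo {
    // Values wrapped in CDATA[] keep their whitespace verbatim.
    static final Pattern CDATA = Pattern.compile("^CDATA\\[(.*)\\]$", Pattern.DOTALL);

    static String resolve(String rawValue) {
        Matcher m = CDATA.matcher(rawValue);
        if (m.matches()) {
            return m.group(1);   // preserve whitespace verbatim
        }
        return rawValue.trim();  // default behavior trims whitespace
    }

    public static void main(String[] args) {
        // A bare new-line is trimmed to an empty string...
        System.out.println(resolve("\n").isEmpty());
        // ...while a CDATA[]-wrapped new-line survives intact.
        System.out.println(resolve("CDATA[\n]").equals("\n"));
    }
}
```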
7.1.4 Using Regular Expression Search and Replace
You can perform powerful search and replace operations on both schema data (catalog names, schema names, table names, and column names) and column value data, which are configured separately. Regular expressions (regex) are characters that customize a search string through pattern matching.
You can match a string against a pattern or extract parts of the match. Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) uses the standard Oracle Java regular expressions package, java.util.regex; see Regular Expressions in The Single UNIX Specification, Version 4.
7.1.4.1 Using Schema Data Replace
You can replace schema data using the gg.schemareplaceregex and gg.schemareplacestring properties. Use gg.schemareplaceregex to set a regular expression, and then use it to search catalog names, schema names, table names, and column names for corresponding matches. Matches are then replaced with the content of the gg.schemareplacestring value. The default value of gg.schemareplacestring is an empty string ("").
For example, some system table names start with a dollar sign, like $mytable. You may want to replicate these tables even though most technologies do not allow dollar signs in table names. To remove the dollar sign, you could configure the following replace strings:
gg.schemareplaceregex=[$]
gg.schemareplacestring=
The resulting searched and replaced table name is mytable. These properties also support CDATA[] wrapping to preserve whitespace in configuration values. The equivalent of the preceding example using CDATA[] wrapping is:
gg.schemareplaceregex=CDATA[[$]]
gg.schemareplacestring=CDATA[]
The schema search and replace functionality supports multiple search regular expressions and replacement strings using the following configuration syntax:
gg.schemareplaceregex=some_regex
gg.schemareplacestring=some_value
gg.schemareplaceregex1=some_regex
gg.schemareplacestring1=some_value
gg.schemareplaceregex2=some_regex
gg.schemareplacestring2=some_value
Parent topic: Using Regular Expression Search and Replace
7.1.4.2 Using Content Data Replace
You can replace content data using the gg.contentreplaceregex and gg.contentreplacestring properties to search the column values using the configured regular expression and replace matches with the replacement string. For example, this is useful to replace line feed characters in column values. If the delimited text formatter is used, then line feeds occurring in the data are incorrectly interpreted as line delimiters by analytic tools.
You can configure any number of content replacement regex search values. The regex searches and replacements are done in the order of configuration. Configured values must follow this ordering:
gg.contentreplaceregex=some_regex
gg.contentreplacestring=some_value
gg.contentreplaceregex1=some_regex
gg.contentreplacestring1=some_value
gg.contentreplaceregex2=some_regex
gg.contentreplacestring2=some_value
Configuring a subscript of 3 without a subscript of 2 causes the subscript 3 configuration to be ignored.
Attention:
Regular expression searches and replacements require computer processing and can reduce the performance of the Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) process.
To replace line feeds with a blank character you could use the following property configurations:
gg.contentreplaceregex=[\n]
gg.contentreplacestring=CDATA[ ]
This changes the column value from:
this is
me
to:
this is me
Both values support CDATA[] wrapping. The second value must be wrapped in a CDATA[] wrapper because a single blank space would otherwise be interpreted as whitespace and trimmed by the GG for DAA configuration layer.
In addition, you can configure multiple search and replace strings. For example, you may also want to trim leading and trailing white space from column values, in addition to trimming line feeds, using the regex ^\\s+|\\s+$:
gg.contentreplaceregex1=^\\s+|\\s+$
gg.contentreplacestring1=CDATA[]
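The two replacements above can likewise be tried out with java.util.regex before deployment. This sketch applies both in configuration order, as the Replicat would:

```java
import java.util.regex.Pattern;

public class ContentReplaceDemo {
    public static void main(String[] args) {
        // Equivalent of gg.contentreplaceregex=[\n] with
        // gg.contentreplacestring=CDATA[ ]: line feeds become blanks.
        String step1 = Pattern.compile("[\n]").matcher("this is\nme").replaceAll(" ");
        // Equivalent of gg.contentreplaceregex1=^\s+|\s+$ with
        // gg.contentreplacestring1=CDATA[]: trim leading/trailing whitespace.
        String step2 = Pattern.compile("^\\s+|\\s+$")
                              .matcher("  " + step1 + " ")
                              .replaceAll("");
        System.out.println(step2); // prints "this is me"
    }
}
```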
Parent topic: Using Regular Expression Search and Replace
7.1.5 Scaling Oracle GoldenGate for Distributed Applications and Analytics Delivery
Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) supports breaking down the processing of source trail files across either multiple Replicat processes or, using Coordinated Delivery, multiple Java Adapter instances inside a single Replicat process, to improve throughput. This allows you to scale GG for DAA delivery.
There are some cases where the throughput to GG for DAA integration targets is not sufficient to meet your service level agreements even after you have tuned your handler for maximum performance. When this occurs, you can configure parallel processing and delivery to your targets using one of the following methods:
- Multiple Replicat processes can be configured to read data from the same source trail files. Each of these Replicat processes is configured to process a subset of the data in the source trail files so that all of the processes collectively process the source trail files in their entirety. There is no coordination between the separate Replicat processes using this solution.
- Oracle GoldenGate Coordinated Delivery can be used to parallelize processing the data from the source trail files within a single Replicat process. This solution involves breaking the trail files down into logical subsets, each of which is processed by a different delivery thread. For more information about Coordinated Replicat, see About Coordinated Replicat in the Oracle GoldenGate Microservices Architecture Documentation.
With either method, you can split the data into parallel processing for improved throughput. Oracle recommends breaking the data down in one of the following two ways:
- Splitting Source Data By Source Table – Data is divided into subsections by source table. For example, Replicat process 1 might handle source tables table1 and table2, while Replicat process 2 might handle data for source tables table3 and table4. Data is split by source table, and the individual table data is not subdivided.
- Splitting Source Table Data into Sub Streams – Data from source tables is split. For example, Replicat process 1 might handle half of the range of data from source table1, while Replicat process 2 might handle the other half of the data from source table1.
- If you are using Coordinated Replicat, make sure that you add TARGETDB LIBFILE libggjava.so SET property=path_to_deployment_home/etc/conf/ogg/your_replicat_name.properties.
Additional limitations:
- Parallel apply is not supported.
- The BATCHSQL parameter is not supported.
Example 7-1 Scaling Support for the Oracle GoldenGate for Distributed Applications and Analytics Handlers
Handler Name | Splitting Source Data By Source Table | Splitting Source Table Data into Sub Streams
---|---|---
Cassandra | Supported | Supported when:
Elastic Search | Supported | Supported
HBase | Supported when all required HBase namespaces are pre-created in HBase. | Supported when:
HDFS | Supported | Supported with some restrictions.
JDBC | Supported | Supported
Kafka | Supported | Supported for formats that support schema propagation, such as Avro. This is less desirable due to multiple instances feeding the same schema information to the target.
Kafka Connect | Supported | Supported
Kinesis Streams | Supported | Supported
MongoDB | Supported | Supported
Java File Writer | Supported | Supported with the following restrictions: you must select a naming convention for generated files where the file names do not collide. Colliding file names may result in a Replicat abend and/or polluted data. When using coordinated apply, it is suggested that you configure ${groupname} in the file naming template.
7.1.6 Coordinated Apply Support
Support Matrix for Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) Targets
Feature index
- Option1 - Discrete threads per table - Initial load
- Option2 - Shared threads per table - Initial load
- Option3 - Discrete threads per table - CDC
- Option4 - Shared threads per table - CDC
Target | Option1 | Option2 | Option3 | Option4
---|---|---|---|---
Snowflake stage/merge | [✓] | [✓] | [✓] | [✓] *SNOW
BigQuery stage/merge | [✓] | [✓] | [✓] | [✓] *BQ
Databricks | [✓] | [✓] *DBX | [✓] | [✓] *DBX
ADW | [✓] | [✓] | [✓] | [✓]
Synapse | [✓] | [✓] | [✓] | [✓] *SYN
Redshift | [✓] | [✓] *RS | [✓] | [✓] *RS
OneLake | [✓] | [x] | [✓] | [x]
Iceberg | [✓] | [✓] *ICE | [✓] | [✓] *ICE
Snowflake streaming | [✓] | [✓] *TC | [✓] | [✓] *TC
BigQuery streaming | [✓] | [✓] *TC | [✓] | [✓] *TC
Cassandra | [✓] | [✓] *TC | [✓] | [✓] *TC
HBase | [✓] | [✓] *HBASE | [✓] | [✓] *HBASE
Elasticsearch | [✓] | [✓] | [✓] | [✓]
Kafka | [✓] | [✓] | [✓] | [✓]
Oracle Streaming | [✓] | [✓] | [✓] | [✓]
Kafka Connect | [✓] | [✓] | [✓] | [✓]
MongoDB | [✓] | [✓] | [✓] | [✓]
AJD | [✓] | [✓] | [✓] | [✓]
CosmosDB | [✓] | [✓] | [✓] | [✓]
Kinesis | [✓] | [✓] | [✓] | [✓]
Kafka Rest Proxy | [✓] | [✓] | [✓] | [✓]
Oracle NoSQL | [✓] | [✓] *TC | [✓] | [✓] *TC
Redis | [✓] | [✓] *REDIS | [✓] | [✓] *REDIS
Google Pub/Sub | [✓] | [✓] | [✓] | [✓]
File Writer | [✓] | [✓] *FW | [✓] | [✓] *FW
Hadoop (HDFS) | [✓] | [✓] *FW | [✓] | [✓] *FW
JDBC | [✓] | [✓] | [✓] | [✓]
*SNOW - Option4 is supported for Snowflake but not recommended. The MERGE statement holds a table lock until the transaction is committed. The lock times may hinder the performance of the Replicat.
*BQ - Option4 is supported for BigQuery but not recommended.
- At times, Replicat can ABEND with: Query error: Could not serialize access to table due to concurrent update.
- BigQuery jobs are retried, by default up to three times. There are a few configuration parameters related to this. For more information, see gg.eventhandler.bq.retries and gg.eventhandler.bq.totalTimeout in BigQuery Event Handler Configuration.
*DBX - Option4 is supported for Databricks but not recommended.
- If the INSERT or MERGE DML modifies the same partition in a table (or if the table is unpartitioned), it could lead to ConcurrentAppendException. See https://docs.databricks.com/en/optimizations/isolation-level.html#concurrentappendexception
*SYN - Option4 is supported. For Synapse, the isolation level of the transactional support defaults to READ UNCOMMITTED. See: https://learn.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql?view=sql-server-ver16#arguments
- If you modify the default isolation level, then there could be a performance impact and errors when using the MERGE statement.
*RS - Option2 and Option4 are supported for Redshift provided the Redshift database's isolation level is set to SNAPSHOT.
- To enable coordinated apply for Redshift, ensure that the Redshift database's isolation level is set to SNAPSHOT.
- The Redshift SNAPSHOT ISOLATION option allows higher concurrency, where concurrent modifications to different rows in the same table can complete successfully. SQL: ALTER DATABASE <sampledb> ISOLATION LEVEL SNAPSHOT;
*ICE - Option2 and Option4 are supported for most Iceberg catalogs except the Hadoop catalog.
*TC - Not recommended if auto table creation and/or altering is enabled. Threads of execution may conflict during table creation and/or altering.
*HBASE - Precreate namespaces and target tables in HBase.
*REDIS - Disable automatic index creation in the configuration; otherwise threads of execution may conflict with each other when creating indexes.
*FW - Configure a file naming template which ensures that file names are unique across threads of execution. A good strategy is to use ${groupname} in the template.
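As a sketch, a File Writer Handler file name template incorporating the group name might look like the following; the handler name filewriter is a placeholder, and the property name and template keywords should be verified against the File Writer Handler configuration reference for your release:
gg.handlerlist=filewriter
gg.handler.filewriter.type=filewriter
gg.handler.filewriter.fileNameMappingTemplate=${groupname}_${fullyQualifiedTableName}_${currentTimestamp}.txt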
7.1.7 Configuring Cluster High Availability
Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) doesn't have built-in high availability functionality. You need to use a standard cluster software's high availability capability to provide the high availability functionality.
You can configure a high availability scenario on a cluster so that if the leader instance of GG for DAA on one machine fails, another GG for DAA instance can be started on another machine to resume where the failed instance left off.
If you manually configure your instances to share common GG for DAA and Oracle GoldenGate files using a shared disk architecture, you can create a failover capability. For a cluster installation, these files need to be accessible from all machines, and in the same location.
The configuration files that must be shared are:
- replicat.prm
- Handler properties file.
- Additional properties files required by the specific adapter. This depends on the target handler in use. For example, Kafka would require a producer properties file.
- Additional schema files you've generated. For example, Avro schema files generated in the dirdef directory.
- File Writer Handler generated files on your local file system at a configured path. Also, the File Writer Handler state file in the dirsta directory.
- Any log4j.properties or logback.properties files in use.
Checkpoint files must be shared for the ability to resume processing:
- Your Replicat checkpoint file (*.cpr).
- Your adapter checkpoint file (*.cpj).
7.1.8 Using Identities in Oracle GoldenGate Credential Store
The Oracle GoldenGate credential store manages user IDs and their encrypted passwords (together known as credentials) that are used by Oracle GoldenGate processes to interact with the local database. The credential store eliminates the need to specify user names and clear-text passwords in the Oracle GoldenGate parameter files.
An optional alias can be used in the parameter file instead of the user ID to map to a userid and password pair in the credential store. The credential store is implemented as an auto login wallet within the Oracle Credential Store Framework (CSF). The use of an LDAP directory is not supported for the Oracle GoldenGate credential store. The auto login wallet supports automated restarts of Oracle GoldenGate processes without requiring human intervention to supply the necessary passwords.
In Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA), you specify the alias and domain in the property file not the actual user ID or password. User credentials are maintained in secure wallet storage.
7.1.8.1 Creating a Credential Store
You can create a credential store for your Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) environment.
Run the GGSCI ADD CREDENTIALSTORE command to create a file called cwallet.sso in the dircrd/ subdirectory of your Oracle GoldenGate installation directory (the default).
You can change the location of the credential store (the cwallet.sso file) by specifying the desired location with the CREDENTIALSTORELOCATION parameter in the GLOBALS file.
Note:
Only one credential store can be used for each Oracle GoldenGate instance.
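As a sketch, a typical GGSCI sequence for creating a store, adding a credential, and inspecting the result might look like the following (the user, alias, and domain are placeholders; omitting PASSWORD causes GGSCI to prompt for it without echoing):
ADD CREDENTIALSTORE
ALTER CREDENTIALSTORE ADD USER scott ALIAS scsm2 DOMAIN ggadapters
INFO CREDENTIALSTORE DOMAIN ggadapters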
Parent topic: Using Identities in Oracle GoldenGate Credential Store
7.1.8.2 Adding Users to a Credential Store
After you create a credential store for your Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) environment, you can add users to the store.
Run the GGSCI ALTER CREDENTIALSTORE ADD USER userid PASSWORD password [ALIAS alias] [DOMAIN domain] command to create each user, where:
- userid is the user name. Only one instance of a user name can exist in the credential store unless the ALIAS or DOMAIN option is used.
- password is the user's password. The password is echoed (not obfuscated) when this option is used. If this option is omitted, the command prompts for the password, which is obfuscated as it is typed (recommended because it is more secure).
- alias is an alias for the user name. The alias substitutes for the credential in parameters and commands where a login credential is required. If the ALIAS option is omitted, the alias defaults to the user name.
For example:
ALTER CREDENTIALSTORE ADD USER scott PASSWORD tiger ALIAS scsm2 DOMAIN ggadapters
Parent topic: Using Identities in Oracle GoldenGate Credential Store
7.1.8.3 Configuring Properties to Access the Credential Store
The Oracle GoldenGate Java Adapter properties file requires specific syntax to resolve user name and password entries in the Credential Store at runtime. For resolving a user name, the syntax is the following:
ORACLEWALLETUSERNAME[alias domain_name]
For resolving a password, the syntax is the following:
ORACLEWALLETPASSWORD[alias domain_name]
The following example illustrates how to configure a Credential Store entry with an alias of myalias and a domain of mydomain.
Note:
With HDFS Hive JDBC, the user name and password are encrypted. Oracle Wallet integration only works for configuration properties which contain the string username or password. For example:
gg.handler.hdfs.hiveJdbcUsername=ORACLEWALLETUSERNAME[myalias mydomain]
gg.handler.hdfs.hiveJdbcPassword=ORACLEWALLETPASSWORD[myalias mydomain]
ORACLEWALLETUSERNAME and ORACLEWALLETPASSWORD can also be used in the Extract (similar to Replicat) in the JMS handler. For example:
gg.handler.<name>.user=ORACLEWALLETUSERNAME[JMS_USR JMS_PWD]
gg.handler.<name>.password=ORACLEWALLETPASSWORD[JMS_USR JMS_PWD]
Consider the user name and password entries as accessible values in the Credential Store. Any configuration property resolved in the Java Adapter layer (not accessed in the C user exit layer) can be resolved from the Credential Store. This gives you more flexibility in how you protect sensitive configuration entries.
Parent topic: Using Identities in Oracle GoldenGate Credential Store
7.2 Logging
Logging is essential to troubleshooting Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) integrations with GG for DAA targets.
This topic details how GG for DAA integrations log and the best practices for logging.
Parent topic: Configure
7.2.1 About Replicat Process Logging
Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) integrations leverage the Java Delivery functionality. In this setup, an Oracle GoldenGate Replicat process loads a user exit shared library. This shared library then loads a Java virtual machine to interface with targets providing a Java interface. The flow of data is as follows:
Replicat Process —>User Exit—> Java Layer
It is important that all layers log correctly so that users can review the logs to troubleshoot new installations and integrations. Additionally, if you have a problem that requires contacting Oracle Support, the log files are a key piece of information to be provided to Oracle Support so that the problem can be efficiently resolved.
A running Replicat process creates or appends log files in the GoldenGate_Home/dirrpt directory that adhere to the following naming convention: process_name.rpt. If a problem is encountered when deploying a new Oracle GoldenGate process, this is likely the first log file to examine for problems. The Java layer is critical for integrations with GG for DAA applications.
Parent topic: Logging
7.2.2 About Java Layer Logging
The Oracle GoldenGate for Distributed Applications and Analytics (GG for DAA) product provides flexibility for logging from the Java layer. The recommended best practice is to use Log4j logging to log from the Java layer. Enabling simple Log4j logging requires the setting of two configuration values in the Java Adapters configuration file.
gg.log=log4j
gg.log.level=INFO
These gg.log settings result in a Log4j file being created in the GoldenGate_Home/dirrpt directory that adheres to this naming convention: {GROUPNAME}.log. The supported Log4j log levels are in the following list, in order of increasing logging granularity.
- OFF
- FATAL
- ERROR
- WARN
- INFO
- DEBUG
- TRACE
Selection of a logging level will include all of the coarser logging levels as well (that is, selection of WARN means that log messages of FATAL, ERROR, and WARN will be written to the log file). The Log4j logging can additionally be controlled by separate Log4j properties files. These separate Log4j properties files can be enabled by editing the bootoptions property in the Java Adapter Properties file. These three example Log4j properties files are included with the installation and are included in the classpath:
log4j-default.properties
log4j-debug.properties
log4j-trace.properties
You can modify the bootoptions
in any of the files as follows:
javawriter.bootoptions=-Xmx512m -Xms64m
-Djava.class.path=.:ggjava/ggjava.jar
-Dlog4j.configurationFile=samplelog4j.properties
You can use your own customized Log4j properties file to control logging. The customized Log4j properties file must be available in the Java classpath so that it can be located and loaded by the JVM. The contents of a sample custom Log4j properties file is the following:
# Root logger option
log4j.rootLogger=INFO, file
# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=sample.log
log4j.appender.file.MaxFileSize=1GB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
There are two important requirements when you use a custom Log4j properties file. First, the path to the custom Log4j properties file must be included in the javawriter.bootoptions property. Logging initializes immediately when the JVM is initialized, while the contents of the gg.classpath property are appended to the classloader after the logging is initialized. Second, the classpath entry required to correctly load a properties file must be the directory containing the properties file, without wildcards appended.
Parent topic: Logging
7.2.3 About SQL Statement Logging
Oracle GoldenGate for Distributed Applications and Analytics now includes the ability to log SQL statements for stage and merge targets into a file. This feature is designed to assist with debugging by providing insight into the execution of SQL statements processed by Replicat.
By default, SQL logging is set to the INFO level. The generated log files are stored in the OGG_DIRLOG directory. Each SQL log entry includes a timestamp to make it easier to trace the execution sequence. Note that SHOWSYNTAX is not supported for coordinated Replicat groups.
This functionality is intended specifically for debugging purposes. It is recommended to use it when you need to verify the SQL syntax generated by Replicat as part of troubleshooting or validation.
Parent topic: Logging
7.2.3.1 Configuring SQL Statement Logging
Parameter file:
SHOWSYNTAX _NOCHECKTTY
Property file:
gg.log.sql.filename (Optional)
gg.log.sql.level=info (Optional)
Note:
The configuration in the properties file is optional. If not specified, Replicat uses the default settings, with the log file named after the Replicat process name. For more information about the SHOWSYNTAX parameter, see SHOWSYNTAX in the Reference for Oracle GoldenGate.
Parent topic: About SQL Statement Logging
7.3 Configuring Logging
7.3.1 Oracle GoldenGate Java Adapter Default Logging
7.3.1.1 Default Logging Setup
Logging is enabled by default for Oracle GoldenGate for Distributed Applications and Analytics. The logging implementation is log4j. By default, logging is enabled at the info level.
Parent topic: Oracle GoldenGate Java Adapter Default Logging
7.3.1.2 Log File Name
The log output file is created in the standard report directory. The name of the log file includes the Replicat group name and has an extension of .log.
If the Oracle GoldenGate Replicat process group name is JAVAUE, then the log file name in the report directory is JAVAUE.log.
Parent topic: Oracle GoldenGate Java Adapter Default Logging
7.3.1.3 Changing Logging Level
To change the log4j logging level, add the configuration shown in the following example to the Java Adapter Properties file:
gg.log.level=error
You can set gg.log.level to none, error, warn, info, debug, or trace. The default log level is info. Oracle recommends the debug and trace log levels only for troubleshooting, as these settings can adversely impact performance.
Parent topic: Oracle GoldenGate Java Adapter Default Logging
7.3.2 Recommended Logging Settings
Oracle recommends that you use log4j logging instead of the JDK default logging for the Java user exit. Using log4j provides unified logging for the Java module when running with the Oracle GoldenGate Replicat process.
Parent topic: Configuring Logging
7.3.2.1 Changing to the Recommended Logging Type
To change to the recommended log4j logging implementation, add the configuration shown in the following example to the Java Adapter Properties file.
gg.log=log4j
gg.log.level=info
The gg.log level can be set to none, error, warn, info, debug, or trace. The default log level is info. The debug and trace log levels are only recommended for troubleshooting as these settings can adversely affect performance.
The result is that a log file for the Java module will be created in the dirrpt directory with the following naming convention:
<process name>_<log level>_log4j.log
Therefore, if the Oracle GoldenGate Replicat process is called javaue and the gg.log.level is set to debug, the resulting log file name is:
javaue_debug_log4j.log
Parent topic: Recommended Logging Settings