PostgreSQL: Supported Data Types, Objects, and Operations

Oracle GoldenGate for PostgreSQL supports capture and delivery of initial load and transactional data for supported PostgreSQL database versions.

Unless noted otherwise in this document, Oracle GoldenGate for PostgreSQL supports the mapping, filtering, and transformation of source data, as well as the replication of data derived from other source databases supported by Oracle GoldenGate into PostgreSQL databases.

Supported Databases

The following are supported databases and limitations for Oracle GoldenGate for PostgreSQL:
  • Only user databases are supported for capture and delivery.

  • Oracle GoldenGate does not support capture from archived logs.

  • Delivery is not supported against replica or standby databases.

  • Capture is supported from replica (standby) databases.

  • High Availability configurations are supported. See High Availability Considerations for details.

High Availability Considerations

Oracle GoldenGate supports capturing change data from a PostgreSQL High Availability configuration. This functionality is available only on PostgreSQL database versions 16 and higher. Change data capture can be configured on the primary server or on any read-only standby server in the High Availability setup. To avoid possible data loss and manual intervention in case of failover, Oracle GoldenGate recommends configuring the High Availability setup with synchronous replication.

Refer to Oracle GoldenGate procedures for PostgreSQL High Availability Failover Doc ID 2818379.1 for more details on possible data loss and manual intervention scenarios.
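
For reference, the following is a minimal sketch of a synchronous replication configuration on the primary server; the standby names and connection values are assumptions and must match your own High Availability setup:

  # postgresql.conf on the primary (illustrative values only)
  synchronous_standby_names = 'FIRST 1 (standby1, standby2)'
  synchronous_commit = on

  # primary_conninfo on each standby; application_name must match an entry above
  primary_conninfo = 'host=primary_host port=5432 user=repluser application_name=standby1'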

To configure the CDC Extract from a PostgreSQL High Availability setup, perform the following tasks:
  1. Configure the Extract to capture data from the primary server or from a read-only standby server in the High Availability setup.

  2. When the Extract is configured on one node (primary or standby), a replication slot with the same name must be created on all other nodes in the High Availability setup before data capture starts.

    1. Use the REGISTER EXTRACT command to create the replication slot on the server where the Extract is configured, as shown in the following example:
      REGISTER EXTRACT exte
      2025-01-20T06:58:22Z  INFO    OGG-25355  Successfully created replication slot 'exte_ac6b520cab2871b7' for Extract group 'EXTE' in database 'postgres'.
    2. Immediately after registering the Extract, explicitly create a replication slot with the same name on every other node in the High Availability setup.
    3. To do so, connect to the corresponding database on each of the other nodes and execute the pg_create_logical_replication_slot statement from psql:
      postgres=# select pg_create_logical_replication_slot('exte_ac6b520cab2871b7', 'test_decoding');
       pg_create_logical_replication_slot 
      ------------------------------------
       (exte_ac6b520cab2871b7,0/8169E00)
      (1 row)

      Note:

      Important: When replication slot creation is attempted on a read-only standby server (either with the REGISTER EXTRACT command or with the pg_create_logical_replication_slot statement), the operation completes only after some DML (any DML on any table) is executed on the primary server in the High Availability setup. Alternatively, enable the heartbeat functionality on the primary server; the heartbeat generates the DML activity on the primary that is required to complete the replication slot creation on the standby server.
  3. The ADD TRANDATA and DELETE TRANDATA commands are not supported on read-only standby servers in the PostgreSQL High Availability setup. The REPLICA IDENTITY setting of a table on a read-only standby server is the same as the REPLICA IDENTITY setting of the corresponding table on the primary server.

  4. To use the heartbeat functionality when data capture is configured on a read-only standby server, enable the heartbeat functionality on the primary server.

  5. In the case of a failover, modify the Extract parameter file to connect to the desired node (primary or standby, depending on the requirement) to continue data capture after the failover.

    Example

    Consider a situation where there are three nodes in the PostgreSQL High Availability cluster, with the following connection aliases:

    Node1 (current primary) with corresponding connection alias: node1

    Node2 (standby) with corresponding connection alias: node2

    Node3 (standby) with corresponding connection alias: node3

    Before the failover, if the Extract is configured to capture data from one of the nodes (such as node2), then the Extract parameter file would contain an entry similar to the following:
    USERIDALIAS node2
    In case of a failover, where the primary node (node1) goes down and the standby node (node2) becomes the new primary, there are two possibilities:
    1. To continue data capture from the same node (node2):
      • No changes are required in the Extract parameter file.

      • Restart the Extract.

    2. To capture data from a different node:

      • Change the Extract parameter file to connect to the desired node, such as Node3.
        USERIDALIAS node3
      • Restart the Extract.
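
As an illustration of the failover example above, a minimal Extract parameter file after switching to node3 might look like the following sketch; the group name, trail path, and schema are assumptions for this example only:

      EXTRACT exte
      USERIDALIAS node3
      EXTTRAIL ./dirdat/ea
      TABLE appschema.*;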

Supported PostgreSQL Data Types

The following is a list of the PostgreSQL data types that Oracle GoldenGate supports. The limitations of this support are described in Limitations of Support.

  • array

  • bigint

  • bigserial

  • bit(n)

  • bit varying(n)

  • boolean

  • bytea

  • char (n)

  • cidr

  • citext

  • date

  • decimal

  • double precision

  • Enumerated Types

  • inet

  • integer

  • interval

  • json

  • jsonb

  • macaddr

  • macaddr8

  • money

  • numeric

  • pgvector extension

  • real

  • serial

  • smallint

  • smallserial

  • text

  • time with/without timezone

  • timestamp with/without timezone

  • tsquery

  • tsvector

  • uuid

  • varchar(n)

  • varbit

  • xml

Handling Array Data Type

PostgreSQL supports arrays of various built-in and user-defined types. Starting with Oracle GoldenGate release 23ai, Oracle GoldenGate Extract and Replicat support the PostgreSQL array data type with the following element types:

  • boolean

  • bigint

  • bit

  • char

  • date

  • double precision

  • enum

  • int

  • interval

  • numeric

  • money

  • real

  • smallint

  • timestamp, timestamptz, time, timetz

  • varbit

  • varchar

  • text

  • tsvector

  • tsquery

Both initial load and CDC Extract support these array types.

Note:

Initial load Extract does not support PostgreSQL arrays of the time data type.
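
As an illustration, a source table with array columns of supported element types can be included in the Extract with a standard TABLE entry. The schema, table, and column names below are hypothetical, and the sketch assumes that no array-specific options are required:

  -- hypothetical source table with supported array element types
  CREATE TABLE appschema.sensor_readings (
      id        bigint PRIMARY KEY,
      readings  double precision[],
      labels    text[]
  );

  -- Extract parameter file entry
  TABLE appschema.sensor_readings;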

Limitations of Support

  • If columns of char, varchar, text, or bytea data types are part of a primary or unique key, then the maximum individual lengths for these columns must not exceed 8191 bytes.

  • Extract cannot process records with bytea columns of more than 512 MB in size.

  • Columns of data type CITEXT that are part of the Primary Key are supported up to 8000 bytes in size. CITEXT columns that are greater than 8000 bytes and are part of the Primary Key are not supported.

  • real, double, numeric, decimal: NaN input values are not supported.

  • The following limitations apply to bit/varbit data types:

    • They are supported up to 4k in length. For lengths greater than 4k, the data is truncated and only the lower 4k bits are captured.

    • The source bit(n) column can be applied only onto a character type column on a non-PostgreSQL target and can be applied onto a char type or a bit/varbit column on PostgreSQL target.

  • The following limitations are applicable to both timestamp with time zone and timestamp without time zone:
    • Timestamp data with BC or AD tags is not supported.

    • The timestamp data older than 1883-11-18 12:00:00 is not supported.

    • The timestamp data with more than 4 digits in the YEAR component is not supported.

    • Infinity/-Infinity input strings for timestamp columns are not supported.

  • The following are the limitations when using interval:

    • The capture of mixed-sign interval data from interval type columns is not supported. You can use DBOPTIONS ALLOWNONSTANDARDINTERVALDATA in the Extract parameter file to capture mixed-sign interval data (or any other format of interval data that is not supported by Oracle GoldenGate) as a string rather than as standard interval data. A parameter file sketch is shown after this list.

      The following are a few examples of the data that is written to the trail file when DBOPTIONS ALLOWNONSTANDARDINTERVALDATA is used in the Extract parameter file:

    • +1026-9 +0 +0:0:22.000000 is interpreted as 1026 years, 9 months, 0 days, 0 hours, 0 minutes, 22 seconds.

    • -0-0 -0 -8 is interpreted as 0 years, 0 months, 0 days, -8 hours.

    • +1-3 +0 +3:20 is interpreted as 1 year, 3 months, 0 days, 3 hours, 20 minutes.

  • Replicat: If the source interval data was captured using DBOPTIONS ALLOWNONSTANDARDINTERVALDATA and written to the trail as a string, the corresponding source column can be mapped to either a char or a binary type column on the target.

  • date limitations are:

    • Date data with BC or AD tags is not supported.

    • Infinity/-Infinity input strings for date columns are not supported.

  • Columns of text, json, xml, bytea, char (>8191), varchar (>8191) are treated as LOB columns and have the following limitations:

    • When using GETUPDATEBEFORES, the before image of LOB columns is never logged.

    • When using NOCOMPRESSUPDATES, LOB columns are logged in the after image only if they were modified.

  • The support of range and precision for floating-point numbers depends on the host machine. In general, the precision is accurate to 16 significant digits, but you should review the database documentation to determine the expected approximations. Oracle GoldenGate rounds or truncates values that exceed the supported precision.
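
As referenced in the interval limitation above, the following is a minimal sketch of an Extract parameter file that uses DBOPTIONS ALLOWNONSTANDARDINTERVALDATA; the group, alias, trail, and table names are hypothetical:

  EXTRACT exte
  USERIDALIAS src_pg
  DBOPTIONS ALLOWNONSTANDARDINTERVALDATA
  EXTTRAIL ./dirdat/ea
  TABLE appschema.interval_data;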

Non-Supported PostgreSQL Data Types

Oracle GoldenGate for PostgreSQL does not support the following data types:

Note:

If the Extract parameter file contains a table with unsupported data types, the Extract stops with an error message. To resume replication, remove the table from the Extract parameter file or remove the column with the unsupported data type from the table, as sketched below.
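
For example, the following is a hedged sketch of excluding an affected object in the Extract parameter file; the object and column names are hypothetical, and the alternatives shown use the standard TABLEEXCLUDE parameter and the COLSEXCEPT option of TABLE:

  -- exclude an entire table that contains an unsupported data type
  TABLEEXCLUDE appschema.tab_with_unsupported_type
  TABLE appschema.*;

  -- or keep a table but skip the offending column
  TABLE appschema.orders, COLSEXCEPT (geom_col);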

Supported Objects and Operations for PostgreSQL

  • Oracle GoldenGate for PostgreSQL only supports DML operations (insert, update, and delete). DDL replication is not supported.

  • Oracle GoldenGate for PostgreSQL supports replication of truncate operations beginning with PostgreSQL 11, and requires the GETTRUNCATES parameter in both Extract and Replicat (see the sketch after this list).

  • Case-Sensitive/Insensitive names Usage:
    • Unquoted names are case-insensitive and are implicitly lowercase. For example, CREATE TABLE MixedCaseTable and SELECT * FROM mixedcasetable are equivalent.

    • Quoted table and column names are case-sensitive and need to be listed correctly in Extracts and Replicats and with Oracle GoldenGate commands.

      For example, TABLE appschema."MixedCaseTable" and ADD TRANDATA appschema."MixedCaseTable" would be required to support a case-sensitive table name.
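
The following is a combined, illustrative parameter file fragment for the truncate and case-sensitive naming points above; the schema and table names are hypothetical:

  -- Extract: capture truncate operations and a quoted, case-sensitive table name
  GETTRUNCATES
  TABLE appschema."MixedCaseTable";

  -- Replicat: apply truncate operations on the target
  GETTRUNCATES
  MAP appschema."MixedCaseTable", TARGET appschema."MixedCaseTable";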

Tables, Views, and Materialized Views

Tables to be included for capture and delivery must meet the following requirements and must only include data types listed under Supported PostgreSQL Data Types.

  • Oracle GoldenGate for PostgreSQL supports capture of transactional DML from user tables, and delivery to user tables.

  • Oracle GoldenGate for PostgreSQL, versions 21.14.0.0.0 and later, supports capture and delivery for base partitioned tables.

  • Globalization is supported for object names (table/schema/database/column names) and column data.

  • Capture and delivery for a materialized view is supported.

  • The Extract process supports materialized views; the materialized view must have a unique index and must be refreshed concurrently.

  • The REPLICA IDENTITY of a materialized view must be set to FULL so that CDR, UNIFIED FORMAT RECORD, NOCOMPRESSDELETES, and GETUPDATEBEFORES can be supported. The REPLICA IDENTITY FULL setting is also required for Extract to process the WAL DELETE record for the materialized view; if the REPLICA IDENTITY of the materialized view is not set to FULL, the DELETE record is not processed by Extract.

    To set the Replica Identity to FULL, use the following command:

    add trandata schema.materialized_view, ALLCOLS
  • An UPDATE or PK UPDATE on a materialized view is logged as a DELETE followed by an INSERT. Although this appears as an anomaly in the Extract statistics, the data is in sync when the source and target materialized view data is compared.

  • Because a PK UPDATE results in a DELETE followed by an INSERT, the Extract statistics differ from the database row count. Rely on data comparison tools to check data integrity between the source and target data that contain the materialized view records.

  • Data from the trail cannot be applied to a materialized view by the Replicat process. Instead, you can map the materialized view data from the trail to any other table, as sketched below.
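
For example, the following is a minimal Replicat MAP sketch that remaps captured materialized view data to an ordinary target table; the schema, view, and table names are hypothetical:

  -- Replicat parameter file: map the source materialized view to a regular table
  MAP appschema.sales_summary_mv, TARGET appschema.sales_summary_tab;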

Limitations

  • Oracle GoldenGate for PostgreSQL does not support capture and delivery for views.

  • Oracle GoldenGate for PostgreSQL does not support capture from individual partitions of a partitioned table.

Sequences and Identity Columns

  • Sequences are supported on source and target tables for unidirectional, bidirectional, and multi-directional implementations.

  • Identity columns created using the GENERATED BY DEFAULT AS IDENTITY clause are supported on source and target tables for unidirectional, bidirectional, and multi-directional implementations.

  • Identity columns created using the GENERATED ALWAYS AS IDENTITY clause are not supported in target database tables; the identity property should be removed from target tables or changed to GENERATED BY DEFAULT AS IDENTITY.

  • For bidirectional and multi-directional implementations, define the identity columns and sequences with an INCREMENT BY value equal to the number of servers in the configuration, and with a different MINVALUE for each one.

    For example, the MINVALUE/INCREMENT BY values for a bidirectional, two-database configuration would be as follows:

    Database1, set the MINVALUE at 1 with an INCREMENT BY of 2.

    Database2, set the MINVALUE at 2 with an INCREMENT BY of 2.

    For example, the MINVALUE/INCREMENT BY values for a multi-directional, three-database configuration would be as follows:

    Database1, set the MINVALUE at 1 with an INCREMENT BY of 3.

    Database2, set the MINVALUE at 2 with an INCREMENT BY of 3.

    Database3, set the MINVALUE at 3 with an INCREMENT BY of 3.
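
The following is a minimal SQL sketch of the two-database example above; the sequence name and schema are hypothetical:

  -- Database1: generates 1, 3, 5, ...
  CREATE SEQUENCE appschema.order_id_seq MINVALUE 1 START WITH 1 INCREMENT BY 2;

  -- Database2: generates 2, 4, 6, ...
  CREATE SEQUENCE appschema.order_id_seq MINVALUE 2 START WITH 2 INCREMENT BY 2;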