Mapping Local and Remote Peers Correctly

The names of the tables that underlie the DBFS file systems are generated internally and dynamically.

Continuing with the preceding example, there are:

  • Two nodes (Node 1 and Node 2 in the example).

  • Four stores: two on each node (FS1 and FS2 in the example).

  • Eight underlying tables: two for each store (a table and a ptable). These tables must be identified, specified in Extract TABLE statements, and mapped in Replicat MAP statements.

  1. To identify the table names that back each file system, issue the following query. (Substitute your store names for FS1 and FS2.)

    The output looks like the following examples.

  2. Identify the tables that are locally read-write to Extract by creating the following TABLE statements in the Extract parameter files. (Substitute your pluggable database names, schema names, and table names as applicable.)
  3. Link changes on each remote file system to the corresponding local file system by creating the following MAP statements in the Replicat parameter files. (Substitute your pluggable database, schema, and table names.)

    This mapping captures and replicates local read-write source tables to remote read-only peer tables:

    • File system changes made to FS1 on Node 1 propagate to FS1 on Node 2.

    • File system changes made to FS2 on Node 2 propagate to FS2 on Node 1.

    Changes to the file systems can be made through the DBFS Content API (package DBMS_DBFS_CONTENT) of the database or through dbfs_client mounts and conventional file system tools.
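    As an illustration of the Content API path, the following PL/SQL sketch creates a small file in a store. The mount path (/FS1) and file name are assumptions for this example, and the createFile parameter list is abbreviated; consult the DBMS_DBFS_CONTENT package reference for the full signatures. On each node, write only to the store that is locally read-write.

    ```sql
    -- Hedged sketch: write a file through the DBFS Content API.
    -- Assumes FS1 is the locally read-write store on this node and is
    -- mounted at /FS1; adjust the path for your environment.
    DECLARE
      l_props    dbms_dbfs_content_properties_t;  -- no extra properties
      l_content  BLOB;                            -- locator returned by createFile
    BEGIN
      -- Create an empty file and get back a BLOB locator to fill.
      dbms_dbfs_content.createFile('/FS1/example.txt', l_props, l_content);

      -- Append 11 bytes of content through the returned locator.
      dbms_lob.writeappend(l_content, 11, utl_raw.cast_to_raw('hello world'));
      COMMIT;
    END;
    /
    ```

    Once committed, the change is captured by the local Extract and applied by the remote Replicat to the peer's read-only copy of the store.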

    All changes are propagated in both directions.

    • A user at the virtual root of the DBFS namespace on each system sees identical content.

    • For mutable operations, users use the /local sub-directory on each system.

    • For read operations, users can use either of the /local or /remote sub-directories, depending on whether they want to see local or remote content.
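    One way to check that both nodes see identical content is to run the same listing query on each node and compare the results. This sketch uses the standard DBFS_CONTENT view; the /FS1 and /FS2 path patterns are assumptions based on the example store names, and the result sets should match once replication is current.

    ```sql
    -- Hedged sketch: list the DBFS namespace as seen from this node.
    -- Run identically on Node 1 and Node 2; matching output indicates
    -- the peers are in sync.
    SELECT pathname, pathtype
    FROM   dbfs_content
    WHERE  pathname LIKE '/FS1/%'
       OR  pathname LIKE '/FS2/%'
    ORDER  BY pathname;
    ```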

Example 13-6

select fs.store_name, tb.table_name, tb.ptable_name
from table(dbms_dbfs_sfs.listTables) tb,
table(dbms_dbfs_sfs.listFilesystems) fs
where   fs.schema_name = tb.schema_name
and fs.table_name = tb.table_name
and fs.store_name in ('FS1', 'FS2')
;

Example 13-7 Example output: Node 1 (Your Table Names Will Be Different.)

STORE_NAME     TABLE_NAME     PTABLE_NAME
-------------  -------------  -------------  
FS1            SFS$_FST_100   SFS$_FSTP_100
FS2            SFS$_FST_118   SFS$_FSTP_118

Example 13-8 Example output: Node 2 (Your Table Names Will Be Different.)

STORE_NAME     TABLE_NAME     PTABLE_NAME
-------------  -------------  -------------  
FS1            SFS$_FST_101   SFS$_FSTP_101
FS2            SFS$_FST_119   SFS$_FSTP_119

Example 13-9 Node 1

TABLE [container.]schema.SFS$_FST_100;
TABLE [container.]schema.SFS$_FSTP_100;

Example 13-10 Node 2

TABLE [container.]schema.SFS$_FST_119;
TABLE [container.]schema.SFS$_FSTP_119;

Example 13-11 Node 1

MAP [container.]schema.SFS$_FST_119, TARGET [container.]schema.SFS$_FST_118;
MAP [container.]schema.SFS$_FSTP_119, TARGET [container.]schema.SFS$_FSTP_118;

Example 13-12 Node 2

MAP [container.]schema.SFS$_FST_100, TARGET [container.]schema.SFS$_FST_101;
MAP [container.]schema.SFS$_FSTP_100, TARGET [container.]schema.SFS$_FSTP_101;