2 Oracle Database
This chapter includes the following sections:
2.1 Introduction
Oracle Data Integrator (ODI) seamlessly integrates data in an Oracle Database. All Oracle Data Integrator features are designed to work best with the Oracle Database engine, including reverse-engineering, changed data capture, data quality, and mappings.
2.1.1 Concepts
The Oracle Database concepts map to the Oracle Data Integrator concepts as follows: an Oracle instance corresponds to a data server in Oracle Data Integrator. Within this instance, a schema maps to an Oracle Data Integrator physical schema. A set of related objects within one schema corresponds to a data model, and each table, view, or synonym appears as an ODI datastore, with its attributes (columns) and constraints.
Oracle Data Integrator uses Java Database Connectivity (JDBC) to connect to an Oracle database instance.
2.1.2 Knowledge Modules
Oracle Data Integrator provides the Knowledge Modules (KMs) listed in the following table for handling Oracle data. These KMs use Oracle-specific features. It is also possible to use the generic SQL KMs with the Oracle Database. See Generic SQL for more information.
Table 2-1 Oracle KMs
| Knowledge Module | Description |
|---|---|
| RKM Oracle | Reverse-engineers tables, views, columns, primary keys, non-unique indexes, and foreign keys. |
| JKM Oracle Consistent | Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers. |
| JKM Oracle Consistent (Update Date) | Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers based on a Last Update Date column on the source tables. |
| JKM Oracle Simple | Creates the journalizing infrastructure for simple journalizing on Oracle tables using triggers. |
| JKM Oracle to Oracle Consistent (OGG Online) | Creates and manages the ODI CDC framework infrastructure when using Oracle GoldenGate for CDC. See Oracle GoldenGate for more information. |
| CKM Oracle | Checks data integrity against constraints defined on an Oracle table. |
| LKM File to Oracle (EXTERNAL TABLE) | Loads data from a file to an Oracle staging area using the EXTERNAL TABLE SQL command. |
| LKM File to Oracle (SQLLDR) | Loads data from a file to an Oracle staging area using the SQL*Loader command line utility. |
| LKM MSSQL to Oracle (BCP SQLLDR) | Loads data from a Microsoft SQL Server database to an Oracle staging area using the BCP and SQL*Loader utilities. |
| LKM Oracle BI to Oracle (DBLINK) | Loads data from any Oracle BI physical layer to an Oracle target database using database links. See Oracle Business Intelligence Enterprise Edition for more information. |
| LKM Oracle to Oracle (DBLINK) | Loads data from an Oracle source database to an Oracle staging area database using database links. |
| LKM Oracle to Oracle Pull (DB Link) | Loads data from an Oracle source database to an Oracle staging area database using database links. It creates neither a view in the source database nor a synonym in the staging database. Built-in KM. |
| LKM Oracle to Oracle Push (DB Link) | Loads and integrates data into an Oracle target table using database links. It does not create the synonym in the staging database, and any settings in the IKM are ignored. Built-in KM. |
| LKM Oracle to Oracle (datapump) | Loads data from an Oracle source database to an Oracle staging area database using external tables in the datapump format. |
| LKM SQL to Oracle | Loads data from any ANSI SQL-92 source database to an Oracle staging area. |
| LKM SAP BW to Oracle (SQLLDR) | Loads data from SAP BW systems to an Oracle staging area using the SQL*Loader utility. See the Application Adapters Guide for Oracle Data Integrator for more information. |
| LKM SAP ERP to Oracle (SQLLDR) | Loads data from SAP ERP systems to an Oracle staging area using the SQL*Loader utility. See the Application Adapters Guide for Oracle Data Integrator for more information. |
| IKM Oracle Incremental Update | Integrates data in an Oracle target table in incremental update mode. Supports Flow Control. |
| IKM Oracle Incremental Update (MERGE) | Integrates data in an Oracle target table in incremental update mode, using a MERGE statement. Supports Flow Control. |
| IKM Oracle Incremental Update (PL SQL) | Integrates data in an Oracle target table in incremental update mode using PL/SQL. Supports Flow Control. |
| IKM Oracle Insert | Integrates data into an Oracle target table in append mode. The data is loaded directly into the target table with a single INSERT SQL statement. Built-in KM. |
| IKM Oracle Update | Integrates data into an Oracle target table in incremental update mode. The data is loaded directly into the target table with a single UPDATE SQL statement. Built-in KM. |
| IKM Oracle Merge | Integrates data into an Oracle target table in incremental update mode. The data is loaded directly into the target table with a single MERGE SQL statement. Built-in KM. |
| IKM Oracle Multi-Insert | Integrates data from one source into one or many Oracle target tables in append mode, using a multi-table insert (MTI) statement. This IKM can be used in a single mapping to load multiple targets. Built-in KM. |
| IKM Oracle Multi Table Insert | Integrates data from one source into one or many Oracle target tables in append mode, using a multi-table insert (MTI) statement. Supports Flow Control. |
| IKM Oracle Slowly Changing Dimension | Integrates data in an Oracle target table used as a Type II Slowly Changing Dimension. Supports Flow Control. |
| IKM Oracle Spatial Incremental Update | Integrates data into an Oracle (9i or above) target table in incremental update mode using the MERGE DML statement. This module supports the SDO_GEOMETRY datatype. Supports Flow Control. |
| IKM Oracle to Oracle Control Append (DBLINK) | Integrates data from one Oracle instance into an Oracle target table on another Oracle instance in control append mode. Supports Flow Control. This IKM is typically used for ETL configurations: source and target tables are on different Oracle instances and the mapping's staging area is set to the logical schema of the source tables or a third schema. |
| SKM Oracle | Generates data access Web services for Oracle databases. For information about how to use this SKM, see Generating and Deploying Data Services in Administering Oracle Data Integrator. |
2.2 Installation and Configuration
Make sure you have read the information in this section before you start using the Oracle Knowledge Modules:
2.2.1 System Requirements and Certifications
Before performing any installation, you should read the system requirements and certification documentation to ensure that your environment meets the minimum installation requirements for the products you are installing.
The list of supported platforms and versions is available on the Oracle Technology Network (OTN):
http://www.oracle.com/technetwork/middleware/data-integrator/documentation/index.html.
2.2.2 Technology Specific Requirements
Some of the Knowledge Modules for Oracle use specific features of this database. This section lists the requirements related to these features.
2.2.2.1 Using the SQL*Loader Utility
This section describes the requirements that must be met before using the SQL*Loader utility with Oracle database.
- The Oracle Client and the SQL*Loader utility must be installed on the machine running the Oracle Data Integrator Agent.
- The server names defined in the Topology must match the Oracle TNS names used to access the Oracle instances.
- A specific log file is created by SQL*Loader. We recommend looking at this file in case of error. Control files (CTL), log files (LOG), discard files (DSC), and bad files (BAD) are placed in the work directory defined in the physical schema of the source files.
- Using the DIRECT mode requires that the Oracle Data Integrator Agent runs on the target Oracle server machine. The source file must also be on that machine.
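For orientation, the SQLLDR KMs generate a control file and invoke the sqlldr command line. The sketch below shows only the general shape of such a load, assuming hypothetical table, file, and credential names; it is not the control file a KM actually emits:

```
-- load_customers.ctl: minimal illustrative control file (hypothetical names)
LOAD DATA
INFILE 'customers.dat'
BADFILE 'customers.bad'
DISCARDFILE 'customers.dsc'
APPEND
INTO TABLE ODI_STAGING.C$_CUSTOMERS
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(CUST_ID, CUST_NAME, CITY)
```

```
sqlldr userid=odi_staging/<password> control=load_customers.ctl log=load_customers.log direct=true
```

Note that direct=true corresponds to the DIRECT mode discussed above, which requires the agent and the source file to be on the target Oracle server machine.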
2.2.2.2 Using External Tables
This section describes the requirements that must be met before using external tables in Oracle database.
- The file to be loaded by the External Table command needs to be accessible from the Oracle instance. This file must be located on the file system of the server machine, reachable through a Universal Naming Convention (UNC) path, or stored locally.
- For performance reasons, it is recommended to install the Oracle Data Integrator Agent on the target server machine.
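As an illustration of the mechanism the EXTERNAL TABLE KM relies on, here is a minimal external table definition; the directory object, file, and column names are hypothetical:

```sql
CREATE TABLE ODI_STAGING.C$_CUSTOMERS_EXT (
  CUST_ID   NUMBER,
  CUST_NAME VARCHAR2(100),
  CITY      VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ODI_DAT_DIR  -- Oracle directory object pointing to the file location
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    BADFILE 'customers.bad'
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('customers.dat')
)
REJECT LIMIT UNLIMITED;
```

The directory object must exist beforehand (for example, CREATE DIRECTORY ODI_DAT_DIR AS '/path/to/files';) and point to a location readable by the database server, which is why the file must be accessible from the Oracle instance.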
2.2.2.3 Using Oracle Wallet
Oracle Wallet provides a simple and easy method to manage database credentials across multiple domains.
Attention:
This section applies only to Data Integration Platform Cloud.
This section describes the requirements that are necessary for creating and managing a wallet in an Oracle database environment. This environment provides all the necessary commands and libraries, including the $ORACLE_HOME/oracle_common/bin/mkstore command.
Note:
For more details on creating and managing wallets, refer to the Managing Oracle Wallets section of the Enterprise User Security Administrator's Guide.
- Oracle recommends that you create and manage the wallet in a database environment. Often this task is completed by a database administrator and provided for use to the client. You can also install the Oracle Client Runtime package to provide the necessary commands and libraries to create and manage Oracle Wallet.

  Create a wallet on the client using the command:

  mkstore -wrl <wallet_location> -create

  where wallet_location is the path to the directory where you want to create and store the wallet.

  For example:

  mkstore -wrl /scratch/ewallet -createCredential jdbc:oracle:thin:@kkm00ebs.in.oracle.com:1523:oditest odiUser odi

  where:
  - kkm00ebs.in.oracle.com is the host name
  - 1523 is the port number
  - oditest is the SID
  - odiUser denotes the user name
  - odi denotes the password for the user

  Note: You can store multiple credentials for multiple databases in a single client wallet. You cannot store multiple credentials (for logging into multiple schemas) for the same database in the same wallet. If you have multiple login credentials for the same database, then they must be stored in separate wallets.
- To add database login credentials to an existing client wallet, use the command:

  mkstore -wrl <wallet_location> -createCredential <db_connect_string> <username> <password>

  where:
  - wallet_location is the path to the directory where you have created the wallet
  - db_connect_string must be identical to the connection string that you specify in the URL used in the datasource definition (the part of the string that follows the @). It can be either the short form or the long form of the URL.

    For example:

    jdbc:oracle:thin:@kkm00ebs.in.oracle.com:1523/oditest

    or

    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=myhost-scan)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myservice)))

    For example:

    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<hostname or ipaddress>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<db service>))(security=(ssl_server_cert_dn="<certificate_info>")))
Note:
Oracle supports two types of wallets: password-protected wallets (ewallet.p12) and auto-login wallets (cwallet.sso). A wallet is used for retrieving connection details and establishing a secure connection to an instance by verifying the certificate details associated with that instance.
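As a sketch of how a client (such as the machine running the ODI agent) is typically pointed at a wallet, the following sqlnet.ora entries are commonly used; the wallet directory here is hypothetical:

```
# sqlnet.ora (illustrative): tell the client where the wallet is
# and allow it to override username/password authentication
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /scratch/ewallet))
  )
SQLNET.WALLET_OVERRIDE = TRUE
```

With this in place, a tool can connect using the /@<db_connect_string> form (for example, sqlplus /@oditest), and the credentials stored under that connect string in the wallet are used.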
2.2.3 Connectivity Requirements
This section lists the requirements for connecting to an Oracle Database.
JDBC Driver
Oracle Data Integrator is installed with a default version of the Oracle Type 4 JDBC driver. This driver uses the TCP/IP network layer directly and requires no other installed component or configuration.
It is possible to connect to an Oracle Server through the Oracle JDBC OCI Driver, or even using ODBC. For performance reasons, it is recommended to use the Type 4 driver.
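For reference, the Type 4 (thin) driver uses URLs of the following well-known forms; the host, port, SID, and service names are placeholders:

```
jdbc:oracle:thin:@<host>:<port>:<SID>
jdbc:oracle:thin:@//<host>:<port>/<service_name>
```

For example, jdbc:oracle:thin:@dbhost.example.com:1521:ORCL connects by SID, and jdbc:oracle:thin:@//dbhost.example.com:1521/orcl.example.com connects by service name (both names hypothetical).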
Connection Information
You must ask the Oracle DBA for the following information:
- Network name or IP address of the machine hosting the Oracle Database.
- Listening port of the Oracle listener.
- Name of the Oracle instance (SID) or service name.
- Login and password of an Oracle user.
2.3 Setting up the Topology
Setting up the Topology consists of:
2.3.1 Creating an Oracle Data Server
An Oracle data server corresponds to an Oracle Database Instance connected with a specific Oracle user account. This user will have access to several schemas in this instance, corresponding to the physical schemas in Oracle Data Integrator created under the data server.
2.3.1.1 Creation of the Data Server
Create a data server for the Oracle technology using the standard procedure, as described in Creating a Data Server of Administering Oracle Data Integrator. This section details only the fields required or specific for defining an Oracle data server:
2.3.2 Creating an Oracle Physical Schema
Create an Oracle physical schema using the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator.
Create for this physical schema a logical schema using the standard procedure, as described in Creating a Logical Schema in Administering Oracle Data Integrator and associate it in a given context.
2.4 Setting Up an Integration Project
Setting up a project using the Oracle Database follows the standard procedure. See Creating an Integration Project of the Developing Integration Projects with Oracle Data Integrator.
It is recommended to import the following knowledge modules into your project for getting started with Oracle Database:
- RKM Oracle
- CKM Oracle
- LKM SQL to Oracle
- LKM File to Oracle (SQLLDR)
- LKM File to Oracle (EXTERNAL TABLE)
- IKM Oracle Incremental Update
2.5 Creating and Reverse-Engineering an Oracle Model
This section contains the following topics:
2.5.1 Create an Oracle Model
Create an Oracle Model using the standard procedure, as described in Creating a Model of Developing Integration Projects with Oracle Data Integrator.
2.5.2 Reverse-engineer an Oracle Model
Oracle supports both Standard reverse-engineering - which uses only the abilities of the JDBC driver - and Customized reverse-engineering, which uses an RKM to retrieve the structure of the objects directly from the Oracle dictionary.
In most cases, consider starting with the standard JDBC reverse-engineering. Standard reverse-engineering with Oracle retrieves tables, views, columns, primary keys, and references.
Consider switching to customized reverse-engineering for retrieving more metadata. Oracle customized reverse-engineering retrieves the table and view structures, including columns, primary keys, alternate keys, indexes, check constraints, synonyms, and references.
Standard Reverse-Engineering
To perform a Standard Reverse-Engineering on Oracle use the usual procedure, as described in Reverse-engineering a Model of Developing Integration Projects with Oracle Data Integrator.
Customized Reverse-Engineering
To perform a Customized Reverse-Engineering on Oracle with an RKM, use the usual procedure, as described in Reverse-engineering a Model of Developing Integration Projects with Oracle Data Integrator. This section details only the fields specific to the Oracle technology:
In the Reverse Engineer tab of the Oracle Model, select the KM: RKM Oracle.<project name>.
2.6 Setting up Changed Data Capture
The ODI Oracle Knowledge Modules support the Changed Data Capture feature. See Using Changed Data of Developing Integration Projects with Oracle Data Integrator for details on how to set up journalizing and how to use captured changes.
Oracle Journalizing Knowledge Modules support Simple Journalizing and Consistent Set Journalizing. The Oracle JKMs use either triggers or Oracle Streams to capture data changes on the source tables.
Oracle Data Integrator provides the Knowledge Modules listed in Table 2-2 for journalizing Oracle tables.
Table 2-2 Oracle Journalizing Knowledge Modules
| KM | Notes |
|---|---|
| JKM Oracle Consistent | Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers. |
| JKM Oracle Consistent (Update Date) | Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers based on a Last Update Date column on the source tables. |
| JKM Oracle Simple | Creates the journalizing infrastructure for simple journalizing on Oracle tables using triggers. |
Note that it is also possible to use Oracle GoldenGate to consume changed records from an Oracle database. See Oracle GoldenGate for more information.
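To make the trigger-based approach concrete, the sketch below shows the general pattern of a capture trigger feeding a journal table. This is a simplification, not the actual DDL generated by the JKMs: real ODI journal tables also carry subscriber and consistency-set columns, and all names here are hypothetical.

```sql
-- Simplified sketch of trigger-based journalizing (hypothetical names)
CREATE OR REPLACE TRIGGER T$CUSTOMERS
AFTER INSERT OR UPDATE OR DELETE ON ODI_DEMO.CUSTOMERS
FOR EACH ROW
BEGIN
  IF DELETING THEN
    -- record the primary key of the deleted row
    INSERT INTO ODI_WORK.J$CUSTOMERS (JRN_FLAG, JRN_DATE, CUST_ID)
    VALUES ('D', SYSDATE, :OLD.CUST_ID);
  ELSE
    -- inserts and updates are both recorded for later consumption
    INSERT INTO ODI_WORK.J$CUSTOMERS (JRN_FLAG, JRN_DATE, CUST_ID)
    VALUES ('I', SYSDATE, :NEW.CUST_ID);
  END IF;
END;
/
```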
Using the Streams JKMs
The Streams KMs work with the default values. The following are the recommended settings:
- By default, the AUTO_CONFIGURATION KM option is set to Yes. If set to Yes, the KM provides automatic configuration of the Oracle database and ensures that all prerequisites are met. As this option automatically changes the database initialization parameters, it is not recommended for use in a production environment. You should check the Create Journal step in the Oracle Data Integrator execution log to detect configuration tasks that have not been performed correctly (Warning status).
- By default, the CONFIGURATION_TYPE option is set to Low Activity. Leave this value if your database has low transactional activity. Set this option to Standalone for installation on a standalone database, such as a development database or a laptop. Set it to High Activity if the database is used intensively for transactional processing.
- By default, the STREAMS_OBJECT_GROUP option is set to CDC. The value entered is used to generate object names that can be shared across multiple CDC sets journalized with this JKM. If the value of this option is CDC, the naming rules listed in Table 2-3 are applied. Note that this option can only take upper case ASCII characters and must not exceed 15 characters.
Table 2-3 Naming Rules Example for the CDC Group Name
| Object | Naming Convention |
|---|---|
| Capture Process | ODI_CDC_C |
| Queue | ODI_CDC_Q |
| Queue Table | ODI_CDC_QT |
| Apply Process | ODI_CDC_A |
- VALIDATE enables extra steps to validate the correct use of the KM. This option checks various requirements without configuring anything (for configuration steps, see the AUTO_CONFIGURATION option). When a requirement is not met, an error message is written to the log and the execution of the JKM stops in error. By default, this option is set to Yes in order to provide easier use of this complex KM out of the box.
Using the Update Date JKM
This JKM assumes that a column containing the last update date exists in all the journalized tables. This column name is provided in the UPDATE_DATE_COL_NAME knowledge module option.
2.7 Setting up Data Quality
Oracle Data Integrator provides the CKM Oracle for checking data integrity against constraints defined on an Oracle table. See Flow Control and Static Control in Developing Integration Projects with Oracle Data Integrator.
Oracle Data Integrator provides the Knowledge Module listed in Table 2-4 to perform a check on Oracle. It is also possible to use the generic SQL KMs. See Generic SQL for more information.
Table 2-4 Check Knowledge Modules for Oracle Database
| Recommended KM | Notes |
|---|---|
| CKM Oracle | Uses Oracle's ROWID to identify records. |
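To illustrate the principle, a check KM typically isolates rows that violate a constraint into an error table, using ROWID to point back at the offending records. The sketch below shows the idea for a mandatory-column check; the table, column, and error-table names are hypothetical and much simpler than what CKM Oracle actually generates:

```sql
-- Simplified sketch of a mandatory (NOT NULL) check (hypothetical names)
INSERT INTO ODI_WORK.E$_CUSTOMERS (ERR_ROWID, ERR_MESS, CUST_ID)
SELECT ROWID, 'CUST_NAME is mandatory', CUST_ID
FROM ODI_WORK.I$_CUSTOMERS
WHERE CUST_NAME IS NULL;
```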
2.8 Designing a Mapping
You can use Oracle as a source, staging area or a target of a mapping. It is also possible to create ETL-style mappings based on the Oracle technology.
The KM choice for a mapping or a check determines the abilities and performance of this mapping or check. The recommendations in this section help in the selection of the KM for different situations concerning an Oracle data server.
2.8.1 Loading Data from and to Oracle
Oracle can be used as a source, target or staging area of a mapping. The LKM choice in the Mapping's Loading Knowledge Module tab to load data between Oracle and another type of data server is essential for the performance of a mapping.
2.8.1.1 Loading Data from Oracle
The following KMs implement optimized methods for loading data from an Oracle database to a target or staging area database. In addition to these KMs, you can also use the Generic SQL KMs or the KMs specific to the other technology involved.
| Target or Staging Area Technology | KM | Notes |
|---|---|---|
| Oracle | LKM Oracle to Oracle (DBLINK) | Creates a view on the source server, and synonyms on this view on the target server. |
| Oracle | LKM Oracle to Oracle Push (DB Link) | Creates a view on the source server, but does not create synonyms on this view on the target server. This KM ignores any settings on the IKM. Built-in KM. |
| Oracle | LKM Oracle to Oracle Pull (DB Link) | Creates neither a view on the source server nor synonyms on the target server. Built-in KM. |
| Oracle | LKM Oracle to Oracle (datapump) | Uses external tables in the datapump format. |
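All of the DB Link KMs above depend on a database link between the two instances. As a reminder of the underlying mechanism (not the exact objects a KM creates), a fixed-user link looks like the following, with hypothetical names and credentials:

```sql
-- On the server that initiates the transfer (hypothetical names)
CREATE DATABASE LINK ODI_SRC_LINK
  CONNECT TO odi_user IDENTIFIED BY odi_password
  USING 'SRC_TNS_ALIAS';

-- Remote objects are then reachable with the @link suffix:
SELECT COUNT(*) FROM customers@ODI_SRC_LINK;
```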
2.8.1.2 Loading Data to Oracle
The following KMs implement optimized methods for loading data from a source or staging area into an Oracle database. In addition to these KMs, you can also use the Generic SQL KMs or the KMs specific to the other technology involved.
| Source or Staging Area Technology | KM | Notes |
|---|---|---|
| Oracle | LKM Oracle to Oracle (DBLINK) | Views created on the source server, synonyms on the target. |
| Oracle | LKM Oracle to Oracle Push (DB Link) | Views created on the source server, synonyms not created on the target. Built-in KM. |
| Oracle | LKM Oracle to Oracle Pull (DB Link) | Views not created on the source server, synonyms not created on the target. Built-in KM. |
| SAP BW | LKM SAP BW to Oracle (SQLLDR) | Uses Oracle's bulk loader. File cannot be Staging Area. |
| SAP ERP | LKM SAP ERP to Oracle (SQLLDR) | Uses Oracle's bulk loader. File cannot be Staging Area. |
| Files | LKM File to Oracle (EXTERNAL TABLE) | Loads file data using external tables. |
| Files | LKM File to Oracle (SQLLDR) | Uses Oracle's bulk loader. File cannot be Staging Area. |
| Oracle | LKM Oracle to Oracle (datapump) | Uses external tables in the datapump format. |
| Oracle BI | LKM Oracle BI to Oracle (DBLINK) | Creates synonyms for the target staging table and uses the OBIEE populate command. |
| MSSQL | LKM MSSQL to Oracle (BCP SQLLDR) | Unloads data from SQL Server using BCP, loads data into Oracle using SQL*Loader. |
| All | LKM SQL to Oracle | Faster than the generic LKM (uses statistics). |
2.8.2 Integrating Data in Oracle
The data integration strategies in Oracle are numerous and cover several modes. The IKM choice in the Mapping's Physical diagram determines the performance and possibilities for integrating data.
The following KMs implement optimized methods for integrating data into an Oracle target. In addition to these KMs, you can also use the Generic SQL KMs.
| Mode | KM | Note |
|---|---|---|
| Update | IKM Oracle Incremental Update | Optimized for Oracle. Supports Flow Control. |
| Update | IKM Oracle Update | Optimized for Oracle. Oracle UPDATE statement KM. Built-in KM. |
| Update | IKM Oracle Merge | Optimized for Oracle. Oracle MERGE statement KM. Built-in KM. |
| Update | IKM Oracle Spatial Incremental Update | Supports SDO_GEOMETRY datatypes. Supports Flow Control. |
| Update | IKM Oracle Incremental Update (MERGE) | Recommended for very large volumes of data because of its bulk set-based MERGE feature. Supports Flow Control. |
| Update | IKM Oracle Incremental Update (PL SQL) | Uses PL/SQL and supports LONG and BLOB datatypes in incremental update mode. Supports Flow Control. |
| Specific | IKM Oracle Slowly Changing Dimension | Supports Type 2 Slowly Changing Dimensions. Supports Flow Control. |
| Specific | IKM Oracle Multi Table Insert | Supports multi-table insert statements. Supports Flow Control. |
| Append | IKM Oracle to Oracle Control Append (DBLINK) | Optimized for Oracle using DB*Links. Supports Flow Control. |
| Append | IKM Oracle Insert | Optimized for Oracle. Oracle INSERT statement KM. Built-in KM. Supports Flow Control. |
| Append | IKM Oracle Multi-Insert | Optimized for Oracle. Oracle multi-target INSERT statement KM, applied to each target. Built-in KM. |
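As an illustration of what the MERGE-based strategies boil down to, here is the general shape of an Oracle MERGE from a flow table into a target. The table and column names are hypothetical, and the statement a KM generates is considerably more elaborate:

```sql
MERGE INTO TARGET_SCHEMA.CUSTOMERS T
USING ODI_WORK.I$_CUSTOMERS S        -- hypothetical flow (integration) table
   ON (T.CUST_ID = S.CUST_ID)
WHEN MATCHED THEN
  UPDATE SET T.CUST_NAME = S.CUST_NAME,
             T.CITY      = S.CITY
WHEN NOT MATCHED THEN
  INSERT (CUST_ID, CUST_NAME, CITY)
  VALUES (S.CUST_ID, S.CUST_NAME, S.CITY);
```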
Using Slowly Changing Dimensions
When using slowly changing dimensions, make sure to set the Slowly Changing Dimension value for each column of the target datastore. This value is used by the IKM Oracle Slowly Changing Dimension to identify the Surrogate Key, Natural Key, Overwrite or Insert Column, Current Record Flag, and Start/End Timestamps columns.
Using Multi Table Insert
The IKM Oracle Multi Table Insert is used to integrate data from one source into one or many Oracle target tables with a multi-table insert statement. This IKM must be used in mappings that are sequenced in a Package. This Package must meet the following conditions:
- The first mapping of the Package must have a temporary target and the KM option DEFINE_QUERY set to YES. This first mapping defines the structure of the SELECT clause of the multi-table insert statement (that is, the source flow).
- Subsequent mappings must source from this temporary datastore and have the KM option IS_TARGET_TABLE set to YES.
- The last mapping of the Package must have the KM option EXECUTE set to YES in order to run the multi-table insert statement.
- Do not set Use Temporary Mapping as Derived Table (Sub-Select) to true on any of the mappings.
If large amounts of data are appended, consider setting the KM option OPTIMIZER_HINT to /*+ APPEND */, as in the sketch below.
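For reference, a multi-table insert routes each source row to one or more targets in a single statement. The sketch below shows the Oracle INSERT ALL syntax with hypothetical tables; the OPTIMIZER_HINT value mentioned above appears right after the INSERT keyword:

```sql
INSERT /*+ APPEND */ ALL
  WHEN ORDER_STATUS = 'OPEN'   THEN
    INTO OPEN_ORDERS   (ORDER_ID, AMOUNT) VALUES (ORDER_ID, AMOUNT)
  WHEN ORDER_STATUS = 'CLOSED' THEN
    INTO CLOSED_ORDERS (ORDER_ID, AMOUNT) VALUES (ORDER_ID, AMOUNT)
SELECT ORDER_ID, ORDER_STATUS, AMOUNT
FROM ODI_WORK.C$_ORDERS;
```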
Using Spatial Datatypes
To perform incremental update operations on Oracle Spatial datatypes, you need to declare the SDO_GEOMETRY datatype in the Topology and use the IKM Oracle Spatial Incremental Update. When comparing two columns of the SDO_GEOMETRY datatype, the GEOMETRY_TOLERANCE option is used to define the error margin inside which the geometries are considered to be equal. See the Oracle Spatial User's Guide and Reference for more information.
2.8.3 Designing an ETL-Style Mapping
See Creating a Mapping in Developing Integration Projects with Oracle Data Integrator for generic information on how to design mappings. This section describes how to design an ETL-style mapping where the staging area is an Oracle database or any ANSI SQL-92 compliant database and the target is an Oracle database.
In an ETL-style mapping, ODI processes the data in a staging area, which is different from the target. Oracle Data Integrator provides two ways for loading the data from an Oracle staging area to an Oracle target:
Depending on the KM strategy that is used, flow and static control are supported.
Using a Multi-connection IKM
A multi-connection IKM allows updating a target where the staging area and sources are on different data servers.
Oracle Data Integrator provides the following multi-connection IKM for handling Oracle data: IKM Oracle to Oracle Control Append (DBLINK). You can also use the generic SQL multi-connection IKMs. See Generic SQL for more information.
See Table 2-5 for more information on when to use a multi-connection IKM.
To use a multi-connection IKM in an ETL-style mapping:
1. Create a mapping with the staging area on Oracle or an ANSI SQL-92 compliant technology and the target on Oracle using the standard procedure as described in Creating a Mapping of Developing Integration Projects with Oracle Data Integrator. This section describes only the ETL-style specific steps.
2. Change the staging area for the mapping to the logical schema of the source tables or a third schema. See Configuring Execution Locations of Developing Integration Projects with Oracle Data Integrator for information about how to change the staging area.
3. In the Physical diagram, select an access point. The Property Inspector opens for this object.
4. In the Loading Knowledge Module tab, select an LKM to load from the source(s) to the staging area. See Table 2-5 to determine the LKM you can use.
5. Optionally, modify the KM options.
6. In the Physical diagram, select the Target by clicking its title. The Property Inspector opens for this object. In the Integration Knowledge Module tab, select an ETL multi-connection IKM to load the data from the staging area to the target. See Table 2-5 to determine the IKM you can use.

   Note the following when setting the KM options for IKM Oracle to Oracle Control Append (DBLINK):
   - If large amounts of data are appended, set the KM option OPTIMIZER_HINT to /*+ APPEND */.
   - Set AUTO_CREATE_DB_LINK to true to automatically create the database link on the target schema. If AUTO_CREATE_DB_LINK is set to false (the default), a link with this name must already exist in the target schema.
   - If you set the options FLOW_CONTROL and STATIC_CONTROL to Yes, select a CKM in the Check Knowledge Module tab. If FLOW_CONTROL is set to Yes, the flow table is created on the target.
Using an LKM and a mono-connection IKM
If there is no dedicated multi-connection IKM, use a standard exporting LKM in combination with a standard mono-connection IKM. The exporting LKM is used to load the flow table from the staging area to the target. The mono-connection IKM is used to integrate the data flow into the target table.
Oracle Data Integrator supports any ANSI SQL-92 standard compliant technology as the source of an ETL-style mapping. The staging area and the target are Oracle.
See Table 2-5 for more information on when to use the combination of a standard exporting LKM and a mono-connection IKM.
To use an LKM and a mono-connection IKM in an ETL-style mapping:
Table 2-5 KM Guidelines for ETL-Style Mappings with Oracle Data
| Source | Staging Area | Target | Exporting LKM | IKM | KM Strategy | Comment |
|---|---|---|---|---|---|---|
| ANSI SQL-92 standard compliant | Oracle | Oracle | NA | IKM Oracle to Oracle Control Append (DBLINK) | Multi-connection IKM | Use this KM strategy to perform a control append from an Oracle staging area to a remote Oracle target over a database link. Supports flow and static control. |
| ANSI SQL-92 standard compliant | Oracle or any ANSI SQL-92 standard compliant database | Oracle or any ANSI SQL-92 standard compliant database | NA | IKM SQL to SQL Incremental Update | Multi-connection IKM | Allows an incremental update strategy with no temporary target-side objects. Use this KM if it is not possible to create temporary objects in the target server. The updates are made without temporary objects on the target, directly from source to target. The configuration where the flow table is created on the staging area and not in the target should be used only for small volumes of data. Supports flow and static control. |
| Oracle | Oracle | Oracle | LKM Oracle to Oracle (DBLINK) | IKM Oracle Slowly Changing Dimension | LKM + standard IKM | NA |
| Oracle | Oracle | Oracle | LKM Oracle to Oracle (DBLINK) | IKM Oracle Incremental Update | LKM + standard IKM | NA |
| Oracle | Oracle | Oracle | LKM Oracle to Oracle (DBLINK) | IKM Oracle Incremental Update (MERGE) | LKM + standard IKM | NA |
2.9 Troubleshooting
This section provides information on how to troubleshoot problems that you might encounter when using Oracle Knowledge Modules. It contains the following topics:
2.9.1 Troubleshooting Oracle Database Errors
Errors often appear in Oracle Data Integrator in the following way:
java.sql.SQLException: ORA-01017: invalid username/password; logon denied at ... at ... ...
The java.sql.SQLException code simply indicates that a query was made to the database through the JDBC driver, which has returned an error. This error is frequently a database or driver error, and must be interpreted in this direction.
Only the Oracle error text needs to be taken into account first; search for it in the Oracle documentation. If the message contains an error code specific to Oracle, such as ORA-01017 here, the error can be identified immediately.
If such an error is identified in the execution log, it is necessary to analyze the SQL code sent to the database to find the source of the error. The code is displayed in the Description tab of the erroneous task.
2.9.2 Common Problems and Solutions
This section describes common problems and solutions.
- ORA-12154 TNS:could not resolve service name

  TNS alias resolution. This problem may occur when using the OCI driver, or a KM using database links. Check the configuration of the TNS aliases on the machines.

- ORA-02019 connection description for remote database not found

  You are using a KM that relies on non-existent database links. Check the KM options for creating the database links.

- ORA-00900 invalid SQL statement / ORA-00923 FROM keyword not found where expected

  The code generated by the mapping, or typed in a procedure, is invalid for Oracle. This is usually related to an input error in the mapping, filter, or join. The typical case is a missing quote or an unclosed bracket.

  A frequent cause is also a call to non-SQL syntax, such as calling an Oracle stored procedure using the syntax EXECUTE SCHEMA.PACKAGE.PROC(PARAM1, PARAM2). The valid SQL call for a stored procedure is:

  BEGIN SCHEMA.PACKAGE.PROC(PARAM1, PARAM2); END;

  The syntax EXECUTE SCHEMA.PACKAGE.PROC(PARAM1, PARAM2) is specific to SQL*Plus and does not work with JDBC.

- ORA-00904 invalid column name

  Keying error in a mapping/join/filter: a string which is not a column name is interpreted as a column name, or a column name is misspelled.

  This error may also appear when accessing an error table associated with a datastore whose structure was recently modified. It is necessary to propagate the modification to the error table, or to drop the error tables and let Oracle Data Integrator recreate them in the next execution.

- ORA-00903 invalid table name

  The table used (source or target) does not exist in the Oracle schema. Check the mapping logical/physical schema for the context, and check that the table physically exists in the schema accessed for this context.

- ORA-00972 identifier is too long

  There is a limit on object identifiers in Oracle (usually 30 characters). This error appears when a table created during the execution of the mapping exceeded this limit (see the execution log for more details). Check in the Topology, for the Oracle technology, that the maximum lengths for object names (tables and columns) correspond to your Oracle configuration.

- ORA-01790 expression must have same datatype as corresponding expression

  You are trying to connect two different values that cannot be implicitly converted (in a mapping, a join, and so on). Use explicit conversion functions on these values.
Footnote Legend
Footnote 1: In ODI, you can directly upload connection details from a credential file (auto-login wallet) or a password-protected wallet file (ewallet.p12).