2.6 Examples of Using Oracle Data Pump Export

You can use these common scenario examples to learn how you can create parameter files and use Oracle Data Pump Export to move your data.

2.6.1 Performing a Table-Mode Export

This example shows a table-mode export, specified using the TABLES parameter.

In this example, the Data Pump export command performs a table export of the tables employees and jobs from the human resources (hr) schema.

Because user hr is exporting tables in his own schema, it is not necessary to specify the schema name for the tables. The NOLOGFILE=YES parameter indicates that an Export log file of the operation is not generated.

Example 2-1 Performing a Table-Mode Export

expdp hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=YES
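
If you are exporting tables that belong to a different schema, prefix each table name with the schema. The following is a hedged sketch, assuming the exporting user has sufficient privileges (for example, the DATAPUMP_EXP_FULL_DATABASE role); the dump file name is illustrative:

expdp system TABLES=hr.employees,hr.jobs DUMPFILE=dpump_dir1:table_hr.dmp NOLOGFILE=YES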

2.6.2 Data-Only Unload of Selected Tables and Rows

This example shows data-only unload of selected tables and rows.

The example shows the contents of a parameter file (exp.par), which you can use to perform a data-only unload of all the tables in the human resources (hr) schema, except for the tables countries and regions. Only rows in the employees table with a department_id other than 50 are unloaded, and the rows are ordered by employee_id.

You can issue the following command to execute the exp.par parameter file:

> expdp hr PARFILE=exp.par

This command performs a schema-mode export (the default mode), but the CONTENT parameter effectively limits the export to an unload of just the table data. The DBA previously created the directory object dpump_dir1, which points to the directory on the server where user hr is authorized to read and write export dump files. The dump file dataonly.dmp is created in dpump_dir1.

Example 2-2 Data-Only Unload of Selected Tables and Rows

DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
EXCLUDE=TABLE:"IN ('COUNTRIES', 'REGIONS')"
QUERY=employees:"WHERE department_id !=50 ORDER BY employee_id"
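
The directory object referenced by the DIRECTORY parameter must already exist. The following is a minimal sketch of how a DBA might create dpump_dir1 and grant user hr access to it (the file system path is an assumption, for illustration only):

SQL> CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump_dir1';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;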

2.6.3 Estimating Disk Space Needed in a Table-Mode Export

This example shows how to estimate the disk space needed in a table-mode export.

In this example, the ESTIMATE_ONLY parameter is used to estimate the space that would be consumed in a table-mode export, without actually performing the export operation. Issue the following command to use the BLOCKS method to estimate the number of bytes required to export the data in the following three tables located in the human resources (hr) schema: employees, departments, and locations.

The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.

Example 2-3 Estimating Disk Space Needed in a Table-Mode Export

> expdp hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=YES TABLES=employees, 
departments, locations LOGFILE=estimate.log
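
If current optimizer statistics exist for the tables, you can base the estimate on those statistics instead of database block counts by adding the ESTIMATE=STATISTICS parameter. The following is a hedged variant of the same command:

> expdp hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=YES ESTIMATE=STATISTICS TABLES=employees,departments,locations LOGFILE=estimate.log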

2.6.4 Performing a Schema-Mode Export

This example shows you how to perform a schema-mode export.

The example shows a schema-mode export of the hr schema. In a schema-mode export, only objects belonging to the corresponding schemas are unloaded. Because schema mode is the default mode, it is not necessary to specify the SCHEMAS parameter on the command line, unless you are specifying more than one schema or a schema other than your own.

Example 2-4 Performing a Schema Mode Export

> expdp hr DUMPFILE=dpump_dir1:expschema.dmp LOGFILE=dpump_dir1:expschema.log
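
To export more than one schema, or a schema other than your own, specify the SCHEMAS parameter explicitly. The following is a hedged sketch, assuming a privileged user and that the oe sample schema exists (the file names are illustrative):

> expdp system SCHEMAS=hr,oe DUMPFILE=dpump_dir1:expschemas.dmp LOGFILE=dpump_dir1:expschemas.log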

2.6.5 Performing a Parallel Full Database Export

This example shows you how to perform a parallel full database export.

The example shows a full database export that can use up to 3 parallel processes (worker or parallel query worker processes).

Example 2-5 Parallel Full Export

> expdp hr FULL=YES DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp
FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull

Because this export is a full database export, all data and metadata in the database is exported. Dump files full101.dmp, full201.dmp, full102.dmp, and so on, are created in a round-robin fashion in the directories pointed to by the dpump_dir1 and dpump_dir2 directory objects. For best performance, Oracle recommends that you place the dump files on separate input/output (I/O) channels. Each file is up to 2 gigabytes in size, as necessary. Initially, up to three files are created. If needed, more files are created. The job and the Data Pump control process table are both named expfull. The log file is written to expfull.log in the dpump_dir1 directory.
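
While the export runs, one way to confirm the job name and state from another session (useful later if you need to reattach) is to query the DBA_DATAPUMP_JOBS view. The following is a minimal sketch:

SQL> SELECT owner_name, job_name, state, degree FROM dba_datapump_jobs;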

2.6.6 Using Interactive Mode to Stop and Reattach to a Job

This example shows you how to use interactive mode to stop and reattach to a job.

To start this example, reexecute the parallel full database export described in Performing a Parallel Full Database Export (Example 2-5).

While the export is running, press Ctrl+C. This keyboard command starts the interactive-command interface of Data Pump Export. In the interactive interface, logging to the terminal stops, and the Export prompt is displayed.

After the job status is displayed, you can issue the CONTINUE_CLIENT command to resume logging mode and restart the expfull job.

Export> CONTINUE_CLIENT

A message is displayed that the job has been reopened, and processing status is output to the client.

Example 2-6 Stopping and Reattaching to a Job

At the Export prompt, issue the following command to stop the job:

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state, and the client exits.

To reattach to the job you just stopped, enter the following command:

> expdp hr ATTACH=EXPFULL
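
Once reattached, you are returned to the Export prompt. The following is a hedged sketch of a typical follow-up, using the documented interactive commands to check progress and then resume logging to the client:

Export> STATUS
Export> CONTINUE_CLIENT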

2.6.7 Continuing Table Loads when LOB Data Type Corruptions are Found

This example shows you how to address ORA-1555 errors with an Oracle Data Pump export job.

Suppose you have a table with large object (LOB) columns (BLOB, CLOB, NCLOB, or BFILE) and a large number of rows, so that the export requires several hours to complete. During the export job, Oracle Data Pump encounters an ORA-01555 error ("ORA-01555: snapshot too old: rollback segment number with name "" too small"). Oracle recommends that you do not attempt to export partial rows, because attempting this workaround can cause further corruption. Instead, before exporting the LOB table, Oracle recommends that you use the script in this example to find the corrupted LOB rowids, and then repair the table by either emptying those rows before the export, or excluding those rows from the export.

Example 2-7 Finding LOB Corruption in Large Tables

Use this script to verify LOB corruption, and to find and store the rowids of the corrupted LOBs in a working table, so that you can then exclude those rows when you export the table.

  1. Create a new table called corrupt_lobs for storing the rowids of all corrupted LOB rows:
    SQL> create table corrupt_lobs (corrupt_rowid rowid, err_num number);
        
  2. Describe (DESC) the large table <TABLE_NAME> that contains the LOB column:
    SQL> DESC <TABLE_NAME>
    
    Name         Null?     Type 
    ----------   --------- ------------
    <COL1>       NOT NULL  NUMBER 
    <LOB_COLUMN>           BLOB 

    Run the following PL/SQL block:

    declare
      -- Named exceptions for the errors that a corrupted LOB can raise
      -- (ORA-01578, ORA-01555, ORA-22922).
      error_1578 exception;
      error_1555 exception;
      error_22922 exception;
      pragma exception_init(error_1578,-1578);
      pragma exception_init(error_1555,-1555);
      pragma exception_init(error_22922,-22922);
      n number;
    begin
      -- Scan every LOB in the table with DBMS_LOB.INSTR. Reading a corrupted
      -- LOB raises one of the mapped errors, and the rowid of that row is
      -- recorded in the corrupt_lobs table together with the error number.
      for cursor_lob in (select rowid r, <LOB_COLUMN> from <TABLE_NAME>) loop
        begin
          n := dbms_lob.instr(cursor_lob.<LOB_COLUMN>, hextoraw('889911'));
        exception
          when error_1578 then
            insert into corrupt_lobs values (cursor_lob.r, 1578);
            commit;
          when error_1555 then
            insert into corrupt_lobs values (cursor_lob.r, 1555);
            commit;
          when error_22922 then
            insert into corrupt_lobs values (cursor_lob.r, 22922);
            commit;
        end;
      end loop;
    end;
    /

    The result of this PL/SQL script is that all rowids of the corrupted LOBs will be inserted into the newly created corrupt_lobs table.

  3. Resolve the issue with the corrupted LOB rowids either by emptying the corrupted LOB rows, or by exporting the table without the corrupted LOB rows.

    • Empty the corrupted LOB rows

      With this option, you run a SQL statement to empty the corrupted rows. In this example, the LOB column is a BLOB, so the statement uses EMPTY_BLOB(). For CLOB and NCLOB columns, use EMPTY_CLOB():

      SQL> update <TABLE_NAME> set <LOB_COLUMN> = empty_blob() 
           where rowid in (select corrupt_rowid from corrupt_lobs);
    • Export the table without the corrupted LOB rows

      Use this command with values for your environment. The rowid values to exclude come from the corrupt_lobs table, as shown below:

      $ expdp system/<PASSWORD> DIRECTORY=my_dir DUMPFILE=<dump_name>.dmp LOGFILE=<logfile_name>.log TABLES=<SCHEMA_NAME>.<TABLE_NAME> QUERY=\"WHERE rowid NOT IN \(\'<corrupt_rowid>\'\)\"
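
      A minimal sketch of listing the rowid values recorded by the PL/SQL block, so that you can substitute them for <corrupt_rowid> in the QUERY clause above:

      SQL> SELECT corrupt_rowid FROM corrupt_lobs;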

For more information about identifying and resolving ORA-1555 errors, and distinguishing them from LOB segment issues due to LOB PCTVERSION or RETENTION being low, see the My Oracle Support document "Export Receives The Errors ORA-1555 ORA-22924 ORA-1578 ORA-22922" (Doc ID 787004.1).