
Oracle 12c – Oracle Data Pump enhancements

Transportable Feature:

 

Data Pump now offers a full transportable export option: a full database export performed with TRANSPORTABLE=ALWAYS along with the FULL parameter. This causes two movement methods to be used: non-transportable tablespaces – SYSTEM and SYSAUX, which cannot be transported, have their metadata and data moved using direct path and external tables; and transportable tablespaces – only metadata is placed in the dump file, and the data is moved when the data files are copied to the target database. This new Data Pump feature can be used for moving a non-CDB database into a PDB, or a PDB into another PDB. It can reduce elapsed time because data is not unloaded and reloaded and indexes do not have to be recreated. Full transportable export is ideal for moving a database to a new computer.
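As a sketch, a full transportable export of this kind could be driven by a parameter file along the following lines (the file and directory names here are hypothetical):

```text
# full_transport.par -- hypothetical parameter file for a full transportable export
# Run as a privileged user: expdp system PARFILE=full_transport.par
FULL=YES
TRANSPORTABLE=ALWAYS
DIRECTORY=dmpdir
DUMPFILE=full_tts.dmp
LOGFILE=full_tts.log
```

Only the metadata of the user tablespaces lands in full_tts.dmp; the data files themselves must then be copied to the target and referenced there (for example with TRANSPORT_DATAFILES on import).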

 

Restrictions of full transportable exports:

  1. The DATAPUMP_EXP_FULL_DATABASE role is required.
  2. The default tablespace of the user performing the export cannot be one of the tablespaces being transported.
  3. If tables or columns being transported are encrypted, the encryption password must be provided using the ENCRYPTION_PASSWORD parameter.
  4. If encryption is used on the source and target databases, both operating systems must have the same endianness.
  5. If the source and target platforms have different endianness, you must convert the data files (for example, with RMAN CONVERT) before transporting them.
  6. Full transportable export is not restartable.
  7. All objects must have all of their storage either in the administrative (non-transportable) tablespaces or in user-defined (transportable) tablespaces; an object's data cannot span both kinds of tablespace.

 

Compression feature:

  1. Compression on import – a new option for impdp and the DBMS_DATAPUMP package allows compression to be changed during an import operation. The TRANSFORM parameter of impdp has a new TABLE_COMPRESSION_CLAUSE option: when set to NONE, the table inherits the default compression of its tablespace; when set to a valid compression clause, the table is compressed accordingly on import.
  2. Compression on export – an option for expdp and the DBMS_DATAPUMP package controls the degree of compression written to the dump file. The COMPRESSION parameter can apply to the entire operation, data only, metadata only, or nothing. By default only metadata is compressed.
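For example, the export-side options could be combined in a parameter file such as the hypothetical one below:

```text
# exp_comp.par -- compress both data and metadata in the dump file
# expdp scott/tiger PARFILE=exp_comp.par
TABLES=emp
DIRECTORY=dmpdir
DUMPFILE=emp_comp.dmp
COMPRESSION=ALL
COMPRESSION_ALGORITHM=MEDIUM
```

On the import side, the same data could be stored compressed regardless of how it was exported with TRANSFORM=TABLE_COMPRESSION_CLAUSE:"COMPRESS BASIC" (the exact quoting depends on the operating system shell).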

 

 

Export View as a table:

With the new VIEWS_AS_TABLES parameter, EXPDP can export a view as a table rather than just the view definition. This allows the impdp utility to import the view's data as a table.
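A minimal sketch, assuming a view named emp_details_view in the scott schema:

```text
# view_exp.par -- export the rows of a view as if it were a table
# expdp scott/tiger PARFILE=view_exp.par
VIEWS_AS_TABLES=emp_details_view
DIRECTORY=dmpdir
DUMPFILE=view_as_tab.dmp
```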

 

LOGTIME:

The new LOGTIME parameter allows messages during export and import to be timestamped. The LOGTIME parameter values are:

  1. NONE – no timestamps.
  2. STATUS – timestamps on status messages.
  3. ALL – timestamps on all messages.
  4. LOGFILE – timestamps on messages written to the log file.
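For instance, timestamps on every message could be requested like this (file names are hypothetical):

```text
# exp_logtime.par -- timestamp all client and log file messages
# expdp scott/tiger PARFILE=exp_logtime.par
TABLES=emp
DIRECTORY=dmpdir
DUMPFILE=emp_ts.dmp
LOGTIME=ALL
```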

 

 

Audit Commands:

 

Data Pump operations can now be audited by creating an audit policy on the component action DATAPUMP. For example:

 

SQL> create audit policy audit_data_pump actions component=datapump all;

Audit policy created.

SQL> audit policy audit_data_pump by scott;

Audit succeeded.

SQL>
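Once the policy is enabled, the audit records for subsequent Data Pump jobs can be reviewed in the unified audit trail; a query along these lines should show them (column names per the 12c UNIFIED_AUDIT_TRAIL view):

```sql
-- Review Data Pump audit records written under the policy above
SELECT dbusername, action_name, dp_text_parameters1
  FROM unified_audit_trail
 WHERE audit_type = 'Datapump';
```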

 

No Logging Options:

A new option, DISABLE_ARCHIVE_LOGGING, for the TRANSFORM parameter of impdp and the DBMS_DATAPUMP package allows redo logging to be disabled while loading data into tables and indexes. The DBA should take a full RMAN backup after the import completes. Redo is still generated for CREATE and ALTER statements, with the exception of CREATE INDEX.
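A sketch of such an import, reusing the dump file and directory names from the examples in this article:

```text
# imp_nolog.par -- load without generating redo for the table data
# impdp scott/tiger PARFILE=imp_nolog.par
DIRECTORY=dmpdir
DUMPFILE=scott_emp.dmp
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
```

Note that if the database is in FORCE LOGGING mode, redo is generated regardless of this transform.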

 

Security:

The ENCRYPTION_PWD_PROMPT parameter has been added to expdp and impdp, telling Data Pump to prompt for the encryption password during the operation rather than accepting it on the command line.

SecureFiles LOBs as default – a new option for impdp and the DBMS_DATAPUMP package (TRANSFORM=LOB_STORAGE:SECUREFILE) forces all LOBs to be created as SecureFiles LOBs. By default, LOBs are recreated as they were exported.
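A minimal sketch of the SecureFiles option, with hypothetical file names:

```text
# imp_lob.par -- recreate every LOB column as a SecureFiles LOB
# impdp scott/tiger PARFILE=imp_lob.par
DIRECTORY=dmpdir
DUMPFILE=scott_data.dmp
TRANSFORM=LOB_STORAGE:SECUREFILE
```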

 

 

Larry Catt

OCP

Oracle 12c – Export Data Pump

The Oracle EXPDP utility moves data out of the database. The utility can be executed from the command line, with a parameter file, or in interactive-command mode.

  • Command-line – Enables you to specify most of the export parameters
  • Parameter File – Allows you to specify command line parameters in a PARFILE
  • Interactive-Command – Displays an export prompt and you enter various commands.
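As an illustration of the parameter-file mode, the single-table export performed later in this article could equally be written as a parameter file (the names here mirror that example):

```text
# exp_emp.par -- run with: expdp scott/tiger PARFILE=exp_emp.par
TABLES=emp
DIRECTORY=dmpdir
DUMPFILE=scott_emp.dmp
LOGFILE=scott_emp.log
```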

The EXPDP utility can create up to four types of external files, depending on the parameters passed to it: the dump of the data itself, a log of progress, SQL statements, and complete data files. The four possible output files of an EXPDP operation are:

  • Dump File – Contains data and metadata being moved.
  • Log File – Records log of operation.
  • SQL files – Records output of SQLFILE operation.
  • Data files – Used with transportable operations, where the data files themselves are copied to the target alongside the dump file.

The EXPDP utility can be run in various modes to extract data at different levels of the database: the entire database, a single schema, one or more tables, or various sets of tablespaces. The EXPDP modes are:

  • Full – Entire database is unloaded. Need DATAPUMP_EXP_FULL_DATABASE role. FULL parameter
  • SCHEMA – Default export mode. With the DATAPUMP_EXP_FULL_DATABASE role you can give a list of schemas; without it, only your own. SCHEMAS parameter.
  • TABLE – Only specific tables, partitions and dependent objects are unloaded. TABLES parameter.
  • TABLESPACES – All contents of the listed tablespaces are unloaded. With full export privileges you get all objects; without them, only your own. TABLESPACES parameter.
  • TRANSPORTABLE TABLESPACE – Only metadata of tables and associated objects are unloaded. Data is moved in another operation.  TRANSPORT_TABLESPACES parameter.

Finally, the EXPDP utility accepts the following parameters to define and control its operations. The acceptable parameters are:

  • ABORT_STEP – Stop the job after it is initialized or at the indicated object. Valid values are -1 or N where N is zero or greater. N corresponds to the object’s process order number in the master table.
  • ACCESS_METHOD – Instructs Export to use a particular method to unload data. Valid keyword values are: [AUTOMATIC], DIRECT_PATH and EXTERNAL_TABLE.
  • ATTACH – Attach to an existing job. For example, ATTACH=job_name.
  • CLUSTER – Utilize cluster resources and distribute workers across the Oracle RAC [YES].
  • COMPRESSION – Reduce the size of a dump file. Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.
  • COMPRESSION_ALGORITHM – Specify the compression algorithm that should be used. Valid keyword values are: [BASIC], LOW, MEDIUM and HIGH.
  • CONTENT – Specifies data to unload. Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.
  • DATA_OPTIONS – Data layer option flags. Valid keyword values are: XML_CLOBS.
  • DIRECTORY – Directory object to be used for dump and log files.
  • DUMPFILE – Specify list of destination dump file names [expdat.dmp]. For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
  • ENCRYPTION – Encrypt part or all of a dump file. Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.
  • ENCRYPTION_ALGORITHM – Specify how encryption should be done. Valid keyword values are: [AES128], AES192 and AES256.
  • ENCRYPTION_MODE – Method of generating encryption key. Valid keyword values are: DUAL, PASSWORD and [TRANSPARENT].
  • ENCRYPTION_PASSWORD – Password key for creating encrypted data within a dump file.
  • ENCRYPTION_PWD_PROMPT – Specifies whether to prompt for the encryption password [NO]. Terminal echo will be suppressed while standard input is read.
  • ESTIMATE – Calculate job estimates. Valid keyword values are: [BLOCKS] and STATISTICS.
  • ESTIMATE_ONLY – Calculate job estimates without performing the export [NO].
  • EXCLUDE – Exclude specific object types. For example, EXCLUDE=SCHEMA:"='HR'".
  • FILESIZE – Specify the size of each dump file in units of bytes.
  • FLASHBACK_SCN – SCN used to reset session snapshot.
  • FLASHBACK_TIME – Time used to find the closest corresponding SCN value.
  • FULL – Export entire database [NO].
  • HELP – Display Help messages [NO].
  • INCLUDE – Include specific object types. For example, INCLUDE=TABLE_DATA.
  • JOB_NAME – Name of export job to create.
  • KEEP_MASTER – Retain the master table after an export job that completes successfully [NO].
  • LOGFILE – Specify log file name [export.log].
  • LOGTIME – Specifies that messages displayed during export operations be timestamped. Valid keyword values are: ALL, [NONE], LOGFILE and STATUS.
  • METRICS – Report additional job information to the export log file [NO].
  • NETWORK_LINK – Name of remote database link to the source system.
  • NOLOGFILE – Do not write log file [NO].
  • PARALLEL – Change the number of active workers for current job.
  • PARFILE – Specify parameter file name.
  • QUERY – Predicate clause used to export a subset of a table. For example, QUERY=employees:"WHERE department_id > 10".
  • REMAP_DATA – Specify a data conversion function. For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
  • REUSE_DUMPFILES – Overwrite destination dump file if it exists [NO].
  • SAMPLE – Percentage of data to be exported.
  • SCHEMAS – List of schemas to export [login schema].
  • SERVICE_NAME – Name of an active Service and associated resource group to constrain Oracle RAC resources.
  • SOURCE_EDITION – Edition to be used for extracting metadata.
  • STATUS – Frequency (secs) job status is to be monitored where the default [0] will show new status when available.
  • TABLES – Identifies a list of tables to export. For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.
  • TABLESPACES – Identifies a list of tablespaces to export.
  • TRANSPORTABLE – Specify whether transportable method can be used. Valid keyword values are: ALWAYS and [NEVER].
  • TRANSPORT_FULL_CHECK – Verify storage segments of all tables [NO].
  • TRANSPORT_TABLESPACES – List of tablespaces from which metadata will be unloaded.
  • VERSION – Version of objects to export. Valid keyword values are: [COMPATIBLE], LATEST or any valid database version.
  • VIEWS_AS_TABLES – Identifies one or more views to be exported as tables. For example, VIEWS_AS_TABLES=HR.EMP_DETAILS_VIEW.
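Several of these parameters are easier to manage in a parameter file, since values such as QUERY predicates then need no operating-system escaping. A hypothetical combination:

```text
# scott_subset.par -- schema export with a filter and an exclusion
# expdp scott/tiger PARFILE=scott_subset.par
SCHEMAS=scott
DIRECTORY=dmpdir
DUMPFILE=scott_subset.dmp
EXCLUDE=STATISTICS
QUERY=emp:"WHERE deptno > 10"
```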

—————–

 

The following commands are valid while in interactive mode.

Note: abbreviations are allowed.

 

  • ADD_FILE – Add dumpfile to dumpfile set.
  • CONTINUE_CLIENT – Return to logging mode. Job will be restarted if idle.
  • EXIT_CLIENT – Quit client session and leave job running.
  • FILESIZE – Default filesize (bytes) for subsequent ADD_FILE commands.
  • HELP – Summarize interactive commands.
  • KILL_JOB – Detach and delete job.
  • PARALLEL – Change the number of active workers for current job.
  • REUSE_DUMPFILES – Overwrite destination dump file if it exists [NO].
  • START_JOB – Start or resume current job. Valid keyword values are: SKIP_CURRENT.
  • STATUS – Frequency (secs) job status is to be monitored where the default [0] will show new status when available.
  • STOP_JOB – Orderly shutdown of job execution and exits the client. Valid keyword values are: IMMEDIATE.

 

 

 

The following is a simple example of the EXPDP utility in exporting a single table.

 

  1. Logon to your server as the oracle software owner.

 

[larry@linux2 ~]$ su – oracle

Password:

Last login: Tue Jan 10 09:26:15 EST 2017 on pts/2

Enter database to use:

1 – ORCL

2 – CDB1

Option >

1

[oracle@linux2 ~]$

 

  2. Log on to your database as the user scott (default password tiger) and create a directory object pointing at /home/oracle.

 

[oracle@linux2 ~]$ sqlplus scott/tiger

 

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 11 08:56:36 2017

 

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

 

Last Successful login time: Fri Jan 06 2017 11:01:31 -05:00

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

 

SQL> create directory dmpdir as '/home/oracle';

 

Directory created.

 

SQL>

 

 

  3. Log out of SQL*Plus and change directory to the oracle software owner's home directory.

 

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

[oracle@linux2 ~]$ cd

[oracle@linux2 ~]$

 

  4. Execute the command below to export the table scott.emp to the file scott_emp.dmp.

 

expdp scott/tiger DUMPFILE=scott_emp.dmp DIRECTORY=dmpdir tables=emp

 


[oracle@linux2 ~]$ expdp scott/tiger DUMPFILE=scott_emp.dmp DIRECTORY=dmpdir tables=emp

 

Export: Release 12.1.0.2.0 – Production on Wed Jan 11 09:04:43 2017

 

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

 

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

Starting “SCOTT”.”SYS_EXPORT_TABLE_01″:  scott/******** DUMPFILE=scott_emp.dmp DIRECTORY=dmpdir tables=emp

Estimate in progress using BLOCKS method…

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 64 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type TABLE_EXPORT/TABLE/INDEX/BITMAP_INDEX/INDEX

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS

Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER

. . exported “SCOTT”.”EMP”                               8.820 KB      15 rows

Master table “SCOTT”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded

******************************************************************************

Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:

  /home/oracle/scott_emp.dmp

Job “SCOTT”.”SYS_EXPORT_TABLE_01″ successfully completed at Wed Jan 11 09:06:01 2017 elapsed 0 00:01:16

 

[oracle@linux2 ~]$

 

  5. The file scott_emp.dmp now contains all the data and metadata from the scott.emp table.

 

[oracle@linux2 ~]$ ls -l scott*

-rw-r-----. 1 oracle dba 204800 Jan 11 09:06 scott_emp.dmp

[oracle@linux2 ~]$

 

  6. This completes a simple example of the EXPDP utility.

 

 

Larry Catt

OCP

Oracle 12c – Network-based Data Pump operations

The NETWORK_LINK parameter is used with impdp to perform an import directly over the network. Data is moved using SQL: a SELECT statement reads the data from the remote database over the database link, and an INSERT places it into the local database. No dump files are involved. Current-user database links are not supported; only public, fixed-user, or connected-user database links can be used.

 

Restrictions of Network Link:

  1. Tables with LONG or LONG RAW columns that reside in administrative tablespaces are not supported.
  2. Auditing cannot be enabled on administrative tablespaces when performing a full transportable export.
  3. The source and target databases cannot be more than two major versions apart.

 

This article will give a basic demonstration of Network-based Data Pump operations by extracting the scott.emp table from a database ORCL and importing it into a pluggable database PDB1.

 

  1. Log on to your pluggable database server as the oracle software owner.

 

[larry@linux2 ~]$ su – oracle

Password:

Last login: Wed Jan 11 14:15:49 EST 2016 on pts/1

Enter database to use:

1 – ORCL

2 – CDB1

Option >

2

[oracle@linux2 ~]$

 

  2. Verify that you can reach the remote database ORCL via the network with the tnsping command.

 

[oracle@linux2 ~]$ tnsping orcl

TNS Ping Utility for Linux: Version 12.1.0.2.0 – Production on 11-JAN-2016 14:43:20

Copyright (c) 1997, 2014, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias

Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.30.15.98)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))

OK (10 msec)

[oracle@linux2 ~]$

 

  3. Log on to the local database as sysdba.

 

[oracle@linux2 ~]$ sqlplus / as sysdba@pdb1

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 11 15:12:43 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

 

SQL>

 

  4. Change the container to pdb1.

 

SQL> alter session set container=pdb1;

Session altered.

SQL>

 

  5. Create the user pdb1_user with the dba and create session privileges.

 

SQL> create user pdb1_user identified by password;

User created.

SQL> grant create session to pdb1_user;

Grant succeeded.

SQL> grant dba to pdb1_user;

Grant succeeded.

SQL>

 

  6. Connect to pdb1 as pdb1_user.

 

SQL> connect pdb1_user/password@pdb1;

Connected.

SQL>

 

  7. Create a public database link to ORCL.

 

create public database link orcl_db

connect to scott identified by tiger

using 'orcl';

 

SQL> create public database link orcl_db

connect to scott identified by tiger

using 'orcl';

SQL>   2    3

 

Database link created.

 

SQL>

 

  8. Create a directory object dmpdir for any necessary Data Pump files. In this example the log file will be placed in this directory object.

 

SQL> create directory dmpdir as '/home/oracle';

Directory created.

SQL>

 

  9. Test the database link to the database ORCL by querying the table scott.emp.

 

SQL> select * from scott.emp@orcl_db fetch first 2 rows only;

     EMPNO ENAME                          JOB                                MGR

———- —————————— ————————— ———-

    DEPTNO HIREDATE         SAL       COMM

———- ——— ———- ———-

      1111 DOE                            SALES                                1

        10 01-JAN-17       1000       1000

      7369 SMITH                          CLERK                             7902

        20 17-DEC-80        800

SQL>

 

  10. Now exit out of pdb1.

 

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

[oracle@linux2 ~]$

 

  11. Due to new security restrictions you may have to raise the privileges of the user scott in the source database. Log on to the database orcl as sys and grant scott the dba privilege.

 

[oracle@linux2 ~]$ sqlplus sys/password@orcl as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jan 12 07:58:33 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

SQL> grant dba to scott;

Grant succeeded.

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

[oracle@linux2 ~]$

 

  12. Now, from the command prompt, execute the impdp utility.

 

impdp pdb1_user/password@pdb1 directory=dmpdir LOGFILE=scott_emp_transfer.log network_link=ORCL_DB remap_table=scott.emp:emp

 

[oracle@linux2 ~]$ impdp pdb1_user/password@pdb1 directory=dmpdir LOGFILE=scott_emp_transfer.log network_link=ORCL_DB remap_table=scott.emp:emp

Import: Release 12.1.0.2.0 – Production on Thu Jan 12 07:59:02 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

FLASHBACK automatically enabled to preserve database integrity.

Starting “PDB1_USER”.”SYS_IMPORT_SCHEMA_01″:  pdb1_user/********@pdb1 directory=dmpdir LOGFILE=scott_emp_transfer.log network_link=ORCL_DB remap_table=scott.emp:emp

Estimate in progress using BLOCKS method…

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 448 KB

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

. . imported “SCOTT”.”BONUS”                                 14 rows

. . imported “SCOTT”.”DEPT”                                   4 rows

. . imported “SCOTT”.”EMP”                                   15 rows

. . imported “SCOTT”.”EMP2″                                  15 rows

. . imported “SCOTT”.”FB_TEST”                                1 rows

. . imported “SCOTT”.”SALGRADE”                               5 rows

. . imported “SCOTT”.”TEST”                                   3 rows

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/INDEX/BITMAP_INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type SCHEMA_EXPORT/STATISTICS/MARKER

Job “PDB1_USER”.”SYS_IMPORT_SCHEMA_01″ successfully completed at Thu Jan 12 08:16:20 2016 elapsed 0 00:17:15

 

[oracle@linux2 ~]$

 

  13. Now log on to pdb1 as the user pdb1_user and query the newly imported emp table.

 

sqlplus pdb1_user/password@pdb1

select * from scott.emp fetch first 2 rows only;

[oracle@linux2 ~]$ sqlplus pdb1_user/password@pdb1

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jan 12 08:49:13 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Jan 12 2016 07:59:02 -05:00

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

 

SQL> select * from scott.emp fetch first 2 rows only;

 

     EMPNO ENAME      JOB              MGR     DEPTNO HIREDATE         SAL

———- ———- ——— ———- ———- ——— ———-

      COMM

———-

      1111 DOE        SALES              1         10 01-JAN-17       1000

      1000

      7369 SMITH      CLERK           7902         20 17-DEC-80        800

SQL>

 

  14. This completes using network-based Data Pump.

 

Larry Catt

OCP

Oracle 12c – Import data pump

The IMPDP utility is used to read in the contents of an EXPDP dump file. It can be run in three modes:

  • Command-line – Enables you to specify most of the import parameters
  • Parameter File – Allows you to specify command line parameters in a PARFILE
  • Interactive-Command – Displays an import prompt and you enter various commands.

IMPDP utility can accept the following parameters during execution:

  • ABORT_STEP – Stop the job after it is initialized or at the indicated object. Valid values are -1 or N where N is zero or greater. N corresponds to the object’s process order number in the master table.
  • ACCESS_METHOD – Instructs Import to use a particular method to load data. Valid keyword values are: [AUTOMATIC], CONVENTIONAL, DIRECT_PATH and EXTERNAL_TABLE.
  • ATTACH – Attach to an existing job. For example, ATTACH=job_name.
  • CLUSTER – Utilize cluster resources and distribute workers across the Oracle RAC [YES].
  • CONTENT – Specifies data to load. Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.
  • DATA_OPTIONS – Data layer option flags. Valid keywords are: DISABLE_APPEND_HINT and SKIP_CONSTRAINT_ERRORS.
  • DIRECTORY – Directory object to be used for dump, log and SQL files.
  • DUMPFILE – List of dump files to import from [expdat.dmp]. For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
  • ENCRYPTION_PASSWORD – Password key for accessing encrypted data within a dump file. Not valid for network import jobs.
  • ENCRYPTION_PWD_PROMPT – Specifies whether to prompt for the encryption password [NO]. Terminal echo will be suppressed while standard input is read.
  • ESTIMATE – Calculate network job estimates. Valid keywords are: [BLOCKS] and STATISTICS.
  • EXCLUDE – Exclude specific object types. For example, EXCLUDE=SCHEMA:"='HR'".
  • FLASHBACK_SCN – SCN used to reset session snapshot.
  • FLASHBACK_TIME – Time used to find the closest corresponding SCN value.
  • FULL – Import everything from source [YES].
  • HELP – Display help messages [NO].
  • INCLUDE – Include specific object types. For example, INCLUDE=TABLE_DATA.
  • JOB_NAME – Name of import job to create.
  • KEEP_MASTER – Retain the master table after an import job that completes successfully [NO].
  • LOGFILE – Log file name [import.log].
  • LOGTIME – Specifies that messages displayed during import operations be timestamped. Valid keyword values are: ALL, [NONE], LOGFILE and STATUS.
  • MASTER_ONLY – Import just the master table and then stop the job [NO].
  • METRICS – Report additional job information to the import log file [NO].
  • NETWORK_LINK – Name of remote database link to the source system.
  • NOLOGFILE – Do not write log file [NO].
  • PARALLEL – Change the number of active workers for current job.
  • PARFILE – Specify parameter file.
  • PARTITION_OPTIONS – Specify how partitions should be transformed. Valid keywords are: DEPARTITION, MERGE and [NONE].
  • QUERY – Predicate clause used to import a subset of a table. For example, QUERY=employees:"WHERE department_id > 10".
  • REMAP_DATA – Specify a data conversion function. For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
  • REMAP_DATAFILE – Redefine data file references in all DDL statements.
  • REMAP_SCHEMA – Objects from one schema are loaded into another schema.
  • REMAP_TABLE – Table names are remapped to another table. For example, REMAP_TABLE=HR.EMPLOYEES:EMPS.
  • REMAP_TABLESPACE – Tablespace objects are remapped to another tablespace.
  • REUSE_DATAFILES – Tablespace will be initialized if it already exists [NO].
  • SCHEMAS – List of schemas to import.
  • SERVICE_NAME – Name of an active Service and associated resource group to constrain Oracle RAC resources.
  • SKIP_UNUSABLE_INDEXES – Skip indexes that were set to the Index Unusable state.
  • SOURCE_EDITION – Edition to be used for extracting metadata.
  • SQLFILE – Write all the SQL DDL to a specified file.
  • STATUS – Frequency (secs) job status is to be monitored where the default [0] will show new status when available.
  • STREAMS_CONFIGURATION – Enable the loading of Streams metadata [YES].
  • TABLE_EXISTS_ACTION – Action to take if imported object already exists. Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.
  • TABLES – Identifies a list of tables to import. For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.
  • TABLESPACES – Identifies a list of tablespaces to import.
  • TARGET_EDITION – Edition to be used for loading metadata.
  • TRANSFORM – Metadata transform to apply to applicable objects. Valid keywords are: DISABLE_ARCHIVE_LOGGING, INMEMORY, INMEMORY_CLAUSE,  LOB_STORAGE, OID, PCTSPACE, SEGMENT_ATTRIBUTES, STORAGE, and TABLE_COMPRESSION_CLAUSE.
  • TRANSPORTABLE – Options for choosing transportable data movement. Valid keywords are: ALWAYS and [NEVER].  Only valid in NETWORK_LINK mode import operations.
  • TRANSPORT_DATAFILES – List of data files to be imported by transportable mode.
  • TRANSPORT_FULL_CHECK – Verify storage segments of all tables [NO]. Only valid in NETWORK_LINK mode import operations.
  • TRANSPORT_TABLESPACES – List of tablespaces from which metadata will be loaded. Only valid in NETWORK_LINK mode import operations.
  • VERSION – Version of objects to import. Valid keywords are: [COMPATIBLE], LATEST, or any valid database version.  Only valid for NETWORK_LINK and SQLFILE.
  • VIEWS_AS_TABLES – Identifies one or more views to be imported as tables. For example, VIEWS_AS_TABLES=HR.EMP_DETAILS_VIEW.  Note that in network import mode, a table name may be appended  to the view name.
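As with export, several import parameters can be combined in a parameter file; the one below is a hypothetical sketch reusing the dump file from earlier examples:

```text
# imp_remap.par -- impdp system PARFILE=imp_remap.par
DIRECTORY=dmpdir
DUMPFILE=scott_emp.dmp
REMAP_SCHEMA=scott:scott_copy
TABLE_EXISTS_ACTION=TRUNCATE
LOGTIME=ALL
```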

 

——————————————————————————

 

The following commands are valid while in interactive mode.

Note: abbreviations are allowed.

 

  • CONTINUE_CLIENT – Return to logging mode. Job will be restarted if idle.
  • EXIT_CLIENT – Quit client session and leave job running.
  • HELP – Summarize interactive commands.
  • KILL_JOB – Detach and delete job.
  • PARALLEL – Change the number of active workers for current job.
  • START_JOB – Start or resume current job. Valid keywords are: SKIP_CURRENT.
  • STATUS – Frequency (secs) job status is to be monitored where the default [0] will show new status when available.
  • STOP_JOB – Orderly shutdown of job execution and exits the client. Valid keywords are: IMMEDIATE.

 

 

This article will give a brief example of IMPDP utility by importing an export of table scott.emp into a new table scott.emp2.

 

 

 

  1. Logon to your server as the oracle software owner.

 

[larry@linux2 ~]$ su – oracle

Password:

Last login: Tue Jan 10 09:26:15 EST 2016 on pts/2

Enter database to use:

1 – ORCL

2 – CDB1

Option >

1

[oracle@linux2 ~]$

 

  2. Log on to your database as the user scott (default password tiger).

 

[oracle@linux2 ~]$ sqlplus scott/tiger

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 11 08:56:36 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Fri Jan 06 2016 11:01:31 -05:00

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

 

SQL> create directory dmpdir as '/home/oracle';

Directory created.

SQL>

 

 

  3. Log out of SQL*Plus and change directory to the oracle software owner's home directory.

 

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

[oracle@linux2 ~]$ cd

[oracle@linux2 ~]$

 

  4. Execute the command below to import the table scott.emp from the file scott_emp.dmp into the new table emp2. NOTE: you may see errors about the names of constraints from the original table, but the data will be imported.

 

impdp scott/tiger DUMPFILE=scott_emp.dmp DIRECTORY=dmpdir REMAP_TABLE=scott.emp:emp2

 

[oracle@linux2 ~]$ impdp scott/tiger DUMPFILE=scott_emp.dmp DIRECTORY=dmpdir REMAP_TABLE=scott.emp:emp2

Import: Release 12.1.0.2.0 – Production on Wed Jan 11 10:03:13 2016

 

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

 

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

Master table “SCOTT”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded

Starting “SCOTT”.”SYS_IMPORT_FULL_01″:  scott/******** DUMPFILE=scott_emp.dmp DIRECTORY=dmpdir REMAP_TABLE=scott.emp:emp2

Processing object type TABLE_EXPORT/TABLE/TABLE

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

. . imported “SCOTT”.”EMP2″                              8.820 KB      15 rows

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

ORA-31684: Object type INDEX:”SCOTT”.”BT_SCOTT_EMP_JOB” already exists

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

ORA-31684: Object type CONSTRAINT:”SCOTT”.”PK_EMP” already exists

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

ORA-31684: Object type REF_CONSTRAINT:”SCOTT”.”FK_DEPTNO” already exists

Processing object type TABLE_EXPORT/TABLE/INDEX/BITMAP_INDEX/INDEX

ORA-31684: Object type INDEX:”SCOTT”.”BM_SCOTT_EMP_JOB” already exists

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS

Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER

Job “SCOTT”.”SYS_IMPORT_FULL_01″ completed with 4 error(s) at Wed Jan 11 10:04:08 2016 elapsed 0 00:00:53

 

[oracle@linux2 ~]$

 

  1. Now logon to SQLPLUS as the user scott and describe your new table emp2.

 

[oracle@linux2 ~]$ sqlplus scott/tiger

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 11 10:06:05 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 11 2016 10:03:13 -05:00

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

 

SQL> desc emp2

 Name                                      Null?    Type

 —————————————– ——– —————————-

 EMPNO                                              NUMBER(4)

 ENAME                                              VARCHAR2(10)

 JOB                                                VARCHAR2(9)

 MGR                                                NUMBER(4)

 DEPTNO                                             NUMBER(2)

 HIREDATE                                           DATE

 SAL                                                NUMBER(7,2)

 COMM                                               NUMBER(7,2)

 

SQL> select count(*) from emp2;

 

  COUNT(*)

———-

        15

 

SQL>

 

  1. This completes a simple example of the IMPDP utility in Oracle 12c.

 

Larry Catt

OCP

Oracle 12c – Monitor a Data Pump Job

You can use the expdp and impdp utilities to connect to a currently running job and display its progress in real time.  In interactive-command mode you can request job status.  You can also see status and information through the Data Pump views: DBA_DATAPUMP_JOBS, which shows all active Data Pump jobs and the master tables associated with active jobs, and DBA_DATAPUMP_SESSIONS, which identifies the user sessions attached to Data Pump jobs and is helpful in determining why a job has stopped.  Additionally, V$SESSION_LONGOPS shows Data Pump jobs that are transferring data and maintains an entry giving current progress.  V$SESSION_LONGOPS gives the estimated transfer size and the size transferred so far.

 

The following commands are applicable when using impdp or expdp in Interactive mode:

ADD_FILE – Add additional dump files.

CONTINUE_CLIENT – Exit interactive mode and enter logging mode.

EXIT_CLIENT – Stop the import or export client session, but leave the job running.

KILL_JOB – Detach all currently attached client sessions and terminate the current job.

PARALLEL – Increase or decrease the number of active worker processes for the current job.

START_JOB – Restart a stopped job to which you are attached.

STATUS – Display detailed status for the current job and/or set the status interval.

STOP_JOB – Stop the current job for later restart.

 

This article will give an example of using expdp utility in interactive mode to attach to a current running job to view progress.

 

 

  1. Open two logon sessions to your server as the oracle software owner.

 

[larry@linux2 ~]$ su – oracle

Password:

Last login: Wed Jan 11 08:17:10 EST 2016 on pts/1

Enter database to use:

1 – ORCL

2 – CDB1

Option >

1

[oracle@linux2 ~]$

 

  1. Logon to your database as SYSDBA and create the directory dmpdir.

 

[oracle@linux2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 11 14:20:29 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

SQL> create directory dmpdir as '/home/oracle';

Directory created.

SQL>

 

 

  1. Log out of SQLPLUS and change directory to the oracle software owner's home directory.
 
   

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

[oracle@linux2 ~]$ cd

[oracle@linux2 ~]$

 

  1. Execute the command:

 

expdp \"/ as sysdba\"  dumpfile=orcl.dmp directory=dmpdir full=y job_name=orcl_full

 

[oracle@linux2 ~]$ expdp \"/ as sysdba\"  dumpfile=orcl.dmp directory=dmpdir full=y job_name=orcl_full

Export: Release 12.1.0.2.0 – Production on Wed Jan 11 14:34:49 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

FLASHBACK automatically enabled to preserve database integrity.

Starting “SYS”.”ORCL_FULL”:  “/******** AS SYSDBA” dumpfile=orcl.dmp directory=dmpdir full=y job_name=orcl_full

Estimate in progress using BLOCKS method…

Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA

Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA

Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA

 

  1. Switch to your secondary screen

 

expdp \"/ as sysdba\"   attach=orcl_full status=1

 

[oracle@linux2 ~]$ expdp \"/ as sysdba\"   attach=orcl_full status=1

Export: Release 12.1.0.2.0 – Production on Wed Jan 11 14:35:18 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

 

Job: ORCL_FULL

  Owner: SYS

  Operation: EXPORT

  Creator Privs: TRUE

  GUID: 45D7B1AF11C77B0FE053620F1E0A3C9A

  Start Time: Wednesday, 11 January, 2016 14:34:52

  Mode: FULL

  Instance: orcl

  Max Parallelism: 1

  Timezone: +00:00

  Timezone version: 18

  Endianness: LITTLE

  NLS character set: WE8MSWIN1252

  NLS NCHAR character set: AL16UTF16

  EXPORT Job Parameters:

  Parameter Name      Parameter Value:

     CLIENT_COMMAND        “/******** AS SYSDBA” dumpfile=orcl.dmp directory=dmpdir full=y job_name=orcl_full

  State: EXECUTING

  Bytes Processed: 0

  Current Parallelism: 1

  Job Error Count: 0

  Dump File: /home/oracle/orcl.dmp

    bytes written: 4,096

 

Worker 1 Status:

  Instance ID: 1

  Instance name: orcl

  Host name: linux2.localdomain

  Process Name: DW00

  State: EXECUTING

  Object Schema: SYS

  Object Type: DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT

  Completed Objects: 3

  Worker Parallelism: 1

 

Export>

 

  1. Once in interactive mode, you can continue to request updates on the job by executing the STATUS command.
  2. This completes monitoring of a Data Pump job from the EXPDP or IMPDP utility.

 

 

Larry Catt

OCP

Oracle 12c – Load and Unload Data with Data Pump

The Oracle Data Pump utility provides high-speed movement of data and metadata between databases.  It replaces the export/import utilities of previous versions.

 

Three parts of Oracle Data Pump:

  1. EXPDP and IMPDP – command-line utilities that use the DBMS_DATAPUMP package to execute export and import operations.
  2. DBMS_DATAPUMP – package used to move all or part of a database from one database to another.
  3. DBMS_METADATA – facility for the extraction, manipulation, and re-creation of dictionary metadata.
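The DBMS_DATAPUMP package in item 2 can also be driven directly from PL/SQL.  A minimal sketch of a schema-mode export (the file, directory, and schema names are hypothetical, and error handling is omitted):

```sql
-- Hypothetical sketch: schema-mode export of SCOTT via DBMS_DATAPUMP.
-- Assumes the directory object DMPDIR already exists.
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott_api.dmp',
                         directory => 'DMPDIR');
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                value => q'[= 'SCOTT']');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);   -- job keeps running in the background
END;
/
```

After DETACH, the job can be monitored or re-attached through the views and utilities described below.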

 

MASTER TABLE – used by Data Pump to track jobs while in progress.  It is implemented as a table within the database, so the user running the job must have the CREATE TABLE privilege.  The master table has the same name as the job that created it and cannot have the same name as an existing table or view.  Data in the master table can be used to restart the job.

 

MASTER PROCESS – this process is created for each export/import job.  It controls the entire job (communication, creating and running worker processes and logging).

 

WORKER PROCESSES – started by the master process to perform the actual processing.  There can be multiple worker processes running in parallel.

 

 

Monitor a Data Pump Job

You can use the export/import utilities to connect to a running job.  In logging mode, the utility displays real-time progress.  In interactive-command mode, you can request job status.  Optionally, a log file can be written during the job; the log file will contain progress, errors, and completion status.  You can also see status and information through the Data Pump views:

DBA_DATAPUMP_JOBS – Shows all active DP jobs and master tables not associated with active jobs.

DBA_DATAPUMP_SESSIONS – Identifies user sessions attached to DP jobs.  Helpful in determining why a DP job has stopped.

V$SESSION_LONGOPS – DP jobs transferring data maintain an entry to give progress.  Gives estimated transfer size and size transferred.
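The first two views can be queried directly.  A sketch, with column lists trimmed to the commonly useful ones (check the view definitions in your release before relying on specific columns):

```sql
-- Active Data Pump jobs and their state
SELECT owner_name, job_name, operation, job_mode, state
  FROM dba_datapump_jobs;

-- Sessions attached to Data Pump jobs
SELECT owner_name, job_name, session_type
  FROM dba_datapump_sessions;
```

Comparing the two result sets shows whether a stopped job still has client sessions attached.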

 

V$SESSION_LONGOPS columns that are important to DP jobs are:

USERNAME – job owner

OPNAME – job name

TARGET_DESC – job operation

SOFAR – size of data transferred thus far

TOTALWORK – estimated size to transfer

UNITS – unit of measurement (MB, GB, TB, etc.)

MESSAGE – formatted message of the form 'job_name: operation_name: nnn out of mmm MB done.'
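A progress query over those columns might look like this (a sketch; restricting OPNAME to active Data Pump job names is one common approach):

```sql
-- Progress of Data Pump jobs still transferring data
SELECT username, opname, target_desc,
       sofar, totalwork, units, message
  FROM v$session_longops
 WHERE opname IN (SELECT job_name FROM dba_datapump_jobs)
   AND sofar <> totalwork;
```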

 

EXPDP – moves data out of the database and has multiple modes of operation.   The utility can be executed in a command-line fashion, by a parameter file, or in interactive-command mode.

Command-line – Enables you to specify most of the export parameters

Parameter File – Allows you to specify command line parameters  in a PARFILE

Interactive-Command – Displays an export prompt and you enter various commands.
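As an illustration of the parameter-file style, a schema export could be driven from a PARFILE (the file name and contents here are hypothetical):

```
# exp_scott.par -- hypothetical parameter file for a schema-mode export
SCHEMAS=scott
DUMPFILE=scott.dmp
DIRECTORY=dmpdir
LOGFILE=scott_exp.log
```

It would then be invoked as `expdp scott/tiger PARFILE=exp_scott.par`, keeping credentials off the shared command line while the remaining parameters live in the file.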

 

Data Pump jobs manage:

Dump File – Contains data and metadata being moved.

Log File – Records log of operation.

SQL files – Records output of SQLFILE operation.

Data files – Used with transportable import; the files are specified with the TRANSPORT_DATAFILES parameter.

 

Different EXPDP modes:

Full – The entire database is unloaded.  Requires the DATAPUMP_EXP_FULL_DATABASE role.  FULL parameter.

SCHEMA – The default export mode.  If you have the DATAPUMP_EXP_FULL_DATABASE role you can give a list of schemas; if not, only your own.  SCHEMAS parameter.

TABLE – Only specified tables, partitions, and dependent objects are unloaded.  TABLES parameter.

TABLESPACES – All contents of the listed tablespaces are unloaded.  If you have the full privilege you get all objects; if not, only your own.  TABLESPACES parameter.

TRANSPORTABLE TABLESPACE – Only the metadata of tables and associated objects is unloaded.  The data is moved in a separate operation.  TRANSPORT_TABLESPACES parameter.
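One hypothetical command line per mode, to make the parameter mapping concrete (credentials, object names, and file names are made up for illustration):

```
expdp system/password FULL=y                     DUMPFILE=full.dmp       DIRECTORY=dmpdir
expdp scott/tiger     SCHEMAS=scott              DUMPFILE=scott.dmp      DIRECTORY=dmpdir
expdp scott/tiger     TABLES=scott.emp           DUMPFILE=emp.dmp        DIRECTORY=dmpdir
expdp system/password TABLESPACES=users          DUMPFILE=users.dmp      DIRECTORY=dmpdir
expdp system/password TRANSPORT_TABLESPACES=users DUMPFILE=users_md.dmp  DIRECTORY=dmpdir
```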

 

 

EXPDP Parameter

ATTACH – Attaches the client session to an existing export job and automatically places you in the interactive-command interface.

CONTENT – Enables you to filter what Export unloads: data only, metadata only, or both.

DIRECTORY – Specifies the default location to which Export can write the dump file set and the log file.

DUMPFILE – Specifies the names, and optionally, the directory objects of dump files for an export job.

ESTIMATE – Specifies the method that Export will use to estimate how much disk space each table in the export job will consume (in bytes).

ESTIMATE_ONLY – Instructs Export to estimate the space that a job would consume, without actually performing the export operation.

EXCLUDE – Enables you to filter the metadata that is exported by specifying objects and object types to be excluded from the export operation.

FILESIZE – Specifies the maximum size of each dump file.

FULL – Specifies that you want to perform a full database mode export.

INCLUDE – Enables you to filter the metadata that is exported by specifying objects and object types for the current export mode.

JOB_NAME – Used to identify the export job in subsequent actions.

LOGFILE – Specifies the name, and optionally, a directory, for the log file of the export job.

PARFILE – Specifies the name of an export parameter file.

QUERY – Allows you to specify a query clause that is used to filter the data that gets exported.

SCHEMAS – Specifies that you want to perform a schema-mode export.

TABLES – Specifies that you want to perform a table-mode export.

TABLESPACES – Specifies a list of tablespace names to be exported in tablespace mode.

TRANSPORT_TABLESPACES – Specifies that you want to perform an export in transportable-tablespace mode. This parameter is used to specify a list of tablespace names for which object metadata will be exported from the source database into the target database.

TRANSPORTABLE — When set to ALWAYS, it indicates the transportable option should be used during the export. The default is NEVER.

 

impdp Parameters

REUSE_DATAFILES – Specifies whether the import job should reuse existing data files for tablespace creation.

SQLFILE – Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.

STATUS – Specifies the frequency at which the job status display is updated.

TABLE_EXISTS_ACTION – Tells import what to do if the table it is trying to create already exists.

REMAP_DATAFILE – Changes the name of the source data file to the target data file name in all SQL statements where the source data file is referenced.

REMAP_SCHEMA – Loads all objects from the source schema into a target schema.

REMAP_TABLE – Allows you to rename tables during an import operation.

REMAP_TABLESPACE – Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.
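A sketch of an import combining several of the remap parameters above (all schema, tablespace, and file names are hypothetical):

```
impdp system/password DUMPFILE=scott.dmp DIRECTORY=dmpdir \
      REMAP_SCHEMA=scott:scott2 \
      REMAP_TABLESPACE=users:users2 \
      TABLE_EXISTS_ACTION=skip
```

Here everything exported from SCOTT would be loaded into SCOTT2, objects in tablespace USERS would be created in USERS2, and any table that already exists would be skipped rather than replaced.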

 

The following commands are applicable when using Interactive mode:

ADD_FILE – Add additional dump files.

CONTINUE_CLIENT – Exit interactive mode and enter logging mode.

EXIT_CLIENT – Stop the import or export client session, but leave the job running.

KILL_JOB – Detach all currently attached client sessions and terminate the current job.

PARALLEL – Increase or decrease the number of active worker processes for the current job.

START_JOB – Restart a stopped job to which you are attached.

STATUS – Display detailed status for the current job and/or set the status interval.

STOP_JOB – Stop the current job for later restart.

 

Network-based Data Pump operations

The NETWORK_LINK parameter is used with impdp as part of an import operation.  Data is moved directly using SQL: a SELECT statement moves the data from the remote database over the link, and an INSERT clause inserts it into the local database.  No dump files are involved.  Current-user database links are not supported; only public, fixed-user, or connected-user database links can be used.
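A sketch of a network-mode import under these rules, using a hypothetical fixed-user link and connect string:

```
-- On the importing database (link name, credentials, and TNS alias are hypothetical):
SQL> create database link src_link connect to scott identified by tiger using 'SRCDB';

$ impdp system/password NETWORK_LINK=src_link SCHEMAS=scott DIRECTORY=dmpdir LOGFILE=net_imp.log
```

Note that DIRECTORY is still needed for the log file even though no dump file is written.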

 

Restrictions of Network Link:

  1. Tables with LONG or LONG RAW that reside in admin tablespaces are not supported.
  2. Auditing cannot be enabled on admin tablespaces when performing full transportable export.
  3. Source and Target database cannot be more than two versions off.

 

SQL*Loader

SQL*Loader is very good for loading data from non-Oracle databases; it is very flexible and can read a large range of flat files.  SQL*Loader can perform the following functions:

  1. Load data across network if files on different machine.
  2. Load data from multiple files in same session.
  3. Load data into multiple tables in same session.
  4. Specify character set.
  5. Selectively load data by values.
  6. Manipulate data before loading.
  7. Generate unique sequential keys.
  8. Use OS to access data files.
  9. Load data from disk, tape or named pipe.
  10. Load arbitrarily complex object-related data.
  11. Use secondary data files for loading LOBs and collections.
  12. Use conventional path, direct path, or external table loads.

 

SQL*Loader can be used with or without a control file.

SQL*Loader makes use of the following files:

Control – Defines data format and control behaviors.

Data – One or more data files containing info to be loaded.

Log – Logs performance and errors.

Bad – Contains all records not loaded because of errors.

Discard – Contains all records bypassed by the selection criteria identified in the control file.

 

The SQLLDR executable is used to invoke SQL*Loader.  You control its actions with parameters, supplied either in a PARFILE or as command-line options.  If a parameter appears both in the PARFILE and on the command line, the command-line value is used.
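A hypothetical invocation passing the control, log, bad, and discard files as command-line options (all file names are made up):

```
sqlldr scott/tiger CONTROL=emp.ctl LOG=emp.log BAD=emp.bad DISCARD=emp.dis
```

Adding DIRECT=TRUE to this command line would request a direct path load instead of the conventional path.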

 

Three sections of a control file:

  1. Session wide info: global options, input data file location, data to be loaded.
  2. Contains INTO TABLE blocks (one or more)
  3. Optional third section can contain input data.
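A minimal hypothetical control file showing the first two sections (the optional in-line data section is omitted; the file and column names are made up):

```
-- emp.ctl (hypothetical): session-wide info names the input file,
-- followed by a single INTO TABLE block.
LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, job, sal)
```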

Two methods SQL*Loader can use are Conventional Path and Direct Load options.

Conventional Path – SQL*Loader creates an INSERT statement for the data to be loaded and passes it to the SQL parser.

Direct Load Path – SQL*Loader bypasses the SQL parser and loads data straight into the target table.

 

Direct Path is much faster than Conventional path, but conventional path is more flexible.

 

Direct Path restrictions:

  1. Cannot run concurrently with other transactions against a target table.
  2. Triggers on table do not fire.
  3. Data is written above high water mark, even if space is available below.
  4. Clustered tables are not supported.
  5. Foreign Key constraints are disabled during load.

 

Larry Catt

OCP

Oracle 12c – Load and Unload Data with Data Pump Overview

The Oracle Data Pump utility provides high-speed movement of data and metadata between databases.  It replaces the export/import utilities of previous versions.  The three parts of Oracle Data Pump are:

  1. EXPDP and IMPDP – command-line utilities that use the DBMS_DATAPUMP package to execute export and import operations.
  2. DBMS_DATAPUMP – package used to move all or part of a database from one database to another.
  3. DBMS_METADATA – facility for the extraction, manipulation, and re-creation of dictionary metadata.

The Data Pump utility runs as a job, with a MASTER PROCESS created for each Data Pump execution.   This master process can be stopped, restarted, and monitored throughout its execution.   A MASTER TABLE is used by Data Pump to track the job while in progress, and this master table is stored in the database the Data Pump utility is operating against. The user executing the Data Pump command must have the CREATE TABLE privilege.  The master table has the same name as the job that created it and cannot have the same name as an existing table or view.  Data in the master table can be used to restart the job.   WORKER PROCESSES are started by the master process to perform the actual processing of the data and metadata.  At any given time a master process can have multiple worker processes running in parallel.

 

Larry Catt

OCP

Oracle 12c – Migrating Data by Using Oracle Data Pump

You can use Data Pump to migrate an existing Oracle database into an empty 12c database, with or without the transportable option.  This is known as a full transportable export/import when the transportable option is in use.  Non-transportable tablespaces like SYSTEM and SYSAUX cannot use the transportable option and must use conventional data movement.

 

Considerations when using Data Pump to populate a new 12c Oracle database:

  1. You can use Data Pump to do this, but Oracle recommends upgrading the source to 11.2.0.3 or higher to ensure that registered options and components are included in the export.
  2. When doing this with 11.2.0.3 or higher, set the VERSION=12 option in the Data Pump export. This is for transportable tablespaces.
  3. For network-based full transportable imports, use the parameters FULL=YES, TRANSPORTABLE=ALWAYS, and TRANSPORT_DATAFILES=datafile_name for a source database higher than 11.2.0.3 but lower than 12.1.
  4. File-based full transportable imports only require use of TRANSPORT_DATAFILES=datafile_name.
  5. The default Data Pump directory object DATA_PUMP_DIR does not work with PDBs; you have to create an explicit directory object within the PDB for export and import operations.
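Putting the considerations together, a file-based full transportable migration might look roughly like this (credentials, dump file names, the PDB service name, and the data file path are all hypothetical):

```
$ expdp system/password FULL=y TRANSPORTABLE=always VERSION=12 \
        DUMPFILE=full_tts.dmp DIRECTORY=dmpdir

-- copy the user-defined tablespace data files to the target, then:

$ impdp system/password@pdb1 FULL=y DIRECTORY=dmpdir DUMPFILE=full_tts.dmp \
        TRANSPORT_DATAFILES='/u01/oradata/users01.dbf'
```

The dump file carries only the metadata for the transportable tablespaces; the copied data files carry the data itself.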

 

 

Larry Catt

OCP