Recovery of CDB in Oracle 12c Multitenant Database

An Oracle 12c multitenant database can be recovered from a previously taken backup set using the Recovery Manager (RMAN) utility or EM Cloud Control.  This procedure outlines the general steps to recover a database using the RMAN utility.

 

 

  1. Log on to your database server as the oracle software owner.

 

[root@linux2 etc]# su - oracle

Last login: Wed Nov 30 11:45:07 EST 2015 on pts/1

[oracle@linux2 ~]$

 

  2. Log on to the RMAN utility with SYSBACKUP privileges.

 

[oracle@linux2 ~]$ rman target '"/ as sysbackup"'

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Dec 1 09:26:07 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB1 (DBID=898155058)

 

RMAN>

 

  3. Place the database in mount mode.

 

RMAN> shutdown immediate

Oracle instance shut down

 

RMAN> startup mount

connected to target database (not started)

Oracle instance started

database mounted

Total System Global Area     734003200 bytes

Fixed Size                     2928728 bytes

Variable Size                524292008 bytes

Database Buffers             201326592 bytes

Redo Buffers                   5455872 bytes

 

RMAN>

 

  4. Issue the command RESTORE DATABASE;

 

RMAN> restore database;

 

Starting restore at 01-DEC-15

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=22 device type=DISK

 

skipping datafile 5; already restored to file /u01/oradata/cdb1/pdbseed/system01.dbf

skipping datafile 7; already restored to file /u01/oradata/cdb1/pdbseed/sysaux01.dbf

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00001 to /u01/oradata/cdb1/system01.dbf

…………….

…………….

channel ORA_DISK_1: restored backup piece 1

channel ORA_DISK_1: restore complete, elapsed time: 00:00:45

Finished restore at 01-DEC-15

 

RMAN>

 

  5. Issue the command RECOVER DATABASE;

 

RMAN> RECOVER DATABASE;

 

Starting recover at 01-DEC-15

using channel ORA_DISK_1

 

starting media recovery

media recovery complete, elapsed time: 00:00:04

 

Finished recover at 01-DEC-15

 

RMAN>

 

 

  6. You can remove archived redo logs once they are no longer needed with the command RECOVER DATABASE DELETE ARCHIVELOG;

 

RMAN> RECOVER DATABASE DELETE ARCHIVELOG;

 

Starting recover at 01-DEC-15

using channel ORA_DISK_1

 

starting media recovery

media recovery complete, elapsed time: 00:00:00

 

Finished recover at 01-DEC-15

 

RMAN>

 

  7. Open the database for normal operation with the command alter database open;

 

RMAN> alter database open;

 

Statement processed

 

RMAN>

 

 

  8. This completes recovery of the CDB database with all of its PDBs.

 

Larry Catt

OCP

Oracle 12c – Diagnose performance issues using ASH enhancements

Conventional ADDM uses AWR snapshots, which are reports taken at set intervals.  Some changes will not appear in conventional ADDM because they were not significant enough to trigger a new AWR analysis, and it is not a real-time view of performance.   ASH (Active Session History) is real-time and has the following characteristics:

 

  1. Active Session History is a collection of real-time data, stored in the SGA, based on active sessions.
  2. It can be viewed via the V$ACTIVE_SESSION_HISTORY view (see the sample query after this list).
  3. EM Cloud has the ability to run ASH reports from:
    1. Home Page → Performance Menu/Home → Log on with admin privilege → Under Average Active Sessions click ASH Reports → Enter the start and stop time of the report → click Generate.
  4. Reports are placed under the Run ASH Report page.
  5. ASH reports are divided into the following sections:
    1. Top Events – top wait events.
    2. Load Profile – describes the load during the report period.
    3. Top SQL – ordered by the amount of resources consumed by SQL statements.
    4. Top Sessions – ordered by sessions waiting on wait events.
    5. Top DB Objects/Files/Latches – ordered by the top objects being used/accessed.
    6. Activity Over Time – shows peaks and valleys in performance over the reporting period.
  6. EM Cloud also includes the new ASH Analytics page, providing a graphical view of ASH data.
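
For a quick look at the underlying data, a query along these lines can be run from SQL*Plus.  This is only a sketch: the columns are standard V$ACTIVE_SESSION_HISTORY columns, but the five-minute window and the WAITING filter are arbitrary choices.

-- Top wait events sampled for active sessions over the last 5 minutes
select event, count(*) as samples
from v$active_session_history
where sample_time > sysdate - 5/1440
and session_state = 'WAITING'
group by event
order by samples desc;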

 

Larry Catt

OCP

Oracle 12c Create stand-alone database (non-CDB and non-PDB)

In Oracle 12c, Container Databases were introduced, allowing multiple separate databases to be placed in a single Oracle instance to reduce the amount of resources consumed.   However, it is still possible to create a traditional stand-alone database without the features or restrictions of a CDB.  This procedure covers the creation of a 12c stand-alone database using the Oracle Database Configuration Assistant in a Windows environment.

 

  1. Log on to your Windows server as the administrator.

  2. Open Oracle Database Configuration Assistant.

  3. At the Database Operation page, select "Create Database".

  4. At the Creation Mode page, select "Advanced Mode".

  5. At the Database Template page, select the desired type of database; in this example we select "General Purpose or Transaction Processing".

  6. At the Database Identification page, enter the Global Database Name and SID, unselect "Create as Container Database", and press the Next button.

  7. At the Management Options page, select Configure Enterprise Manager (EM) Database Express if desired.

  8. At the Database Credentials page, select Use the Same Administrative Password for All Accounts and enter a valid password. Additionally, enter a password for the administrative account on the Windows system.

  9. On the Network Configuration page, select the listener to be used by the new database.

  10. At the Storage Locations page, select the appropriate storage for your database.

  11. At the Database Options page, select nothing and press the Next button.

  12. At the Initialization Parameters page, accept the defaults and press the Next button.

  13. At the Creation Options page, select Create Database and Generate Database Creation Scripts (giving a valid destination directory) and press the Next button.

  14. At the Summary page, press the Finish button.

  15. At the Progress page, a confirmation of script creation will be displayed; once complete, press OK.

  16. The database will now be created in Windows.

  17. A confirmation screen with the EM information will appear once the install is complete; press the Exit button to close it.

  18. Once back at the Progress page, press the Close button to exit Database Configuration Assistant.

This completes the creation of an Oracle 12c stand-alone database in a Windows environment with Database Configuration Assistant.
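
The same stand-alone database can also be created without the GUI by running DBCA in silent mode.  The following is a minimal sketch, assuming the General Purpose template shipped with 12c; the database name and passwords are hypothetical:

# Create a non-CDB database named orcl12c without the DBCA GUI
dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName orcl12c -sid orcl12c -createAsContainerDatabase false -sysPassword MyPwd_123 -systemPassword MyPwd_123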

 

 

Larry Catt

OCP

Oracle 12c – Load and Unload Data with Data Pump

The Oracle Data Pump utility provides high-speed movement of data and metadata between databases.  It replaces the export/import utilities of previous versions.

 

Three parts of Oracle Data Pump:

  1. EXPDP and IMPDP – command-line utilities that use the DBMS_DATAPUMP package to execute export and import operations.
  2. DBMS_DATAPUMP – package used to move all or part of a database from one database to another.
  3. DBMS_METADATA – facility for the extraction, manipulation, and re-creation of dictionary metadata.

 

MASTER TABLE – used by Data Pump to track jobs while they are in progress.  It is implemented as a table within the database, so users must have the CREATE TABLE privilege.  The master table has the same name as the job that created it and cannot have the same name as an existing table or view.  Data in the master table can be used to restart the job.

 

MASTER PROCESS – this process is created for each export/import job.  It controls the entire job (communication, creating and running worker processes, and logging).

 

WORKER PROCESSES – created by the master process to perform the actual processing.  Multiple worker processes can run in parallel.

 

 

Monitor a Data Pump Job

You can use the export/import utilities to connect to a running job.   In logging mode, real-time progress is displayed.  In interactive-command mode, you can request the job status.  Optionally, a log file can be written during the job; the log file will contain progress, errors, and completion status.   You can also see status and information through the Data Pump views:

DBA_DATAPUMP_JOBS – shows all active Data Pump jobs and master tables not associated with active jobs.

DBA_DATAPUMP_SESSIONS – identifies user sessions attached to Data Pump jobs.  Helpful in determining why a Data Pump job is stopped.

V$SESSION_LONGOPS – Data Pump jobs transferring data maintain an entry here to show progress.  It gives the estimated transfer size and the amount transferred so far.

 

V$SESSION_LONGOPS columns that are important to Data Pump jobs are listed below (a sample query follows the list):

USERNAME – job owner

OPNAME – job name

TARGET_DESC – job operation

SOFAR – amount of data transferred thus far

TOTALWORK – estimated amount of data to transfer

UNITS – unit of measurement (MB, GB, TB, etc.)

MESSAGE – formatted message of the form 'job_name: operation_name: nnn out of mmm MB done'
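
As a sketch of how these columns fit together, a query along these lines reports the progress of any Data Pump job currently transferring data (matching OPNAME against DBA_DATAPUMP_JOBS is one way to narrow the view to Data Pump entries):

-- Progress of running Data Pump jobs
select opname, target_desc, sofar, totalwork, units, message
from v$session_longops
where opname in (select job_name from dba_datapump_jobs)
and sofar <> totalwork;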

 

EXPDP – moves data out of the database and has multiple interfaces.   The utility can be executed from the command line, with a parameter file, or in interactive-command mode.

Command-line – Enables you to specify most of the export parameters directly at the prompt.

Parameter file – Allows you to specify command-line parameters in a PARFILE.

Interactive-command – Displays an export prompt at which you enter various commands.
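
A minimal command-line sketch, assuming a directory object named dpump_dir has already been created and granted to the user (all names here are hypothetical):

# Schema-mode export of scott to scott.dmp via the dpump_dir directory object
expdp scott/password DIRECTORY=dpump_dir DUMPFILE=scott.dmp LOGFILE=scott_exp.log SCHEMAS=scott

The same job can be driven by a parameter file instead:

# Contents of exp.par, invoked as: expdp scott/password PARFILE=exp.par
DIRECTORY=dpump_dir
DUMPFILE=scott.dmp
LOGFILE=scott_exp.log
SCHEMAS=scott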

 

Data Pump jobs manage:

Dump File – Contains data and metadata being moved.

Log File – Records log of operation.

SQL files – Records output of SQLFILE operation.

Data files – Used with transportable import (the files are listed with the TRANSPORT_DATAFILES parameter).

 

Different EXPDP modes:

Full – The entire database is unloaded.  Requires the DATAPUMP_EXP_FULL_DATABASE role.  FULL parameter.

Schema – The default export mode.  If you have the DATAPUMP_EXP_FULL_DATABASE role you can give a list of schemas; if not, only your own.   SCHEMAS parameter.

Table – Only specified tables, partitions, and their dependent objects are unloaded.  TABLES parameter.

Tablespaces – All contents of the tablespaces are unloaded.  If you have the full privilege you get everything; if not, only your own objects.   TABLESPACES parameter.

Transportable tablespace – Only the metadata of the tables and associated objects is unloaded.  The data is moved in a separate operation.  TRANSPORT_TABLESPACES parameter.

 

 

EXPDP Parameters

ATTACH – Attaches the client session to an existing export job and automatically places you in the interactive-command interface.

CONTENT – Enables you to filter what Export unloads: data only, metadata only, or both.

DIRECTORY – Specifies the default location to which Export can write the dump file set and the log file.

DUMPFILE – Specifies the names, and optionally, the directory objects of dump files for an export job.

ESTIMATE – Specifies the method that Export will use to estimate how much disk space each table in the export job will consume (in bytes).

ESTIMATE_ONLY – Instructs Export to estimate the space that a job would consume, without actually performing the export operation.

EXCLUDE – Enables you to filter the metadata that is exported by specifying objects and object types to be excluded from the export operation.

FILESIZE – Specifies the maximum size of each dump file.

FULL – Specifies that you want to perform a full database mode export.

INCLUDE – Enables you to filter the metadata that is exported by specifying objects and object types for the current export mode.

JOB_NAME – Used to identify the export job in subsequent actions.

LOGFILE – Specifies the name, and optionally, a directory, for the log file of the export job.

PARFILE – Specifies the name of an export parameter file.

QUERY – Allows you to specify a query clause that is used to filter the data that gets exported.

SCHEMAS – Specifies that you want to perform a schema-mode export.

TABLES – Specifies that you want to perform a table-mode export.

TABLESPACES – Specifies a list of tablespace names to be exported in tablespace mode.

TRANSPORT_TABLESPACES – Specifies that you want to perform an export in transportable-tablespace mode. This parameter is used to specify a list of tablespace names for which object metadata will be exported from the source database into the target database.

TRANSPORTABLE – When set to ALWAYS, it indicates the transportable option should be used during the export. The default is NEVER.

 

IMPDP Parameters

REUSE_DATAFILES – Specifies whether the import job should reuse existing data files for tablespace creation.

SQLFILE – Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.

STATUS – Specifies the frequency at which the job status display is updated.

TABLE_EXISTS_ACTION – Tells Import what to do if the table it is trying to create already exists.

REMAP_DATAFILE – Changes the name of the source data file to the target data file name in all SQL statements where the source data file is referenced.

REMAP_SCHEMA – Loads all objects from the source schema into a target schema.

REMAP_TABLE – Allows you to rename tables during an import operation.

REMAP_TABLESPACE – Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.
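
As an illustration of the remap parameters, a hedged sketch (the directory object, dump file, and schema names are hypothetical):

# Import scott's objects from scott.dmp into the scott_test schema
impdp system/password DIRECTORY=dpump_dir DUMPFILE=scott.dmp REMAP_SCHEMA=scott:scott_test TABLE_EXISTS_ACTION=REPLACE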

 

The following commands are applicable when using interactive mode (a short example of attaching to a job follows the list):

ADD_FILE – Add additional dump files.

CONTINUE_CLIENT – Exit interactive mode and enter logging mode.

EXIT_CLIENT – Stop the import or export client session, but leave the job running.

KILL_JOB – Detach all currently attached client sessions and terminate the current job.

PARALLEL – Increase or decrease the number of active worker processes for the current job.

START_JOB – Restart a stopped job to which you are attached.

STATUS – Display detailed status for the current job and/or set the status interval.

STOP_JOB – Stop the current job for later restart.
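
A short sketch of attaching to a running job and using these commands (SYS_EXPORT_SCHEMA_01 is the style of name Data Pump generates by default; yours will differ):

# Attach to a running export job, check its status, then detach and leave it running
expdp system/password ATTACH=SYS_EXPORT_SCHEMA_01

Export> STATUS
Export> EXIT_CLIENT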

 

Network-based Data Pump operations

The NETWORK_LINK parameter is used with impdp as part of an import operation.  Data is moved directly using SQL: a SELECT statement moves the data from the remote database over the link, and an INSERT inserts it into the local database.  No dump files are involved.  Current-user database links are not supported; only public, fixed-user, or connected-user database links can be used.
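
A sketch of a network-mode import, assuming a public database link named src_link pointing at the source database already exists (the directory object is still needed for the log file):

# No dump file is written: rows are selected over src_link and inserted locally
impdp system/password DIRECTORY=dpump_dir NETWORK_LINK=src_link SCHEMAS=scott LOGFILE=net_imp.log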

 

Restrictions of Network Link:

  1. Tables with LONG or LONG RAW columns that reside in administrative tablespaces are not supported.
  2. Auditing cannot be enabled on administrative tablespaces when performing a full transportable export.
  3. The source and target databases cannot be more than two versions apart.

 

SQL*Loader

SQL*Loader is very good at loading data from non-Oracle databases; it is very flexible and can read a large range of flat files.  SQL*Loader can perform the following functions:

  1. Load data across a network if the files are on a different machine.
  2. Load data from multiple files in the same session.
  3. Load data into multiple tables in the same session.
  4. Specify the character set.
  5. Selectively load data by values.
  6. Manipulate data before loading.
  7. Generate unique sequential keys.
  8. Use the OS to access data files.
  9. Load data from disk, tape, or named pipe.
  10. Load arbitrarily complex object-relational data.
  11. Use secondary data files for loading LOBs and collections.
  12. Use conventional path, direct path, or external table loads.

 

SQL*Loader can be used with or without a control file.

SQL*Loader makes use of the following files:

Control – Defines data format and control behaviors.

Data – One or more data files containing info to be loaded.

Log – Logs performance and errors.

Bad – Contains all records not loaded because of errors.

Discard – Contains all records discarded because they were excluded by the control file's selection criteria.

 

The SQLLDR executable is used to invoke SQL*Loader.  You use parameters to control its actions, either in a PARFILE or as command-line options.  If a parameter appears both in the PARFILE and on the command line, the command-line value is used.

 

Three sections of a control file (a minimal sketch follows the list):

  1. Session-wide information: global options, input data file locations, and the data to be loaded.
  2. One or more INTO TABLE blocks.
  3. An optional third section that can contain the input data itself.
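
A minimal control file sketch showing the first two sections (the table and file names are hypothetical; an optional third section would follow a BEGINDATA keyword):

-- employees.ctl: session-wide options, then one INTO TABLE block
LOAD DATA
INFILE 'employees.csv'
APPEND
INTO TABLE employees
FIELDS TERMINATED BY ','
(emp_id, emp_name, hire_date DATE 'YYYY-MM-DD')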

The two methods SQL*Loader can use are the conventional path and direct path options.

Conventional path – SQL*Loader creates an INSERT statement for the data to be loaded and passes it to the SQL parser.

Direct path – SQL*Loader bypasses the SQL parser and loads the data straight into the target table.

 

Direct path is much faster than conventional path, but conventional path is more flexible.
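
Invoking the two paths differs only by the DIRECT option; a sketch using the hypothetical control file above:

# Conventional path load
sqlldr scott/password control=employees.ctl log=employees.log

# Direct path load of the same file
sqlldr scott/password control=employees.ctl log=employees.log direct=true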

 

Direct Path restrictions:

  1. Cannot run concurrently with other transactions against the target table.
  2. Triggers on the table do not fire.
  3. Data is written above the high-water mark, even if space is available below it.
  4. Clustered tables are not supported.
  5. Foreign key constraints are disabled during the load.

 

Larry Catt

OCP

Oracle 12c – Unified Audit Data Trail features

In 12c, the Unified Audit Trail consolidates audit information from multiple sources and makes it available in a standard format in the UNIFIED_AUDIT_TRAIL dictionary view.  This article describes the new feature and gives a brief example of enabling it.

 

Unified Audit Data Trail features.

 

  1. The Unified Audit Trail is stored in a read-only table in the AUDSYS schema in the SYSAUX tablespace.
  2. Data is available to SYS and to users with the AUDIT_ADMIN and AUDIT_VIEWER roles.
  3. AUDIT_ADMIN can view data and create audit policies.
  4. AUDIT_VIEWER can only view data.
  5. The Unified Audit Trail combines the audit data from:
  • Fine-grained audit records from DBMS_FGA
  • Oracle Database Real Application Security
  • Oracle Recovery Manager
  • Oracle Database Vault
  • Oracle Label Security
  • Oracle Data Mining
  • Oracle Data Pump
  • Oracle SQL*Loader Direct Load
  6. Once configured, unified auditing is always enabled and does not depend on initialization parameters.
  7. If the database is open in READ-ONLY mode, audit records are written to new OS files in the $ORACLE_BASE/audit/$ORACLE_SID directory.
  8. The V$OPTION view can be queried to determine if the database is using unified auditing:

select parameter, value from v$option where parameter='Unified Auditing';

PARAMETER                          VALUE
-----------------                  -----------------
Unified Auditing                   TRUE

 

  9. In new 12c databases, mixed-mode auditing is enabled by default through a predefined policy.
  10. Mixed-mode auditing enables both traditional pre-12c auditing and unified auditing.
  11. Traditional auditing is controlled by the AUDIT_TRAIL initialization parameter.
  12. When the AUDIT_TRAIL initialization parameter is set to anything other than NONE, the traditional audit trail is populated as well as the unified audit trail.
  13. Auditing can be set for individual PDBs or for the CDB as a whole.
  14. In a multitenant database, each PDB and the root CDB has its own audit trail.
  15. When upgrading to 12c, you have to manually migrate to unified auditing to enable it.
  16. Once unified auditing is enabled, traditional auditing is disabled.
  17. To start using unified auditing, at least one unified policy must be enabled (see the sketch after this list).
  18. To stop using unified auditing, disable all unified policies.
  19. The predefined policy ORA_SECURECONFIG is initially enabled on all new 12c databases.
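
As a brief sketch of enabling a first policy (run as a user with the AUDIT_ADMIN role; the policy name here is arbitrary):

-- Create and enable a unified audit policy that records DROP TABLE statements
CREATE AUDIT POLICY drop_table_pol ACTIONS DROP TABLE;
AUDIT POLICY drop_table_pol;

-- To stop using it later, disable the policy
NOAUDIT POLICY drop_table_pol;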

 

 

Larry Catt

OCP

Local Users in Oracle Multitenant Databases

In an Oracle 12c multitenant database, local users are accounts created at the PDB level, and they have no rights at the CDB level.   This article demonstrates the use of local user accounts.   NOTE: To create a local user you must be connected to that PDB, and the user is valid only in that PDB.  You can have the same user name in multiple PDBs in the same CDB.

 

 

  1. Log on to your server as the oracle software owner.

 

[larry@linux2 ~]$ su - oracle

Password:

Last login: Wed Oct 26 10:07:02 EDT 2016 on pts/2

[oracle@linux2 ~]$

 

  2. Log on to your CDB as an administrator; in this example we log on as the CDB admin account c##cdb_dba.

 

SQL> connect c##cdb_dba/password as sysdba

Connected.

SQL>

 

  3. Open the PDBs pdb1 and pdb2 with the alter pluggable database command.

 

SQL> alter pluggable database all open;

 

Pluggable database altered.

 

SQL>

 

  4. Connect to the PDB pdb1 to create a local user in this single PDB.

 

SQL> alter session set container=pdb1;

 

Session altered.

 

SQL> create user user_pdb identified by password;

 

User created.

 

SQL>

 

  5. Grant resource and create session to the user user_pdb.

 

 

SQL> grant resource to user_pdb;

 

Grant succeeded.

 

SQL> grant create session to user_pdb;

 

Grant succeeded.

 

SQL>

 

  6. Connect to pdb1 as the user user_pdb.

 

SQL> connect user_pdb/password@pdb1

Connected.

SQL>

 

  7. Currently, user_pdb is a local user only in the PDB pdb1, not in the CDB CDB$ROOT or the PDB pdb2.  This can be shown by attempting to connect to both, which will fail.

 

SQL> connect user_pdb/password@pdb2

ERROR:

ORA-01017: invalid username/password; logon denied

 

 

Warning: You are no longer connected to ORACLE.

SQL> connect user_pdb2/password@cdb1

ERROR:

ORA-01017: invalid username/password; logon denied

 

 

SQL>  connect user_pdb/password@pdb1

Connected.

SQL>

 

 

  8. As an admin user, connect to the PDB pdb2 and attempt to connect as the user 'user_pdb'.  NOTE: This fails because no user 'user_pdb' exists in this PDB.

 

SQL> alter session set container=pdb2;

 

Session altered.

 

SQL> show user

USER is “C##CDB_DBA”

SQL> connect user_pdb/password

ERROR:

ORA-01017: invalid username/password; logon denied

 

 

Warning: You are no longer connected to ORACLE.

SQL>

 

  9. Reconnect as the admin user and create a user 'user_pdb' in the PDB pdb2.

 

SQL> connect C##CDB_DBA/password

Connected.

SQL> show con_name

 

CON_NAME

——————————

CDB$ROOT

SQL> alter session set container=pdb2;

 

Session altered.

 

SQL> show con_name

 

CON_NAME

——————————

PDB2

SQL> create user user_pdb identified by password;

 

User created.

 

SQL> grant dba to user_pdb;

 

Grant succeeded.

 

SQL> grant create session to user_pdb;

 

Grant succeeded.

 

SQL> connect user_pdb/password@pdb2;

Connected.

SQL>
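
To confirm locality from the data dictionary, check the COMMON column of DBA_USERS while connected to the PDB in question.  A small sketch (the literal matches the user created above):

-- A local user shows COMMON = 'NO'; a common user shows 'YES'
select username, common from dba_users where username = 'USER_PDB';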

 

 

 

This completes local user creation in PDBs.

 

 

Larry Catt

OCP

 

 

Basics of Multitenant Container Database (CDB)

The IT industry has increasingly developed server technology with greater levels of memory and processing power, which has outgrown the traditional setup of one application per server.  Most modern physical servers are quite capable of handling multiple applications, and even multiple server platforms, through virtualization technologies.   This is just as true with RDBMS suites.   Oracle developed the multitenant database architecture to take advantage of excess server power and reduce the physical hardware footprint.   This technology is known as the CDB (Container Database) and PDB (Pluggable Database).

A Container Database (CDB) acts as the container where you store all of your application and system databases.  It contains and manages all of the initialization parameters, memory resources, processor resources, and shared services granted to the RDBMS by the OS.   A CDB should not be used as an application data store to process user requests or perform any action for an outside application; it should only house the PDB data stores, which provide the data storage and manipulation for applications.  A CDB can hold one to many PDBs.

A Pluggable Database (PDB) acts as an application data store and performs all the actions a traditional Oracle database performs, using resources granted by the CDB.   PDBs reduce the overhead of managing multiple Oracle RDBMS suites and increase the utilization of existing hardware resources.  PDBs are housed in a single CDB and can be moved from one CDB to another.
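
A quick way to see this layout on a running 12c instance is to query V$CONTAINERS from the root.  A small sketch (output will vary by system):

-- Lists the root (CDB$ROOT), the seed (PDB$SEED), and every PDB in the CDB
select con_id, name, open_mode from v$containers order by con_id;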

Larry Catt

OCP