
Oracle 12c – Monitor database alerts

Alert Logs and Trace Files:

All Oracle background processes have their own trace files, where information about errors is dumped when detected.  Alert logs also contain information about database errors, with less detail, and normally point toward a specific trace file.  Messages in the alert log include:

  1. Initialization parameters with non-default values and other notations.
  2. All internal errors (ORA-600), block corruption errors (ORA-1578), and deadlock errors (ORA-60).
  3. Administrative operations such as the DDL statements CREATE, ALTER, and DROP, and instance startup and shutdown.

Alert logs are maintained in both XML and plain text.   The ADRCI utility is used to view the XML alert log.  Both alert and trace files are stored under the Automatic Diagnostic Repository (ADR) directory structure.   The initialization parameter MAX_DUMP_FILE_SIZE limits the size of trace files to a set number of OS blocks (or to a size given in KB or MB).   You cannot limit the size of the alert log, but it is good practice to periodically rename the alert file to reduce the file size Oracle is using.  The database instance automatically creates a new alert log with the appropriate name, and this renaming can be done while the database is online.
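
For example, you can tail the plain-text alert log from ADRCI and cap trace file size from SQL*Plus (the 100M value is illustrative):

adrci> show alert -tail 50

SQL> ALTER SYSTEM SET MAX_DUMP_FILE_SIZE = '100M';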

 

Statistics:

Statistics are generated by the Oracle 12c database to help increase the level of performance and to allow the optimizer to decide on the best execution plans for SQL statements.   If the SQL_TRACE initialization parameter is set to TRUE, performance statistics are generated for SQL statements.  SQL tracing can be set at the session level with the ALTER SESSION SET SQL_TRACE command.   Trace files are written to the Automatic Diagnostic Repository directory structure.  The DBMS_SESSION and DBMS_MONITOR packages can also be used to control SQL tracing for a session.
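
A minimal sketch of enabling tracing for your own session, and for another session via DBMS_MONITOR (the SID 25 and serial# 4711 values are illustrative):

SQL> ALTER SESSION SET SQL_TRACE = TRUE;

SQL> EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 25, serial_num => 4711, waits => TRUE, binds => FALSE);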

 

Adaptive Thresholds:

You can monitor the database continuously with adaptive thresholds by setting warning and critical alert levels for system metrics.  Adaptive thresholds are computed from a moving window baseline of metric values.  There are two types of adaptive thresholds:

Percentage of maximum – Defined as a percentage of the maximum value of the data/resource observed in the moving window, for example 80% of the observed maximum.

Significance level – Defined as a statistically unusual value for the metric, set to one of the following levels:

  1. High (.95)
  2. Very High (.99)
  3. Severe (.999)
  4. Extreme (.9999)

 

 

Larry Catt

OCP

Oracle 12c – Explain ADR Enhancements

The Oracle 12c Automatic Diagnostic Repository (ADR) provides files that store diagnostic information about the health of the database.   This release adds two new log files to aid in administration of the RDBMS suite:  the DDL log file and the debug log file.

 

New DDL log file – The new DDL log has the same format and behavior as the alert log.  When the ENABLE_DDL_LOGGING parameter is set to TRUE, DDL statements are written to the DDL log file.  Two log files are maintained:  XML and plain text.  The log files are stored in the log/ddl directory of the ADR home and are included in IPS incident packages.
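
For example, DDL logging can be switched on dynamically:

SQL> ALTER SYSTEM SET ENABLE_DDL_LOGGING = TRUE;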

 

Debug log – The debug log records unusual incidents that do not impact normal operations.  These are occurrences that do not warrant an entry in the alert log or an incident report.  The debug log has the same format as the alert log.

 

 

Larry Catt

OCP

Oracle 12c – Enhanced statistics gathering features

Oracle 12c has several enhanced features for statistics gathering which aid in the optimization of SQL execution.    This article describes these enhancements.

 

 

The optimizer uses statistics to create the best execution plans.  Re-optimization uses the results of previous statement executions to determine whether the executed plan was optimal, and adjusts plans for future executions in an iterative process.

There are three types of re-optimization:

  1. Statistics feedback – Also known as cardinality feedback; it compares the estimated and actual cardinalities of an execution, stores the correct cardinality statistics, and creates a SQL plan directive for future use.
  2. Performance feedback – Used to improve the degree of parallelism chosen for future SQL statements, based on previous executions, when Automatic Degree of Parallelism (AutoDOP) is enabled in adaptive mode.
  3. SQL plan directives – Additional information used by the optimizer to determine correct execution plans.  Directives are created on query expressions rather than on whole SQL statements, so one directive can apply to multiple SQL statements, and multiple directives can be used for a single SQL statement.  SQL plan directives are purged if not used within 53 days.  You can view directives in the DBA_SQL_PLAN_DIRECTIVES and DBA_SQL_PLAN_DIR_OBJECTS views, as in the query below.
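
A minimal sketch of inspecting existing directives:

SQL> SELECT directive_id, type, state, reason FROM dba_sql_plan_directives;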

 

 

 

Partitioned table statistics – Two sets of statistics are gathered: at the partition level and at the table level.  In 12c the table-level (global) statistics can be aggregated from the partition-level statistics, avoiding a full table scan.

 

INCREMENTAL_STALENESS – This preference, set through DBMS_STATS.SET_TABLE_PREFS, lets you determine when table statistics become stale: by percentage of change with USE_STALE_PERCENT, or by locking statistics with USE_LOCKED_STATS, forcing their use regardless of how much change has occurred.
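
For example (SCOTT.EMP is used for illustration):

SQL> EXECUTE DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'EMP', 'INCREMENTAL_STALENESS', 'USE_STALE_PERCENT');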

 

Incremental statistics are improved by the EXCHANGE PARTITION command, which allows you to exchange statistics on a non-partitioned table with those of a partition.  Prior to 12c you had to gather data on all partitions before using incremental global statistics.   Set the DBMS_STATS preference INCREMENTAL_LEVEL to TABLE (the default is PARTITION) on the non-partitioned table to use this EXCHANGE PARTITION feature.
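
A sketch, assuming a non-partitioned staging table SALES_STAGE and a partitioned table SALES (names are illustrative):

SQL> EXECUTE DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'SALES_STAGE', 'INCREMENTAL_LEVEL', 'TABLE');

SQL> ALTER TABLE sales EXCHANGE PARTITION p_2016_q1 WITH TABLE sales_stage;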

 

Concurrent statistics – When the global statistics gathering preference CONCURRENT is set, the Job Scheduler and Advanced Queuing gather statistics on tables/partitions concurrently.  12c optimizes automatic statistics gathering by batching smaller jobs to run together.  When the CONCURRENT preference is set to ALL or AUTOMATIC, statistics are also gathered concurrently during the nightly maintenance window.
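
For example:

SQL> EXECUTE DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'AUTOMATIC');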

 

 

Automatic column group detection – Identifies groups of columns used together in filter predicates so that extended statistics can improve cardinality estimates.   Which columns warrant extended statistics is determined from workload history.  A three-step process performs automatic column group detection (a sketch follows the list):

  1. Seed column usage – Record workload history via the DBMS_STATS.SEED_COL_USAGE procedure.
  2. Create the column groups – Use the DBMS_STATS.CREATE_EXTENDED_STATS function to identify column groups based on the usage information gathered. Once identified, extended statistics are automatically generated.
  3. Regather statistics – Regather statistics on all tables identified for extended column statistics.
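
A minimal sketch, again using SCOTT.EMP for illustration (the 300-second capture window is arbitrary):

SQL> EXECUTE DBMS_STATS.SEED_COL_USAGE(NULL, NULL, 300);

-- run the representative workload, then:

SQL> SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SCOTT', 'EMP') FROM dual;

SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP');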

 

 

Larry Catt

OCP

Oracle 12c – Analyze and identify performance issues

The Oracle Automatic Database Diagnostic Monitor (ADDM) is a utility that automatically detects and reports performance issues.  ADDM is based on snapshots taken by the AWR (Automatic Workload Repository). This data is analyzed and displayed as ADDM findings on the Database Home page of Enterprise Manager (EM).   The DBMS_ADDM package can run ADDM reports outside of EM, but the user executing DBMS_ADDM must have the ADVISOR privilege.  ADDM allows administrators to quickly identify performance problems that may not be readily noticeable.  Every ADDM finding has one or more recommendations to reduce its impact.
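
A sketch of running an ADDM analysis between two AWR snapshots with DBMS_ADDM (the snapshot IDs 100 and 101 and the task name are illustrative):

SQL> VARIABLE tname VARCHAR2(60)
SQL> EXECUTE :tname := 'my_addm_run';
SQL> EXECUTE DBMS_ADDM.ANALYZE_DB(:tname, 100, 101);
SQL> SELECT DBMS_ADDM.GET_REPORT(:tname) FROM dual;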

 

AWR snapshots are taken once an hour and retained for 8 days by default.  It is recommended that snapshots be retained for 30 days.  Each new snapshot triggers an ADDM analysis, which performs the following steps:

 

  1. Locates the root cause of the performance issue.
  2. Provides recommendations to correct it.
  3. Quantifies the expected benefits.
  4. Identifies areas where no action is needed.

 

ADDM analysis works top-down through DB time statistics.  This is an iterative process: once one problem is solved, another becomes the bottleneck.  ADDM recommendations may include:

 

  1. Hardware changes – add CPUs or memory, or change the I/O subsystem.
  2. Database configuration – change initialization parameters.
  3. Schema changes – could include hash partitioning a table or index, or using automatic segment space management (ASSM).
  4. Application changes – caching options or use of bind variables.
  5. Using other advisors – running the SQL Tuning Advisor or Segment Advisor.

 

ADDM is enabled in the database by default and is controlled by the initialization parameter CONTROL_MANAGEMENT_PACK_ACCESS.    ADDM is enabled when CONTROL_MANAGEMENT_PACK_ACCESS is set to DIAGNOSTIC+TUNING (the default) or DIAGNOSTIC, and disabled when it is set to NONE.

Additionally, the initialization parameter STATISTICS_LEVEL must be set to TYPICAL (the default) or ALL to enable ADDM; if it is set to BASIC, ADDM is disabled.

 

The DBIO_EXPECTED parameter of the DBMS_ADVISOR package describes the expected performance of the I/O subsystem and is needed for accurate analysis of I/O performance. It can be set to 7000 microseconds (the default) with the following package call:

 

EXECUTE DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'DBIO_EXPECTED', 7000);

 

ADDM analysis findings come in one of three classes:

  • PROBLEM – the root cause of a database performance issue.
  • SYMPTOM – a finding whose information leads to one or more problem findings.
  • INFORMATION – findings that report areas of the system that do not have performance issues.

 

If one problem has multiple findings, ADDM may report them separately.   Problems are shown largest impact first.

 

Larry Catt

OCP

Oracle 12c – Manage VLDB

Very large databases (VLDBs) use partitioning to improve the ability to support large data sets by breaking large tables into smaller, manageable chunks.  This allows partition pruning, which skips partitions that cannot contain the queried data.  Partitioning and indexing work together to provide parallel processing that searches multiple partitions at the same time, as in data warehousing.

 

Partitioning Methods: 

You can partition data at a single level, or at two levels for a composite partition.  How the data is accessed determines the best implementation method.   Regardless of partitioning method, you have three ways to control the distribution of data among partitions:

  1. Range
  2. Hash
  3. List

 

Single Level Partitioning

One or more of the columns acts as the partition key, splitting the table into multiple sections.

  1. Range partitioning – Maps data to partitions based on ranges of values of the partition key; each row is mapped to a partition by its key value.   This is the most common partitioning method and is commonly used with dates.  A partition is created with a VALUES LESS THAN clause, which gives the non-inclusive upper bound of the partition (see the sketch after this list).  The highest partition can use MAXVALUE to provide an upper limit.
  2. Hash partitioning – Uses a hashing algorithm on the partitioning key to spread data evenly among partitions. It is easier to implement than range partitioning, and a good choice when no date column or other obvious partitioning key exists.
  3. List partitioning – Allows you to provide a list of values for the partitioning key column and divides data based on that list, so you control exactly how rows are mapped to partitions.  You list the sets of values to be placed in separate partitions and can use the DEFAULT clause to specify a partition for any value not in the lists.
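
A minimal range-partitioning sketch (table and partition names are illustrative):

CREATE TABLE sales_hist (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016', 'DD-MON-YYYY')),
  PARTITION p_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017', 'DD-MON-YYYY')),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);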

 

Composite Partitions

Composite partitioning combines the basic partitioning methods, dividing each partition into sub-partitions of the same or a different method.  This provides a finer level of subdivision of the data (a sketch follows the list).   The possible composite partitioning methods are:

  • Range-Range
  • Range-Hash
  • Range-List
  • List-Range
  • List-Hash
  • List-List
  • Hash-Hash
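
For example, a Range-Hash sketch (table, column, and partition names are illustrative):

CREATE TABLE orders (
  order_id    NUMBER,
  order_date  DATE,
  customer_id NUMBER
)
PARTITION BY RANGE (order_date)
SUBPARTITION BY HASH (customer_id) SUBPARTITIONS 4 (
  PARTITION p_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017', 'DD-MON-YYYY')),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);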

 

 

Larry Catt

OCP

Oracle 12c – Manage memory

Oracle 12c provides a single parameter, MEMORY_TARGET, that allows the database instance to manage all memory pools automatically.  When MEMORY_TARGET is set to a given value, the Oracle process grabs that much memory and allocates it among the various memory pools.  The MEMORY_TARGET parameter is dynamic and can be adjusted while the database is online.   The optional MEMORY_MAX_TARGET parameter specifies the maximum amount of memory Oracle may use; it is not dynamic and cannot be changed without restarting the database.  MEMORY_MAX_TARGET serves as an upper limit on MEMORY_TARGET so the parameter cannot be set too high.

 

AMM (Automatic Memory Management) is the database instance configuration that manages and distributes memory allocations between the SGA and PGA automatically, redistributing them based on changing requirements.   Databases created by DBCA have AMM enabled by default.

Automatic Shared Memory Management (ASMM) was the predecessor of AMM, before Oracle 12c.  In ASMM the total amount of memory for the SGA is specified by the SGA_TARGET parameter, and Oracle automatically distributes that memory among the pools.  Memory allocated to the four manually sized pools:

LOG_BUFFER, DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, and DB_nK_CACHE_SIZE

is subtracted from the SGA_TARGET allocation, and the rest is distributed among: SHARED_POOL_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, DB_CACHE_SIZE, and STREAMS_POOL_SIZE.   Both ASMM and AMM use database statistics and cannot be used if the STATISTICS_LEVEL parameter is set to BASIC.

 

 

Set up AMM by changing the following initialization parameters:

 

ALTER SYSTEM SET MEMORY_TARGET = nM;

ALTER SYSTEM SET SGA_TARGET = 0;

ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0;

 

Optionally, you can also set the MEMORY_MAX_TARGET parameter:

 

ALTER SYSTEM SET MEMORY_MAX_TARGET = nM SCOPE = SPFILE;

Restart the database.

 

 

Oracle Initialization Parameters concerning Memory Management:

 

MEMORY_TARGET – When set, Oracle grabs the memory at startup and by default gives 60% to the SGA and 40% to the PGA, redistributing memory between them as needed.   If not set, AMM is not in use, even if a value is set for MEMORY_MAX_TARGET.

 

MEMORY_MAX_TARGET – Sets the maximum amount of memory that Oracle will grab.   If not set, it defaults to MEMORY_TARGET.

 

SGA_TARGET – This parameter is not needed when using AMM; if it is defined, its value becomes the minimum amount of SGA memory allocated by AMM.

 

PGA_AGGREGATE_TARGET – This parameter is not needed when using AMM; if it is defined, its value becomes the minimum amount of PGA memory allocated by AMM.

 

 

Larry Catt

OCP

Oracle 12c – Use Index enhancements

In Oracle 12c you can create multiple indexes of different types on the same set of columns, but only one of them can be visible at any given time.   Multiple indexes can exist on the same columns when one of the following is true:

  1. The index types differ (B-tree vs. bitmap).
  2. Different partitioning schemes are used.
  3. The uniqueness differs (unique vs. non-unique).

 

 

Example: Creating two indexes on a single column:

 

  1. Log on to SQL*Plus.

 

[oracle@linux2 ~]$ sqlplus scott/password

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jan 29 13:47:09 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Jan 27 2016 09:57:58 -05:00

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

With the Partitioning, OLAP, Advanced Analytics, Real Application Testing

and Unified Auditing options

SQL>

 

  2. Create bitmap index on table scott.emp(job).

 

SQL> create bitmap index bm_scott_emp_job on scott.emp(job);

Index created.

SQL>

 

  3. View the new index in USER_INDEXES.

 

SQL> column index_name format a20

SQL> column index_type format a10

SQL> select index_name, index_type, visibility from user_indexes where table_name='EMP';

 

INDEX_NAME           INDEX_TYPE VISIBILIT

-------------------- ---------- ---------

BM_SCOTT_EMP_JOB     BITMAP     VISIBLE

PK_EMP               NORMAL     VISIBLE

SQL>

 

  4. Make the first index invisible.

 

SQL> alter index bm_scott_emp_job invisible;

Index altered.

SQL>

 

 

  5. Now create a second index on table scott.emp(job), but this one a B-tree.

 

SQL> create index bt_scott_emp_job on scott.emp(job);

Index created.

SQL>

 

  6. NOTE: the first index must be invisible before you can create the second index.   Now view the indexes on table EMP.

 

 

SQL> column index_name format a20
SQL> column index_type format a10
SQL> select index_name, index_type, visibility from user_indexes where table_name='EMP';

 

INDEX_NAME           INDEX_TYPE VISIBILIT

-------------------- ---------- ---------

BM_SCOTT_EMP_JOB     BITMAP     INVISIBLE

BT_SCOTT_EMP_JOB     NORMAL     VISIBLE

PK_EMP               NORMAL     VISIBLE

 

SQL>

 

  7. This completes index enhancements in Oracle 12c.

 

 

Larry Catt

OCP

 

Oracle 12c – Use Flash Cache

The Database Smart Flash Cache allows an instance to access multiple flash devices without requiring a volume manager.  This feature only works on the Solaris and Linux operating systems.

 

Enabling Smart Flash Cache is a benefit if:

  1. The Buffer Pool Advisory section of the AWR or STATSPACK report says to double the size of the buffer cache.
  2. db file sequential read is the top wait event.
  3. The system has spare CPU cycles.

 

Two initialization parameters are used to configure the Database Smart Flash Cache:

DB_FLASH_CACHE_FILE – A list of the paths and file names for the Database Smart Flash Cache, which can be ASM or OS files but must reside on flash devices.  If a specified file does not exist, Oracle creates it on startup.  A maximum of 16 files is supported.

DB_FLASH_CACHE_SIZE – Specifies the size of each file; if the number of sizes does not match the number of files, an error occurs.  Sizes are specified as nG (e.g. 10G).  You disable a flash device by specifying zero for its size, but this is not a dynamic change and a restart is required.
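
A configuration sketch (the device paths and sizes are illustrative):

SQL> ALTER SYSTEM SET DB_FLASH_CACHE_FILE = '/dev/flash1', '/dev/flash2' SCOPE = SPFILE;

SQL> ALTER SYSTEM SET DB_FLASH_CACHE_SIZE = 32G, 32G SCOPE = SPFILE;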

 

The V$FLASHFILESTAT view reports latency and read counts for each file.

 

 

Larry Catt

OCP

Oracle 12c – Difference between ILM and Valid-Time Temporal

Oracle 12c has both ILM and Temporal Validity, which can be confused with one another.   The definition and use of each are below:

 

  • ILM (Information Lifecycle Management) is used to improve the utilization of storage devices through compression and access tiering of data. Data accessed least is compressed and placed on slower devices, while data accessed frequently is left uncompressed and stored on faster devices.
  • Temporal Validity allows you to define a valid-time dimension for each row of a table (see the sketch below).
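
A minimal Temporal Validity sketch (table and column names are illustrative):

CREATE TABLE emp_hist (
  empno   NUMBER,
  start_d DATE,
  end_d   DATE,
  PERIOD FOR valid_time (start_d, end_d)
);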

Larry Catt

OCP

Oracle 12c – Diagnose performance issues using ASH enhancements

Conventional ADDM uses AWR snapshots, which are taken at intervals, so some changes will not appear in conventional ADDM analysis because they were not significant enough across a full snapshot interval.  It is also not a real-time view of performance.   Active Session History (ASH) is real-time and has the following characteristics:

 

  1. Active Session History is a collection of real-time data about active sessions, stored in the SGA.
  2. It can be viewed via the V$ACTIVE_SESSION_HISTORY view (see the query sketch after this list).
  3. EM Cloud Control has the ability to run ASH reports from:
    1. Home page → Performance menu/Home → log on with admin privileges → under Average Active Sessions click ASH Report → enter the start and stop times of the report → click Generate.
  4. Reports are placed under the Run ASH Report page.
  5. ASH reports are divided into the following sections:
    1. Top Events – top wait events.
    2. Load Profile – describes the load during the report period.
    3. Top SQL – ordered by the amount of resources consumed by SQL statements.
    4. Top Sessions – ordered by sessions waiting on wait events.
    5. Top DB Objects/Files/Latches – ordered by the top objects being used/accessed.
    6. Activity Over Time – shows peaks and valleys in performance over the reporting period.
  6. EM Cloud Control also includes the new ASH Analytics page, providing a graphical view of ASH data.
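
A minimal sketch of querying recent ASH samples directly (the five-minute window is illustrative):

SQL> SELECT session_id, sql_id, event, wait_class
       FROM v$active_session_history
      WHERE sample_time > SYSDATE - 5/1440;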

 

Larry Catt

OCP