Enhanced SQL Trace Utility From Oracle: Oracle Tips by Burleson Consulting
Here is a great new script from Carlos Sierra, a brilliant developer at Oracle Corporation. This script is an enhancement to the Center of Excellence (COE) script to produce a super-detailed trace of SQL execution. This new script is remarkable, and I hope that you enjoy it as much as I do. The MOSC page is reproduced below with Mr. Sierra's permission. Carlos' script enhances standard execution plan analysis by providing:
1. Enhanced Explain Plan (including execution order, indexed columns, row counts and blocks for tables);
2. Schema Object Attributes for all Tables and Indexes accessed by the SQL statement being diagnosed (including object dependencies, tables, indexes, columns, partitions, sub-partitions, synonyms, policies, triggers and constraints);
3. CBO Statistics (for levels: table, index, partition, sub-partition and column);
4. Histograms (including table, partition and sub-partition levels);
5. Space utilization and administration (for tables, indexes, partitions and sub-partitions);
6. Objects, Segments, Extents, Tablespaces and Datafiles (including current performance of datafiles);
7. RDBMS Initialization Parameters INIT.ORA (required, recommended and default for an APPS 11i database, and all other parameters set in the INIT.ORA file);
8. Source Code on which the SQL Statement and accessed Tables depend (Triggers description and body, Views columns and text, Packages specs and bodies, Procedures and Functions).

SQLTXPLAIN.SQL - Enhanced Explain Plan and related diagnostic info for one SQL statement (8.1.5-9.2.0)

SQLTXPLAIN.SQL is a SQL*Plus script that, using a small staging repository and a PL/SQL package, creates a comprehensive report gathering relevant information on ONE SQL statement (sql.txt). COE_XPLAIN.SQL performs the same function, with some limitations and restrictions. SQLTXPLAIN.SQL differs from COE_XPLAIN.SQL in the following ways:
1. SQLTXPLAIN.SQL collects more data about the objects on which the SQL Statement <sql.txt> depends. It uses V$OBJECT_DEPENDENCY to find these dependencies.
2. SQLTXPLAIN.SQL can be used by multiple users concurrently. It keeps all staging data organized by unique STATEMENT_ID, so it can handle concurrency and historical data.
3. SQLTXPLAIN.SQL creates a better organized and documented report output. Report sections that are not needed for a specific SQL Statement <sql.txt> are simply skipped in the new report, without any headers or references.
4. SQLTXPLAIN.SQL allows you to keep multiple versions of CBO Stats in the same table SQLT$STATTAB. Therefore, similar sets of CBO Stats can be restored into the Data Dictionary multiple times.
5. SQLTXPLAIN.SQL is subject to future improvements and additions, while COE_XPLAIN.SQL is not.
6. SQLTXPLAIN.SQL performs better than COE_XPLAIN.SQL for the same SQL Statement <sql.txt>.
7. SQLTXPLAIN.SQL reports sub-partition details.
8. SQLTXPLAIN.SQL reports the actual LOW and HIGH 'boiled' values of all columns on the tables being accessed. It also reports Histograms in a more comprehensive format.
9. SQLTXPLAIN.SQL does not report some data shown by COE_XPLAIN.SQL that was not actually used during SQL Tuning exercises, making the new report easier to understand.
10. COE_XPLAIN.SQL evolved over 2 years, while SQLTXPLAIN.SQL was designed from scratch.
It is important to note that a SQL Profile does not freeze the execution plan of a SQL statement, as stored outlines do. As tables grow or indexes are created or dropped, the execution plan can change with the same SQL Profile. The information stored in it continues to be relevant even as the data distribution or access path of the corresponding statement changes. Over a long period of time, however, its content can become outdated and would have to be regenerated, which can be done by running Automatic SQL Tuning again on the same statement. Here is the set of SQL statements you can use to trace the execution time of an ACTIVE running SQL query that you wish to tune.

-- Get the SQL_ID from the active session
SELECT
    b.sid,
    b.serial#,
    a.spid,
    b.sql_id,
    b.program,
    b.osuser,
    b.event,
    b.action,
    b.p2text,
    b.p3text,
    b.state
FROM v$process a, v$session b
WHERE a.addr = b.paddr
AND b.status = 'ACTIVE';

-- Average elapsed time per execution for the SQL_ID found above
SELECT
    sql_id,
    child_number,
    plan_hash_value plan_hash,
    executions execs,
    (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime
FROM v$sql
WHERE sql_id = '&sql_id';
-- Add the /*+ gather_plan_statistics */ hint to the SQL statement and execute it
SELECT /*+ gather_plan_statistics */ sysdate ... (SQL Statement)
then retrieve the detailed plan with execution times using the next query below.
-- Get the detailed execution plan using the SQL_ID
SELECT
    plan_table_output
FROM
    TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));

-- Monitor progress of long-running operations
SELECT
    sid,
    serial#,
    opname,
    target,
    sofar,
    totalwork,
    units,
    elapsed_seconds "ELAPSED SEC",
    round(elapsed_seconds/60,2) "ELAPSED MINS",
    round((time_remaining+elapsed_seconds)/60,2) "TOTAL MINS",
    message
FROM v$session_longops
WHERE sofar <> totalwork
AND time_remaining <> 0;
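The note above mentions regenerating an outdated SQL Profile by running Automatic SQL Tuning again on the same statement. A minimal sketch of that flow with the standard DBMS_SQLTUNE API; the task name my_tuning_task is a placeholder, and the SQL_ID is the one from the earlier query:

```sql
-- Create and run a tuning task for one cursor (sql_id is a placeholder)
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id    => '4n01r8z5hgfru',
              task_name => 'my_tuning_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'my_tuning_task');
END;
/

-- Review the findings, then accept the recommended profile if one is offered
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('my_tuning_task') FROM dual;
EXEC DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'my_tuning_task', replace => TRUE);
```

ACCEPT_SQL_PROFILE only succeeds when the advisor actually recommended a profile for the statement.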
SQL> EXEC DBMS_SQLTUNE.IMPORT_SQL_PROFILE(sql_text => 'FULL QUERY TEXT', profile => sqlprof_attr('HINT SPECIFICATION WITH FULL OBJECT ALIASES'), name => 'PROFILE NAME', force_match => TRUE/FALSE);
FULL QUERY TEXT: the value can be obtained from the SQL_FULLTEXT column of the GV$SQLAREA view.
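For example, a quick way to pull the full text of one statement (the SQL_ID is the placeholder value used throughout this note):

```sql
-- Fetch the complete statement text for a given SQL_ID
SELECT sql_fulltext
FROM   gv$sqlarea
WHERE  sql_id = '4n01r8z5hgfru';
```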
HINT SPECIFICATION WITH FULL OBJECT ALIASES: the outline hints can be extracted from AWR with a query such as:

SELECT
    extractvalue(value(d), '/hint') AS outline_hints
FROM
    xmltable('/*/outline_data/hint' passing (
        SELECT
            xmltype(other_xml) AS xmlval
        FROM
            dba_hist_sql_plan
        WHERE sql_id = '4n01r8z5hgfru'
        AND plan_hash_value = '82930460'
        AND other_xml IS NOT NULL ) ) d;

You can also generate the 10053 Trace File and look for the hint specification between BEGIN_OUTLINE_DATA and END_OUTLINE_DATA. Download SQLTXPLAIN.sql from Oracle Metalink and run it to get the 10053 Trace File. /*+
BEGIN_OUTLINE_DATA IGNORE_OPTIM_EMBEDDED_HINTS OPTIMIZER_FEATURES_ENABLE('10.2.0.3') OPT_PARAM('_b_tree_bitmap_plans' 'false') OPT_PARAM('_fast_full_scan_enabled' 'false') ALL_ROWS OUTLINE_LEAF(@"SEL$335DD26A") MERGE(@"SEL$3") OUTLINE_LEAF(@"SEL$7286615E") MERGE(@"SEL$5") OUTLINE_LEAF(@"SEL$1") ...... END_OUTLINE_DATA
*/

FORCE_MATCH is really the main reason for using SQL Profiles: when set to TRUE it will ignore literals in otherwise identical queries and apply the profile to them (just as cursor_sharing=force does for the entire database). For example, when force_match is set to TRUE, a.segment1 = 1234 is treated as a.segment1 = :b1. To create a SQL Profile a user must have the following: the ADVISOR role, the CREATE ANY SQL PROFILE, ALTER ANY SQL PROFILE and DROP ANY SQL PROFILE privileges, and the EXECUTE privilege on DBMS_SQLTUNE.
SQL> GRANT EXECUTE ON SYS.DBMS_SQLTUNE TO <user>;
SQL> GRANT ADVISOR TO <user>;
SQL> GRANT CREATE ANY SQL PROFILE TO <user>;
SQL> GRANT ALTER ANY SQL PROFILE TO <user>;
SQL> GRANT DROP ANY SQL PROFILE TO <user>;
Examples of creating an Oracle SQL Profile:

-- Example 1: profile with a manually supplied hint specification
DECLARE
  cl_sql_text CLOB;
BEGIN
  SELECT sql_fulltext
  INTO   cl_sql_text
  FROM   gv$sqlarea
  WHERE  sql_id = '4n01r8z5hgfru';

  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => cl_sql_text,
    profile     => sqlprof_attr('HINT SPECIFICATION WITH FULL OBJECT ALIASES'),
    name        => 'PROFILE NAME',
    force_match => TRUE);
END;
/

-- Example 2: profile built from the outline hints stored in AWR
DECLARE
  cl_sql_text CLOB;
  hint_spec   sys.sqlprof_attr;
BEGIN
  SELECT sql_fulltext
  INTO   cl_sql_text
  FROM   gv$sqlarea
  WHERE  sql_id = 'gtwyx63711jp1';

  SELECT extractvalue(value(d), '/hint')
  BULK COLLECT INTO hint_spec
  FROM   xmltable('/*/outline_data/hint' passing (
           SELECT xmltype(other_xml) AS xmlval
           FROM   dba_hist_sql_plan
           WHERE  sql_id = 'gtwyx63711jp1'
           AND    plan_hash_value = '82930460'
           AND    other_xml IS NOT NULL ) ) d;

  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => cl_sql_text,
    profile     => hint_spec,
    name        => 'PROFILE NAME',
    force_match => TRUE);
END;
/

Note: You may use the view v$sql_plan if no outline hints are available in dba_hist_sql_plan. Once you have finished creating the Oracle SQL Profile, check the database for the new SQL Profile.
SQL> SELECT name, created FROM dba_sql_profiles ORDER BY created DESC;

SQL> SELECT sql_attr.attr_val outline_hints
       FROM dba_sql_profiles sql_profiles, sys.SQLPROF$ATTR sql_attr
      WHERE sql_profiles.signature = sql_attr.signature
        AND sql_profiles.name = 'PROFILE NAME'
      ORDER BY sql_attr.attr# ASC;
In this example, my_sql_profile is the name of the SQL Profile you want to drop. You can also specify whether to ignore errors raised when the name does not exist; for this example, the default value of FALSE is accepted.
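A sketch of the drop call being described, using the standard DBMS_SQLTUNE API (the profile name my_sql_profile comes from the text above):

```sql
-- Drop the profile; ignore => FALSE (the default) raises an error if it does not exist
BEGIN
  DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'my_sql_profile', ignore => FALSE);
END;
/
```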
You need two similar systems: SOURCE and TARGET. SQLT must be installed on both. SOURCE and TARGET must have the same schema objects (i.e. PROD, TEST, DEV, QA, etc.). The required files are generated in SOURCE when SQLT is executed. Steps: 1. Import the CBO Stats generated by SQLT in SOURCE into the staging table in TARGET, connecting as SQLTXPLAIN.
UNIX> imp SQLTXPLAIN/<pwd> tables='sqlt$_stattab' file=sqlt_s3407.dmp ignore=y

2. Restore the CBO Stats from the staging table into the data dictionary, connecting as SQLTXPLAIN, SYSTEM, SYSDBA or the application user.
UNIX> imp SQLTXPLAIN/<pwd> tables='sqlt$_stattab' file=sqlt_s3407.dmp ignore=y

3. Restore the CBO Stats from the staging table into the data dictionary, connecting as SQLTXPLAIN, SYSTEM, SYSDBA or the application user (for example TC3407):
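The restore command itself is not shown above; a hedged sketch using the standard DBMS_STATS API, where the application schema TC3407 and the SQLTXPLAIN owner of the SQLT$_STATTAB staging table are assumptions taken from the surrounding text:

```sql
-- Restore stats from the SQLT staging table into the data dictionary
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(
    ownname => 'TC3407',           -- application schema (assumed)
    stattab => 'SQLT$_STATTAB',    -- SQLT staging table
    statown => 'SQLTXPLAIN');      -- owner of the staging table (assumed)
END;
/
```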
The instructions below apply when schema objects were consolidated into one TC user, TC3407. If the method used in the SQLT Test Case was XPLAIN, you will need to modify the script with one SQL statement so it can be executed stand-alone in step 4 (you may need to replace binds). Steps: 1. Import the CBO Stats captured automatically during step 5 of the SQLT Test Case, connecting as TC3407.
UNIX> imp TC3407/TC3407 tables=CBO_STAT_TAB_4TC file=STATTAB.dmp ignore=y
UNIX> sqlplus TC3407/TC3407
SQL> EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('TC3407', 'CBO_STAT_TAB_4TC');
-- set cbo environment and generate 10053
UNIX> sqlplus TC3407/TC3407
SQL> START sqlt_s3407_prd1_db_setenv.sql;
SQL> ALTER SESSION SET tracefile_identifier = 'TC3407_10053';
SQL> ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
SQL> DEF unique_id = TC3407
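Once the statement has been parsed and the trace captured, the event can be switched off again with the standard event syntax (this step is not shown in the original):

```sql
-- Stop 10053 tracing in the current session
ALTER SESSION SET EVENTS '10053 trace name context off';
```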
Files generated in SOURCE:
o CBO Stats dump file from step 1
o Instructions file from step 2
o Metadata script
o CBO environment SET script
o Script with the SQL (SQLT XPLAIN method)
SQL> EXEC FND_STATS.GATHER_TABLE_STATS(ownname => '"GL"', tabname => '"GL_JE_HEADERS"', percent => 100, cascade => TRUE);
SQL> EXEC FND_STATS.GATHER_TABLE_STATS(ownname => '"GL"', tabname => '"GL_JE_LINES"', percent => 10, cascade => TRUE);
SQL> EXEC FND_STATS.GATHER_TABLE_STATS(ownname => '"GL"', tabname => '"GL_JE_SOURCES_TL"', percent => 100, cascade => TRUE);

TRANSFER A STORED OUTLINE
If your SQL uses a Stored Outline, you can export the SO from SOURCE and import it into TARGET. Steps:

1. Export the Stored Outline from SOURCE connecting as OUTLN, SYSTEM or SYSDBA.

UNIX> exp outln/<pwd> tables=ol$,ol$hints,ol$nodes file=sqlt_s3407_outln.dmp statistics=none query=\"WHERE ol_name = \'<stored_outline_name>\'\" log=sqlt_exp_outln.log

2. Import the Stored Outline into TARGET connecting as OUTLN, SYSTEM or SYSDBA.
UNIX> imp system/<pwd> file=sqlt_s3407_outln.dmp fromuser=outln touser=outln ignore=y

Notes:
1. If TARGET already contains a Stored Outline for your SQL, find its name and drop it before the import step. Connect as OUTLN, SYSTEM or SYSDBA to drop an outline.
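A sketch of finding and dropping an existing outline; the outline name and the SQL fragment are placeholders:

```sql
-- Find the outline covering your SQL, then drop it by name
SELECT name, category, used
FROM   dba_outlines
WHERE  sql_text LIKE '%<your sql fragment>%';

DROP OUTLINE <stored_outline_name>;
```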
SQL> EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'STGTAB_SQLPROF');

SQL> EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(profile_name => '<sql_profile_name>', staging_table_name => 'STGTAB_SQLPROF');

UNIX> imp <usr>/<pwd> tables=stgtab_sqlprof file=sqlprof.dmp ignore=y

SQL> EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(profile_name => '<sql_profile_name>', profile_category => 'DEFAULT', replace => TRUE, staging_table_name => 'STGTAB_SQLPROF');
Notes:
1. Connect with the same user in both SOURCE and TARGET.
2. The user must have the CREATE ANY SQL PROFILE privilege and the SELECT privilege on the staging table.
SQL> EXEC DBMS_OUTLN.CREATE_OUTLINE(hash_value => 644832611, child_number => 0);
SQL> ALTER SESSION SET create_stored_outlines = FALSE;

Notes:
1. The user must have the CREATE ANY OUTLINE grant or the DBA role.
2. Set your optimizer environment first (you may want to use the setenv script).

SQL> SELECT * FROM dba_outlines WHERE signature = '914E567776565E496F27F2C5B3C0F9D2';
EXTRACT A PLAN FROM MEMORY OR AWR AND PIN IT TO A SQL IN SAME OR DIFFERENT SYSTEM
SQLT XTRACT and XECUTE record into the SQLT repository all known plans for one SQL. Any of these plans can be extracted and then associated with that SQL in the same SOURCE or a similar TARGET system by using a manual custom SQL Profile. Steps:

1. Execute the sqltprofile utility in SOURCE, connecting as SQLTXPLAIN, SYSDBA, or the application user.
SQL> EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'STGTAB_SQLPROF');

SQL> EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(profile_name => '<sql_profile_name>', staging_table_name => 'STGTAB_SQLPROF');

UNIX> imp <usr>/<pwd> tables=stgtab_sqlprof file=sqlprof.dmp ignore=y
SQL> GRANT EXECUTE ON SYS.DBMS_SQLTUNE TO <user>;
SQL> GRANT ADVISOR TO <user>;
SQL> GRANT CREATE ANY SQL PROFILE TO <user>;
SQL> GRANT ALTER ANY SQL PROFILE TO <user>;
SQL> GRANT DROP ANY SQL PROFILE TO <user>;
SQL> EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(profile_name => '<sql_profile_name>', profile_category => 'DEFAULT', replace => TRUE, staging_table_name => 'STGTAB_SQLPROF');
Database Design:
Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the data access path in mind. Also look at
proper data partitioning, data replication, aggregation tables for decision support systems, etc.
Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.
Memory Tuning:
Properly size your database buffers (shared_pool, buffer cache, log buffer, etc) by looking at your wait events, buffer hit ratios, system swapping and paging, etc. You may also want to pin large objects into memory to prevent frequent reloads.
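As a quick starting point for the sizing review described above, the current sizes of the main SGA components can be read from the standard v$sgastat view:

```sql
-- Current sizes of key SGA components, in bytes
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  name IN ('buffer_cache', 'log_buffer');
```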
Generally, the CBO can change the execution plan when you:
* Change statistics of objects by doing an ANALYZE;
* Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
* Has the SPFILE/INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
* Have any other INIT.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this produces the required performance.
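For instance, to test a theory about the plan you can force a specific access path with a hint and compare timings; the table, alias, index and column names here are hypothetical:

```sql
-- Hypothetical example: force use of index EMP_DEPT_IX and compare the runtime
SELECT /*+ INDEX(e emp_dept_ix) */ e.empno, e.ename
FROM   emp e
WHERE  e.deptno = 10;
```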
* db file sequential read: Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
* buffer busy waits: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i); analyze contention from SYS.V$BH.
* log buffer space: Increase the LOG_BUFFER parameter or move log files to faster disks.

What is the difference between DB File Sequential and Scattered Reads? Both the db file sequential read and db file scattered read events signify time waited for I/O read requests to complete. Time is reported in hundredths of a second for Oracle 8i releases and below, and in thousandths of a second for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk. Instead they should think of how data is read into the SGA buffer cache.
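The two read events can be compared directly from the standard v$system_event view, for example:

```sql
-- Compare single-block (sequential) vs multiblock (scattered) read waits
SELECT event, total_waits, time_waited, average_wait
FROM   v$system_event
WHERE  event IN ('db file sequential read', 'db file scattered read');
```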