SQL tracing


This page gives examples of how to analyse a particular SQL process in order to gain a better understanding of how Oracle is executing it and, hopefully, identify where any performance bottlenecks are occurring.

Displaying the execution plan

AUTOTRACE

AUTOTRACE is a SQL*Plus feature, available since Oracle 7.3, which automates the production of explain plans. It relies on a role called PLUSTRACE having been created (as SYS) by running the %ORACLE_HOME%/sqlplus/admin/plustrce.sql script; this role is then GRANTed to the user who wants to do the EXPLAIN PLAN. It also relies on the existence of a table called PLAN_TABLE, typically created by running the %ORACLE_HOME%/rdbms/admin/utlxplan.sql script. Usually the table is created as SYS with a public synonym pointing at it, but it can simply be created in the relevant schema if required.
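A rough sketch of this one-off setup, run as SYS, might look like the following (the scott grantee and the public synonym/grants are just one typical approach, and ? is the SQL*Plus shorthand for ORACLE_HOME):
-- create the PLUSTRACE role
@?/sqlplus/admin/plustrce.sql
-- create PLAN_TABLE and make it usable by everyone
@?/rdbms/admin/utlxplan.sql
CREATE PUBLIC SYNONYM plan_table FOR plan_table;
GRANT SELECT, INSERT, UPDATE, DELETE ON plan_table TO PUBLIC;
-- give the role to the user who wants to use AUTOTRACE
GRANT PLUSTRACE TO scott;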

Here's an example output.
SQL> SET AUTOTRACE ON
SQL> SELECT SYSDATE FROM DUAL;

SYSDATE
---------
22-SEP-03

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'DUAL'

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        387  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
This is the full output; note that AUTOTRACE ON also runs the query and displays its results.

Note, you can generate the plan and the statistics without viewing the query output by specifying SET AUTOTRACE TRACEONLY. You can filter further with SET AUTOTRACE TRACEONLY STATISTICS or SET AUTOTRACE TRACEONLY EXPLAIN. Be aware that any form of AUTOTRACE which gathers statistics still has to run the query; only SET AUTOTRACE TRACEONLY EXPLAIN generates the plan without executing the query.
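For quick reference, the variations are (comments are mine):
-- plan + statistics, plus the query output (as above)
SET AUTOTRACE ON
-- plan + statistics, query output suppressed (the query still runs)
SET AUTOTRACE TRACEONLY
-- statistics only (the query still runs)
SET AUTOTRACE TRACEONLY STATISTICS
-- plan only; the query is not executed
SET AUTOTRACE TRACEONLY EXPLAIN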

An interesting piece of information about AUTOTRACE is that setting it on with any option which gathers statistics actually generates another session, running as the current user, which is used to gather statistics about the main session, i.e.
SQL> select count(*) from v$session;

  COUNT(*)
----------
        15

1 row selected.

SQL> set autotrace on
SQL> select count(*) from v$session;

  COUNT(*)
----------
        16

1 row selected.
The only option which does not generate this extra session is SET AUTOTRACE TRACEONLY EXPLAIN, since this does not gather statistics.

This "extra" session can sometimes pose it's own SQL performance challenges, an example of the diagnosis of which (amounting to tracing the "extra" session) can be found at
http://oracletoday.blogspot.com/2007/05/performance-of-autotrace_21.html.

Of course, to turn AUTOTRACE off, use
SQL> SET AUTOTRACE OFF
For more information, see here (docs.oracle.com).

DBMS_XPLAN

DBMS_XPLAN is a mechanism for generating explain plans which also allows programmatic access to the output. It is available from Oracle 9.2 onwards although, as per usual, it gets extended with every release. Here, I'll go into some of the more common usages.

Whereas AUTOTRACE relies on the existence of the PLUSTRACE role, DBMS_XPLAN does not; it just relies on the existence of PLAN_TABLE (plus the necessary GRANTs etc.), although you can now change the name of the table (via the table_name parameter of DBMS_XPLAN.DISPLAY and the INTO clause of EXPLAIN PLAN).

DISPLAY

DBMS_XPLAN.DISPLAY is one of the best ways of reporting against PLAN_TABLE, since it shows not only the plan itself but also the steps in the plan at which Oracle applies the predicate(s).

You use the following general syntax to populate PLAN_TABLE with the execution path of a query:
EXPLAIN PLAN
[SET STATEMENT_ID = 'statement_id']
[INTO table_name]
FOR
  query;
Once this is done, you then query the output of the DBMS_XPLAN.DISPLAY function (passing in the statement_id if you specified one), i.e. :
SQL> EXPLAIN PLAN FOR
  2  SELECT * FROM DUAL;

Explained.

SQL> SELECT * FROM TABLE( dbms_xplan.display );

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |       |       |       |
|   1 |  TABLE ACCESS FULL   | DUAL        |       |       |       |
--------------------------------------------------------------------

Note: rule based optimization

9 rows selected.
Also, the output is not fixed; it only shows relevant information. If we had a query which required temp space, for example, it would show that as well for each step, i.e.
SQL> CREATE TABLE t AS SELECT object_name FROM dba_objects;

Table created.

SQL> ANALYZE TABLE t COMPUTE STATISTICS;

Table analyzed.

SQL> EXPLAIN PLAN FOR
  2  SELECT object_name, count(*)
  3  FROM   t
  4  GROUP BY object_name;

Explained.

SQL> SELECT * FROM TABLE( dbms_xplan.display );

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------

----------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes |TempSpc| Cost  |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             | 24239 |   520K|       |   192 |
|   1 |  SORT GROUP BY       |             | 24239 |   520K|  1736K|   192 |
|   2 |   TABLE ACCESS FULL  | T           | 38361 |   824K|       |    16 |
----------------------------------------------------------------------------

Note: cpu costing is off

10 rows selected.
The new EXPLAIN PLAN output also lets you see predicate information, including how Oracle manipulates the predicate terms. This particular feature is handy for answering questions such as "Why isn't my index being used?".
SQL> CREATE TABLE t
  2  ( a VARCHAR2(10) );

Table created.

SQL> INSERT INTO t VALUES ('1');

1 row created.

SQL> EXPLAIN PLAN FOR
  2  SELECT *
  3  FROM   t
  4  WHERE  a = 1;

Explained.

SQL> SELECT * FROM TABLE(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |       |       |       |
|*  1 |  TABLE ACCESS FULL   | T           |       |       |       |
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
   1 - filter(TO_NUMBER("T"."A")=1)

Note: rule based optimization

14 rows selected.
It is easy to see here, therefore, that in order to satisfy our predicate of "a=1", the column "a" is first converted to a number via TO_NUMBER. This also explains why Oracle will fail if there are non-numeric values in the column, i.e.
SQL> INSERT INTO t VALUES ('X');

1 row created.

SQL> SELECT *
  2  FROM   t
  3  WHERE  a = 1;
ERROR:
ORA-01722: invalid number
We should, therefore, aim to compare like with like, i.e. ensure datatypes are consistent. Any conversion functions appearing within the filter(..) operation of the predicate information need to be looked at, since they represent extra work and can prevent indexes from being used. To "fix" the above example:
SQL> EXPLAIN PLAN FOR
  2  SELECT *
  3  FROM   T
  4  WHERE  A = '1';

Explained.

SQL> SELECT * FROM TABLE(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |       |       |       |
|*  1 |  TABLE ACCESS FULL   | T           |       |       |       |
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
   1 - filter("T"."A"='1')

Note: rule based optimization

14 rows selected.
i.e. by converting a=1 in the predicate to a='1', the TO_NUMBER conversion is removed.
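If you tagged the plan with a STATEMENT_ID, or used the INTO clause to place it in a non-default plan table, the same values are passed to DBMS_XPLAN.DISPLAY. A minimal sketch (the 'my_stmt' identifier and MY_PLAN_TABLE, assumed to have the standard PLAN_TABLE structure, are just examples):
EXPLAIN PLAN
  SET STATEMENT_ID = 'my_stmt'
  INTO my_plan_table
FOR
  SELECT * FROM t WHERE a = '1';

SELECT *
FROM   TABLE(dbms_xplan.display(table_name   => 'MY_PLAN_TABLE',
                                statement_id => 'my_stmt'));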

DISPLAY_CURSOR

As of 10g, Oracle added quite a few new functions to DBMS_XPLAN. One of the best is DISPLAY_CURSOR. It gives much more detailed information for a given cursor, either the last one executed or any in the v$sql_plan view (by specifying the SQL_ID of the relevant statement), such as the actual as well as estimated cardinalities for each step in the plan, eliminating the need to manually query v$sql, v$sql_plan, v$sql_plan_statistics, v$sql_plan_statistics_all etc.

There are a few things to consider, though. The full statistics for the query will ONLY be reported if the /*+ gather_plan_statistics */ hint is used, or if statistics_level=ALL is set at session or system level. For a great article on STATISTICS_LEVEL, see here.
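If you would rather not add the hint to each statement, the session-level alternative is simply:
ALTER SESSION SET statistics_level = ALL;
For reference, the signature of DISPLAY_CURSOR is: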
FUNCTION DISPLAY_CURSOR RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN     DEFAULT
 CURSOR_CHILD_NO                NUMBER(38)              IN     DEFAULT
 FORMAT                         VARCHAR2                IN     DEFAULT
Here's sample output for the last SQL statement executed, which is achieved by specifying NULL for the sql_id and cursor_child_no parameters.
SQL> SELECT /*+ gather_plan_statistics */ 'x'
  2    FROM dual
  3   WHERE dummy = 'X';

'
-
x

SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

SQL_ID  c51xuz980jfcq, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ 'x'   FROM dual  WHERE dummy = 'X'

Plan hash value: 272002086

------------------------------------------------------------------------------------

| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
|*  1 |  TABLE ACCESS FULL| DUAL |      1 |      1 |      1 |00:00:00.01 | 3 |
------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DUMMY"='X')

17 rows selected.
See the extra "E-Rows" and "A-Rows" columns? As you can probably infer, "E-Rows" is the number of estimated rows that Oracle expects that step in the plan to return, whereas "A-Rows" is the actual number.

As many people who know me will know, I very much extol the virtues of the "Card" / "Rows" column in standard EXPLAIN PLAN output. Ensuring that Oracle gets a good estimation of the number of rows that a given step will process is crucial to obtaining an efficient plan. This extra information now gives us an excellent tool for tuning the SQL.

Note, however, it is important to realise what the "Starts" column means, and how it can cause the "A-Rows" and "E-Rows" columns to differ radically without there being a significant problem. The "Starts" column is the number of times that step in the plan was executed; "A-Rows" is the cumulative row count over all of those executions, whereas "E-Rows" is the estimate for a single execution of the step. A fairer comparison is therefore "E-Rows" against "A-Rows" divided by "Starts".

Now, let's try getting a random SQL statement from v$sql and getting its information. We do this by supplying the "sql_id" and "child_number" from v$sql as the parameters, instead of the NULLs used for the last statement executed by the current session:
SQL> select sql_id, child_number
  2  from v$sql
  3  where rownum = 1;

SQL_ID        CHILD_NUMBER
------------- ------------
1fkh93md0802n            0

SQL> SELECT *
  2  FROM TABLE(dbms_xplan.display_cursor('1fkh93md0802n', '0', 'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

SQL_ID  1fkh93md0802n, child number 0
-------------------------------------
select   LOW_OPTIMAL_SIZE,           HIGH_OPTIMAL_SIZE,
OPTIMAL_EXECUTIONS,           ONEPASS_EXECUTIONS,
MULTIPASSES_EXECUTIONS,           TOTAL_EXECUTIONS    from
GV$SQL_WORKAREA_HISTOGRAM    where  INST_ID = USERENV('Instance')

Plan hash value: 2485227045

---------------------------------------
| Id  | Operation        | Name       |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

---------------------------------------
|*  1 |  FIXED TABLE FULL| X$QESMMIWH |
---------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("INST_ID"=USERENV('INSTANCE'))

Note
-----

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

   - rule based optimizer used (consider using cbo)
   - Warning: basic plan statistics not available. These are only collected when :
       * hint 'gather_plan_statistics' is used for the statement or
       * parameter 'statistics_level' is set to 'ALL', at session or system level

27 rows selected.
Note, we get the warning at the end because a) the statement is a system query using the RBO (i.e. x$qesmmiwh has no statistics), and b) the gather_plan_statistics hint was not specified.

But, this should go to show just how amazingly useful and easy it is to get the actual plan for a SQL statement which is in v$sql.

For more information on EXPLAIN PLAN, see here (docs.oracle.com).

UTLXPLS.SQL

Prior to 9.2, you utilised a script (%ORACLE_HOME%/rdbms/admin/utlxpls.sql) which queried the PLAN_TABLE and generated a report.

The basic mechanism is the same as with DBMS_XPLAN, except that instead of querying the DBMS_XPLAN.DISPLAY function, you ran the utlxpls.sql script, which contains some quite complex SQL (utilising CONNECT BYs etc.) to format the output. Note that the utlxpls.sql script still exists, but it has been re-written to query DBMS_XPLAN, which is now the preferred mechanism.

Enabling and Processing SQL Trace Information

Obtaining Trace Files

SQL trace files can be obtained in a few ways, and all of the techniques described below create trace files in the operating system directory defined by certain Oracle initialisation parameters. Prior to 12c, all trace files were placed in the folder specified by the user_dump_dest parameter. As of 12c and the use of container databases (CDBs), user_dump_dest has been deprecated in favour of a folder derived from the diagnostic_dest parameter. The full path to the location of the trace files, and additional information such as the name of the current session's trace file, can be found by executing the following SQL:
SELECT value FROM v$diag_info WHERE name = 'Diag Trace';
The name of the current session trace file can be found similarly by running the following SQL:
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
From Oracle 12.2 onwards, additional useful data dictionary views are available, such as V$DIAG_TRACE_FILE and V$DIAG_TRACE_FILE_CONTENTS. These allow analysis of trace files without needing physical access to the database server operating system. The security of these views is therefore important, and access should be controlled via role-based privileges for any user other than SYS.
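As a rough sketch (assuming the TRACE_FILENAME, LINE_NUMBER and PAYLOAD columns of the 12.2 view, and a purely illustrative file name), a trace file could be read back over a normal database connection like this:
SELECT payload
FROM   v$diag_trace_file_contents
WHERE  trace_filename = 'ORALIVE_ora_12345.trc'
ORDER  BY line_number;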

Note, previous versions of Oracle had differing mechanisms for enabling SQL trace for a local or remote session. Performance analysts in the past were used to issuing statements of the following form(s):
SQL> ALTER SESSION SET SQL_TRACE=TRUE
or using DBMS_SESSION:
SQL> EXEC DBMS_SESSION.SET_SQL_TRACE(TRUE);
One of the most important things to remember is to enable TIMED_STATISTICS, which enables the timing of events in v$system_event. However, from 10g onwards, not only does TIMED_STATISTICS default to TRUE, but 10g also saw the advent of DBMS_MONITOR. This page will therefore only cover this latter mechanism, due to the pervasiveness of Oracle 10g and above.

Setting a Trace File Identifier

The naming convention of a user trace file is generally of the form <instance_name>_ora_<pid>.trc, e.g. ORALIVE_ora_12345.trc.

However, it is sometimes useful to change this, either to identify more easily which trace file is needed or to group a process into multiple trace files. This can be achieved with a statement of the following form:
SQL> ALTER SESSION SET TRACEFILE_IDENTIFIER="mine";
which will then create a tracefile called ORALIVE_ora_12345_mine.trc. This approach works with any mechanism for enabling SQL trace, e.g. DBMS_MONITOR, DBMS_SESSION etc.

DBMS_MONITOR

DBMS_MONITOR is a PL/SQL package introduced in Oracle 10g which contains procedures allowing you (amongst many other routines) to: trace a specific session (the SESSION_TRACE_ENABLE procedure); trace any session which matches specific criteria, namely the MODULE or ACTION attributes of V$SESSION, usually set via the DBMS_APPLICATION_INFO.SET_MODULE or SET_ACTION routines (the SERV_MOD_ACT_TRACE_ENABLE procedure); or trace clients by client identifier, i.e. the CLIENT_IDENTIFIER attribute of V$SESSION, usually set via the DBMS_SESSION.SET_IDENTIFIER routine (the CLIENT_ID_TRACE_ENABLE procedure).
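For the MODULE, ACTION and client identifier attributes to be of any use for tracing, the application has to set them in the first place. A minimal sketch (the module, action and client identifier values are just examples):
BEGIN
  -- sets MODULE and ACTION in V$SESSION for this session
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'PAYROLL',
                                   action_name => 'MONTH_END');
  -- sets CLIENT_IDENTIFIER in V$SESSION for this session
  DBMS_SESSION.SET_IDENTIFIER('fred.bloggs');
END;
/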

All of these routines allow waits and / or binds to be specified as simple BOOLEAN parameters, so event 10046 analysis via ALTER SESSION SET EVENTS or DBMS_SYSTEM is largely superseded and unnecessary, i.e.
SQL> EXEC SYS.DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => :sid, serial_num => :serial#, waits=>TRUE, binds=>TRUE);
Note, not specifying the :sid and :serial# parameters enables extended SQL trace for the current session:
SQL> EXEC SYS.DBMS_MONITOR.SESSION_TRACE_ENABLE(waits=>TRUE, binds=>TRUE);
When tracing any session which matches on the MODULE or ACTION attributes (usually set via the DBMS_APPLICATION_INFO built-in PL/SQL package), you use a command similar to the following. Note, the value of "service_name" is usually obtained either by looking at the service_name column in V$SESSION (if tracing a particular session) or by referencing the "service_names" initialisation parameter, i.e.
SQL> SHOW PARAMETER service_names
once the relevant service name has been identified, the following command will enable trace for any module in that service:
SQL> EXEC SYS.DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(service_name => :service, module_name => :module, waits=>TRUE, binds=>TRUE);
Tracing sessions which match on a particular client identifier is done with a command similar to:
SQL> EXEC SYS.DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(client_id => :clientid, waits=>TRUE, binds=>TRUE);
All these routines have the equivalent _DISABLE routines to turn this functionality off.
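For example, the corresponding calls to switch the above off would be (using the same bind variables as before):
SQL> EXEC SYS.DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => :sid, serial_num => :serial#);
SQL> EXEC SYS.DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(service_name => :service, module_name => :module);
SQL> EXEC SYS.DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => :clientid);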

Trace file analysis

In all cases, as mentioned at the start of this section, trace files are created in the operating system directory specified by the user_dump_dest initialisation parameter or, in 12c, derived from the diagnostic_dest parameter. In addition, from Oracle 12.2, V$DIAG_TRACE_FILE and V$DIAG_TRACE_FILE_CONTENTS can also be queried (subject to permissions) to obtain both the trace file details and their contents.

The trace file can be processed with tkprof, specifying waits=yes if extended SQL tracing of at least level 8 has occurred (i.e. wait events have been traced):
$ tkprof tracefile output=outputfile waits=yes
Importantly, tkprof aggregates the wait events in the trace file into a summary for each statement. This is useful when trying to identify the type of operation taking the longest, such as communicating via SQL*Net or doing index or table scans. However, it isn't appropriate if you need to view the values of bind variables, or to identify precise wait events, such as which table is being scanned or which particular wait, out of potentially many, is taking the time.
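A slightly fuller, hypothetical, invocation might sort the statements by elapsed execution time and suppress recursive SYS statements (the file names are just examples):
$ tkprof ORALIVE_ora_12345_mine.trc ORALIVE_ora_12345_mine.prf waits=yes sort=exeela sys=no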

In this situation, you have no choice but to analyse the "raw" trace file. The structure of a trace file is not difficult to understand, but it is often necessary to use custom scripts to process it when tkprof cannot show the results you need.