Wednesday, June 17, 2009

How to set trace for others sessions, for your own session and at instance level

Tools to analyse trace files
Up to and including Oracle 10g the tool generally used to analyse trace files is tkprof, which formats the raw trace files into a much more readable report. The raw trace file format seems daunting on first inspection; a good source of detail on it is Metalink note 39817.1. 10g introduces an additional tool called trcsess, designed to work with the new trace facilities that allow trace to be identified by client identifier or by a combination of service name / module / action: it consolidates the relevant trace data from many trace files into a single file for formatting. This allows tracing to be completed even when connection pooling and multi-threading are used, where an individual client can be spread across many different sessions.
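As a quick, hedged illustration of the formatting step (the trace file names here are invented for the example), a tkprof run from the operating system prompt might look like this:

tkprof sans_ora_3740.trc sans_ora_3740.txt sys=no sort=exeela

The sys=no option suppresses recursive SQL run as SYS and sort=exeela orders statements by elapsed execution time. An example of trcsess is shown later, in the section on service / module / action tracing.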
Find out where the trace file will be written to
If the user you are logged in as is not a DBA, or more specifically has not been granted access to the view V$PARAMETER, then you will need to use this technique to find out where your trace files are written: SQL> set serveroutput on size 1000000 for wra
SQL> declare
2 paramname varchar2(256);
3 integerval binary_integer;
4 stringval varchar2(256);
5 paramtype binary_integer;
6 begin
7 paramtype:=dbms_utility.get_parameter_value('user_dump_dest',integerval,stringval);
8 if paramtype=1 then
9 dbms_output.put_line(stringval);
10 else
11 dbms_output.put_line(integerval);
12 end if;
13 end;
14 /
C:\oracle\admin\sans\udump

PL/SQL procedure successfully completed.

SQL>

If the user you are using does have access to V$PARAMETER then you can do the following instead. SQL> select name,value
2 from v$parameter
3 where name='user_dump_dest';

NAME
----------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
user_dump_dest
C:\oracle\admin\sans\udump


SQL>

Making trace files available
There is an undocumented parameter, _trace_files_public, which if set to true changes the file permissions in the user_dump_dest directory when trace files are created so that everyone can read them. Beware: because this is an undocumented parameter, and because some of the information in trace files can be useful to hackers or malicious users, it should not be routinely set to true. The SQL to check its current value is shown further below. You can set this parameter by adding the following line to the init.ora file: # allow trace files to be created with public permissions
_trace_files_public=true
# disable this feature:
#_trace_files_public=true
# or =>
_trace_files_public=false

Here is the SQL to check the value of this parameter: SQL> select x.ksppinm name,y.ksppstvl value
2 from sys.x$ksppi x,sys.x$ksppcv y
3 where x.inst_id=userenv('Instance')
4 and y.inst_id=userenv('Instance')
5 and x.indx=y.indx
6 and x.ksppinm='_trace_files_public';

NAME
----------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
_trace_files_public
FALSE


SQL>

Let's start with some examples of how to set trace for another session that is connected to the database.
Now find the SID and SERIAL# of the other session
We are using a simple example: the session we are looking for belongs to the user SCOTT, and we are logged in AS SYSDBA. We need to be logged in as SYS or AS SYSDBA so that we can access the packages DBMS_SUPPORT and DBMS_SYSTEM needed to set trace in another session or in our own session. As with the earlier example about access to V$PARAMETER, a user with access to the views V$SESSION and V$PROCESS is needed. First let's find the SID and SERIAL#: SQL> connect system/manager@sans as sysdba
Connected.
SQL> col sid for 999999
SQL> col serial# for 999999
SQL> col username for a20
SQL> col osuser for a20
SQL> select s.sid,s.serial#,s.username,s.osuser
2 from v$session s,v$process p
3 where s.paddr=p.addr;

SID SERIAL# USERNAME OSUSER
------- ------- -------------------- --------------------
1 1 SYSTEM
2 1 SYSTEM
3 1 SYSTEM
4 1 SYSTEM
5 1 SYSTEM
6 1 SYSTEM
7 1 SYSTEM
8 1 SYSTEM
9 253 SYSTEM ZULIA\pete
10 20 SCOTT ZULIA\pete

10 rows selected.

SQL>

Great, the SID and SERIAL# that we need are 10 and 20.
A word about trace levels
Before we use the DBMS_SYSTEM package to set trace in SCOTT's session we need to discuss what trace levels are. Setting trace in fact sets an event in the Oracle kernel. (What is an event? An event is simply a flag to the Oracle kernel to tell it to emit some trace messages, to add some additional processing, or to activate some new functionality. Some events are used by support analysts and developers to force certain conditions to occur for testing purposes.) In our case we are interested in event number 10046, which tells the Oracle kernel to emit trace lines and timings. The levels available in Oracle through some of the interfaces used to set trace are:
Level 0 = No statistics generated
Level 1 = Standard trace output including parses, executes and fetches, plus more
Level 2 = Same as level 1
Level 4 = Same as level 1 but includes bind information
Level 8 = Same as level 1 but includes wait information
Level 12 = Same as level 1 but includes binds and waits
For a complete list of events that can be set look at the file $ORACLE_HOME/rdbms/mesg/oraus.msg on Unix or Linux; this file is not shipped on Windows systems. Also, setting any event other than trace (10046) should not be done without the guidance of Oracle support.
Set trace in another session using DBMS_SYSTEM
First let's set trace in SCOTT's session using the DBMS_SYSTEM package. Before we do, let's turn on timed statistics so that the trace file includes timing information, and also set the dump file size so that there is plenty of room for the trace being generated. SQL> exec dbms_system.set_bool_param_in_session(10,20,'timed_statistics',true);

PL/SQL procedure successfully completed.

SQL> exec dbms_system.set_int_param_in_session(10,20,'max_dump_file_size',2147483647);

PL/SQL procedure successfully completed.


OK, here we set trace in SCOTT's session: SQL> -- now use the standard dbms_system interface
SQL> exec dbms_system.set_sql_trace_in_session(10,20,true);

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> exec dbms_system.set_sql_trace_in_session(10,20,false);

PL/SQL procedure successfully completed.

SQL>

A second way to set trace in another session - This time setting trace level as well
Next we can again use the DBMS_SYSTEM interface but this time use the set event syntax. This allows us to set any event in the database. This is of course not sanctioned by Oracle support and can cause damage to your database if not done correctly. Use this interface with care and just set 10046 (trace) events. Here is how it is done: SQL> exec dbms_system.set_ev(10,20,10046,8,'');

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> exec dbms_system.set_ev(10,20,10046,0,'');

PL/SQL procedure successfully completed.

Installing the DBMS_SUPPORT package
In the example above we set trace to level 8; you can of course set it to any level you wish from the list discussed above. Next we will use the DBMS_SUPPORT package to set trace. This package is not installed by default and is in fact undocumented; on some platforms and versions it is not even shipped, and you will need to talk to Oracle support and get it from Metalink. First we will install the package: SQL> -- now do the same with dbms_support
SQL> -- the package has to be installed first - you should ask Oracle first though!
SQL> @%ORACLE_HOME%\rdbms\admin\dbmssupp.sql

Package created.


Package body created.

SQL>

Use DBMS_SUPPORT to set trace in another user's session
Next use this interface to again set trace for SCOTT's session, which we identified earlier. Here it is: SQL> exec dbms_support.start_trace_in_session(10,20,waits=>true,binds=>false);

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> exec dbms_support.stop_trace_in_session(10,20);

PL/SQL procedure successfully completed.

SQL>

Use DBMS_SUPPORT to set trace in your own session
OK, that's how to set trace in SCOTT's session. How do we set trace in our own session? First, we can use any of the approaches seen above and pass in the SID and SERIAL# of our own session. There are also methods dedicated to setting trace in your own session. The first again uses the DBMS_SUPPORT package. Here it is: SQL> exec dbms_support.start_trace(waits=>true,binds=>false);

PL/SQL procedure successfully completed.

SQL> -- run some code
SQL> exec dbms_support.stop_trace;

PL/SQL procedure successfully completed.

SQL>

Use DBMS_SESSION to set trace in your own session
The next method for setting trace in our own session also uses a built-in package, this time DBMS_SESSION. Here it is: SQL> -- in your own session using dbms_session
SQL> exec dbms_session.set_sql_trace(true);

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> exec dbms_session.set_sql_trace(false);

PL/SQL procedure successfully completed.

SQL>

Using oradebug to set trace through SQL*Plus
oradebug is an essentially undocumented debugging utility intended for use by Oracle support analysts for various tasks, one of which is setting trace. oradebug is available from svrmgrl before Oracle 9i and from SQL*Plus in 9i and later. The first step in using this tool is to find the OS PID or the Oracle PID of the process you want to trace. You can do this as follows: SQL> connect system/manager@sans as sysdba
Connected.
SQL> col sid for 999999
SQL> col serial# for 999999
SQL> col spid for a8
SQL> col username for a20
SQL> col osuser for a20
1 select s.sid,s.serial#,p.spid,p.pid,s.username,s.osuser
2 from v$session s,v$process p
3* where s.paddr=p.addr
SQL> /

SID SERIAL# SPID PID USERNAME OSUSER
------- ------- -------- ---------- -------------------- --------------------
1 1 2528 2 SYSTEM
2 1 2536 3 SYSTEM
3 1 2540 4 SYSTEM
4 1 2544 5 SYSTEM
5 1 2552 6 SYSTEM
6 1 2604 7 SYSTEM
7 1 2612 8 SYSTEM
8 1 2652 9 SYSTEM
10 343 3740 12 SYS ZULIA\pete
12 70 864 13 SCOTT ZULIA\pete

10 rows selected.


Now that we have found the operating system PID and Oracle PID (values 864 and 13 in this case) of SCOTT's session we can use these to set trace with the oradebug tool as follows: SQL> -- set the OS PID
SQL> oradebug setospid 864
Windows thread id: 864, image: ORACLE.EXE
SQL> -- or set the Oracle pid
SQL> oradebug setorapid 13
Windows thread id: 864, image: ORACLE.EXE
SQL> -- set the trace file size to unlimited
SQL> oradebug unlimit
Statement processed.
SQL> -- now turn on trace for SCOTT
SQL> oradebug event 10046 trace name context forever, level 12
Statement processed.
SQL> -- run some queries in another session and then turn trace off
SQL> oradebug event 10046 trace name context off
Statement processed.

Some things to be aware of
You should be aware that some of these methods allow extended trace to be set and some do not. Those that allow extended trace are easy to spot: they either let you set the trace level explicitly or take suitably named parameters such as waits and binds which enable the extended trace facilities. Some trace methods have a fixed default level, such as set sql_trace=true which sets trace to level 8; the rest set trace to the normal trace level.
One other point to note: so far we have looked at ways to set trace in another session and at ways of setting trace in your own session. There is a third option, which is to set trace for the whole system (i.e. for all users' sessions). This is not recommended unless you know what you are doing and are monitoring the trace, as you can quickly fill the file system.
Setting trace at the instance level using the init.ora
Trace can be set in the database initialization file, the init.ora. If you use an spfile you can still edit an init.ora and then recreate the spfile from it. Simply add the following line to the init.ora file: sql_trace=true

You can also set timed_statistics and max_dump_file_size in the init.ora file in the same way, i.e. timed_statistics=true
max_dump_file_size=unlimited

Trace can also be disabled at the instance level by simply commenting out the same parameter or by deleting it. A commented line is shown next: #sql_trace=true

Or you can set the same parameter to false: sql_trace=false

A second instance level method - setting events
Another method that can be used to set trace at the instance level is to add an event (or multiple events) to the initialization file, the init.ora, as described above. Again, if you use an spfile you can either recreate it from the init.ora or use ALTER SYSTEM to set the value in the spfile. Here is an example of setting the trace event 10046 to level 12 in the initialization file: # set the event in the init.ora
event = "10046 trace name context forever, level 12"
# to turn off the event simply comment out the line as follows:
# event = "10046 trace name context forever, level 12"

Using ALTER SESSION to set trace in your own session
The alter session command can be used to set trace for the current session as follows: SQL> alter session set sql_trace=true;

Session altered.

SQL> -- execute some code
SQL> alter session set sql_trace=false;

Session altered.

SQL>

This method can also be used to set timing and dump file size for the current session as follows: SQL> alter session set timed_statistics=true;

Session altered.

SQL> alter session set max_dump_file_size=unlimited;

Session altered.

SQL>

Using ALTER SESSION to set extended trace using events
One last method I want to demonstrate is the alter session syntax for setting events. Again, stick to 10046 (trace) and do not attempt to set any of the other available events on a supported system without Oracle's say-so. Here is an example of setting trace to level 12, including binds and waits: SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> -- execute some code
SQL> alter session set events '10046 trace name context off';

Session altered.

SQL>

A sample logon trigger to set trace
Quite often you would like trace to be set for a session as soon as the user logs on. Also you may want to be able to set trace for a specific set of users when they log in. This can easily be done with a database logon trigger. Here is a sample trigger. Connected to:
Personal Oracle9i Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> create or replace trigger set_trace after logon on database
2 begin
3 if user not in ('SYS','SYSTEM') then
4 execute immediate 'alter session set timed_statistics=true';
5 execute immediate 'alter session set max_dump_file_size=unlimited';
6 execute immediate 'alter session set sql_trace=true';
7 end if;
8 exception
9 when others then
10 null;
11 end;
12 /

Trigger created.

SQL> sho errors
No errors.
SQL>

OK, that was easy. You can also use the alter session set events '10046 trace name context forever, level 12' syntax within the trigger if you prefer, and you can add other checks using any valid PL/SQL logic. One tip: if you have trouble with a system trigger and it causes logins to fail, always include, as I have, an exception handler that simply calls null for any error condition. If all else fails you can disable system triggers by setting the parameter _system_trig_enabled=false in the initialisation file; this undocumented / hidden parameter stops the processing of system triggers such as logon triggers.
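Quite often you only want trace for a specific set of users, as mentioned above. A minimal sketch of that idea follows; the trigger name and user list are invented for this example, so adjust them to suit:

create or replace trigger set_trace_some_users after logon on database
begin
  -- only trace the specific accounts we are investigating (example names)
  if user in ('SCOTT','APPUSER1') then
    execute immediate 'alter session set timed_statistics=true';
    execute immediate 'alter session set max_dump_file_size=unlimited';
    execute immediate 'alter session set sql_trace=true';
  end if;
exception
  when others then
    null; -- never block logons because of a problem in the trace trigger
end;
/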
Using ALTER SYSTEM to set trace at the instance level
Finally you can also use the alter system syntax to set trace at the system level. Here is a simple example: SQL> alter system set sql_trace=true scope=spfile;

System altered.

SQL>
SQL> -- to turn it off again do:
SQL> alter system set sql_trace=false scope=spfile;

System altered.

SQL>

Checking the privileges of the packages used to set trace
Some of the packages used in these examples have to be run as SYS, or you need to be logged in AS SYSDBA, or specific privileges need to be granted on those packages to the user that will run them. The default privileges for DBMS_SYSTEM, DBMS_SUPPORT and DBMS_SESSION are shown next in output from who_can_access.sql (a script that shows, hierarchically, who has privileges on an object whose name is passed in). Here they are: -- check who has access to dbms_system
who_can_access: Release 1.0.0.0.0 - Production on Fri Feb 27 12:53:24 2004
Copyright (c) 2004 PeteFinnigan.com Limited. All rights reserved.

get user input

NAME OF OBJECT TO CHECK [USER_OBJECTS]: dbms_system
OWNER OF THE OBJECT TO CHECK [USER]: sys
OUTPUT METHOD Screen/File [S]:
FILE NAME FOR OUTPUT [priv.lst]:
OUTPUT DIRECTORY [/tmp]:

Checking object => SYS.DBMS_SYSTEM
====================================================================


Object type is => PACKAGE (TAB)
Privilege => EXECUTE is granted to =>
Role => OEM_MONITOR which is granted to =>
User => SYS

PL/SQL procedure successfully completed.


For updates please visit http://www.petefinnigan.com/tools.htm

SQL>

-- check who has access to dbms_support
who_can_access: Release 1.0.0.0.0 - Production on Fri Feb 27 12:54:29 2004
Copyright (c) 2004 PeteFinnigan.com Limited. All rights reserved.

get user input

NAME OF OBJECT TO CHECK [USER_OBJECTS]: dbms_support
OWNER OF THE OBJECT TO CHECK [USER]: sys
OUTPUT METHOD Screen/File [S]:
FILE NAME FOR OUTPUT [priv.lst]:
OUTPUT DIRECTORY [/tmp]:

Checking object => SYS.DBMS_SUPPORT
====================================================================



PL/SQL procedure successfully completed.


For updates please visit http://www.petefinnigan.com/tools.htm

SQL>

-- check who has access to dbms_session
who_can_access: Release 1.0.0.0.0 - Production on Fri Feb 27 12:55:31 2004
Copyright (c) 2004 PeteFinnigan.com Limited. All rights reserved.

get user input

NAME OF OBJECT TO CHECK [USER_OBJECTS]: dbms_session
OWNER OF THE OBJECT TO CHECK [USER]: sys
OUTPUT METHOD Screen/File [S]:
FILE NAME FOR OUTPUT [priv.lst]:
OUTPUT DIRECTORY [/tmp]:

Checking object => SYS.DBMS_SESSION
====================================================================


Object type is => PACKAGE (TAB)
Privilege => EXECUTE is granted to =>
Role => PUBLIC

PL/SQL procedure successfully completed.


For updates please visit http://www.petefinnigan.com/tools.htm

SQL>

That's it: there are many ways to set trace in your own session, in other sessions and at the system level, and many ways to enable extended trace. Beware of the privileges needed to run some of them, and beware of setting events explicitly.
New tracing methods in Oracle 10g - DBMS_MONITOR
Oracle 10g offers a new package, DBMS_MONITOR, to allow sessions to be traced end to end in multi-tier architectures that share sessions using connection pooling or multi-threading. It allows applications written with, for instance, JDBC / Java or tools such as Forte to be traced in situations where it would normally be very difficult to identify the database session belonging to a client, because the client / session pairings change over time.
The new functionality works at three levels. You can use the old SID / SERIAL# pairing to identify a session, but you can also use a client identifier or a service name / module / action combination to identify the client sessions to be traced. The package also offers a set of procedures that gather statistics for the same groupings; these statistics can then be selected from dynamic performance views.
Let's now take a look at some of the features of this package.
Setting trace with DBMS_MONITOR using SID / SERIAL#
Trace can be set for another user's session, at the session level, or for the current session. First let's look at tracing another user's session. We need to get the SID and SERIAL# - we will use SCOTT connected through SQL*Plus as our sample session: SQL> select s.sid,s.serial#,s.username
2 from v$session s, v$process p
3 where s.paddr=p.addr
SQL> /
...
SID SERIAL# USERNAME
---------- ---------- ------------------------------
248 153 SCOTT
258 61 DBSNMP
251 418 SYSMAN
255 961 SYS
249 215
27 rows selected.
SQL>

OK, as with the previous methods we can use a SID / SERIAL# pair of 248 and 153. Let's set trace for this user's session: SQL> exec dbms_monitor.session_trace_enable(248,153,TRUE,FALSE);
PL/SQL procedure successfully completed.
SQL> -- execute some sql
SQL> -- in the other session
SQL> -- turn trace off
SQL> exec dbms_monitor.session_trace_disable(248,153);
PL/SQL procedure successfully completed.
SQL>

Setting trace at the session level using DBMS_MONITOR
The same procedures can be used to set trace at the session level by passing just the SID and omitting the SERIAL#. This is demonstrated next: SQL> exec dbms_monitor.session_trace_enable(248);
PL/SQL procedure successfully completed.
SQL> -- execute some sql in the other session
SQL> -- turn off trace
SQL> exec dbms_monitor.session_trace_disable(248);
PL/SQL procedure successfully completed.
SQL> -- or you can turn it on with
SQL> exec dbms_monitor.session_trace_enable(248,null);
PL/SQL procedure successfully completed.
SQL> -- turn off again with:
SQL> exec dbms_monitor.session_trace_disable(248,null);
PL/SQL procedure successfully completed.
SQL>

Setting trace for the current session using DBMS_MONITOR
Setting trace for the current session is done by setting the SID and SERIAL# to NULL, or by leaving them out altogether. Here is an example: SQL> -- trace the current session
SQL> exec dbms_monitor.session_trace_enable(null,null);
PL/SQL procedure successfully completed.
SQL> -- execute some code
SQL> -- turn it off again
SQL> exec dbms_monitor.session_trace_disable(null,null);
PL/SQL procedure successfully completed.
SQL> -- to get waits and binds do
SQL> exec dbms_monitor.session_trace_enable(null,null,true,true);
PL/SQL procedure successfully completed.
SQL> -- execute some code
SQL> -- then turn off trace
SQL> exec dbms_monitor.session_trace_disable(null,null);
PL/SQL procedure successfully completed.
SQL> -- or turn it on like this
SQL> exec dbms_monitor.session_trace_enable();
PL/SQL procedure successfully completed.
SQL> -- execute some SQL and then turn off trace
SQL> exec dbms_monitor.session_trace_disable();
PL/SQL procedure successfully completed.
SQL>

That covers some of the ways to use DBMS_MONITOR to set trace using a SID / SERIAL# pair, at the session level, or for the current session.
Set trace using a client identifier
Tracing by client identifier allows trace to be set across multiple sessions, as many Oracle shadow processes can work on behalf of one client. The trace setting is also persistent across all instances and across restarts. First we need to see how the client identifier is set. This can be done using the DBMS_SESSION package as follows: SQL> exec dbms_session.set_identifier('pete id');
PL/SQL procedure successfully completed.
SQL>

We can now check for a specific identifier using the CLIENT_IDENTIFIER column of the V$SESSION view. SQL> select s.username,s.client_identifier
2 from v$session s,v$process p
3 where s.paddr=p.addr
4 and client_identifier is not null;
USERNAME
------------------------------
CLIENT_IDENTIFIER
----------------------------------------------------------------
SCOTT
pete id

SQL>

OK, now we can use this information to set trace for this client identifier as follows: SQL> exec dbms_monitor.client_id_trace_enable('pete id',true,false);
PL/SQL procedure successfully completed.
SQL> -- wait for the client session to do something
SQL> -- turn off trace as follows:
SQL> exec dbms_monitor.client_id_trace_disable('pete id');
PL/SQL procedure successfully completed.
SQL>

That was quite easy! Next let's look at setting trace at the service / module / action level.
Setting trace for service/module/action with DBMS_MONITOR
This method of setting trace acts hierarchically. By default trace is set globally, for the whole database (all instances); you can override this by passing an instance name in the call that turns trace on. For this example, as I am on a single instance database, I will leave this parameter at its default. There are three levels to the hierarchy: if we set ACTION to NULL then all actions for the given module and service are traced, and if we also set MODULE to NULL then all actions for all modules under the specified service name are traced. The trace will be collected into multiple trace files, and the new trcsess tool must be used to collate them into one usable file.
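A minimal, hedged sketch of that collation step, assuming the trace files are in the user_dump_dest directory found earlier and using the SANS / ACCOUNTS / PAYMENT names from the examples below, might look like this at the operating system prompt:

cd C:\oracle\admin\sans\udump
trcsess output=payment.trc service=SANS module=ACCOUNTS action=PAYMENT *.trc
tkprof payment.trc payment.txt sys=no

The combined payment.trc file is then formatted with tkprof in the usual way.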
The service name can be set using the package DBMS_SERVICE and the procedure CREATE_SERVICE. Here is an example: SQL> exec dbms_service.create_service('Test Service','test network');
PL/SQL procedure successfully completed.
SQL> -- it can be deleted with
SQL> exec dbms_service.delete_service('Test Service');
PL/SQL procedure successfully completed.
SQL>

The service name will quite often already have been set by the tool or application; it can be used to group together a set of programs / modules that perform some business task. Next let's see how the module and action can be set. SQL> -- set action
SQL> exec dbms_application_info.set_action('PAYMENT');
PL/SQL procedure successfully completed.
SQL> -- set the module
SQL> exec dbms_application_info.set_module('ACCOUNTS','PAYMENT');
PL/SQL procedure successfully completed.
SQL>

To view the relevant service names, modules and actions for sessions in the database you can use the V$SESSION view as follows: SQL> col service_name for a15 wrapped
SQL> col username for a15 wrapped
SQL> col module for a15 wrapped
SQL> col action for a15 wrapped
SQL> select s.username,s.service_name,s.module,s.action
2 from v$session s,v$process p
3 where s.paddr=p.addr;
...
USERNAME SERVICE_NAME MODULE ACTION
--------------- --------------- --------------- ---------------
SYSMAN SANS
SYSMAN SANS OEM.SystemPool
DBSNMP SYS$USERS emagent@emil (T
NS V1-V3)

DBSNMP SYS$USERS emagent@emil (T
NS V1-V3)

SYS$USERS
SYS SANS ACCOUNTS PAYMENT
SCOTT SANS SQL*Plus
...
29 rows selected.
SQL>

As we deleted the sample service name we set up with DBMS_SERVICE.CREATE_SERVICE, we will just use the service name SANS that Oracle created by default in our test database. Let's test some of the methods of setting trace with this functionality. SQL> -- set trace for all modules and actions for SANS service name
SQL> exec dbms_monitor.serv_mod_act_trace_enable('SANS',DBMS_MONITOR.ALL_MODULES,DBMS_MONITOR.ALL_ACTIONS,TRUE,FALSE,NULL);
PL/SQL procedure successfully completed.
SQL> -- turn it off
SQL> exec dbms_monitor.serv_mod_act_trace_disable('SANS');
PL/SQL procedure successfully completed.
SQL> -- now trace all actions for service SANS and module accounts
SQL> exec dbms_monitor.serv_mod_act_trace_enable('SANS','ACCOUNTS',DBMS_MONITOR.ALL_ACTIONS,TRUE,FALSE,NULL);
PL/SQL procedure successfully completed.
SQL> -- now turn it off
SQL> exec dbms_monitor.serv_mod_act_trace_disable('SANS','ACCOUNTS');
PL/SQL procedure successfully completed.
SQL> -- finally test service SANS, module ACCOUNTS and action PAYMENT
SQL> exec dbms_monitor.serv_mod_act_trace_enable('SANS','ACCOUNTS','PAYMENT',TRUE,FALSE,NULL);
PL/SQL procedure successfully completed.
SQL> -- turn it off
SQL> exec dbms_monitor.serv_mod_act_trace_disable('SANS','ACCOUNTS','PAYMENT');
PL/SQL procedure successfully completed.
SQL> -- you can turn on or off binds and waits as well or use the waits=>true
SQL> -- syntax instead.
SQL>

OK, that wraps up the new 10g procedures that can be used to turn on trace in different ways and capture true end to end trace for multi-tier applications. You should also be aware that DBMS_MONITOR provides procedures to enable statistics gathering at the same levels - client identifier and service name / module / action. These statistics are stored and can then be accessed by selecting from the V$CLIENT_STATS and V$SERV_MOD_ACT_STATS views. I will not detail those procedures here as this short paper concentrates on trace only.
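As a brief, hedged illustration only (reusing the 'pete id' and SANS / ACCOUNTS names from above), the statistics gathering could be switched on, queried and switched off along these lines:

SQL> exec dbms_monitor.client_id_stat_enable('pete id');
SQL> exec dbms_monitor.serv_mod_act_stat_enable('SANS','ACCOUNTS');
SQL> select stat_name, value from v$client_stats where client_identifier='pete id';
SQL> exec dbms_monitor.client_id_stat_disable('pete id');
SQL> exec dbms_monitor.serv_mod_act_stat_disable('SANS','ACCOUNTS');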
One last idea - use AUTOTRACE in SQL*Plus
OK, one final way to set and get trace is to use the SQL*Plus AUTOTRACE facility. There are a few settings that you can use, as follows (a short example is shown after the list):
set autotrace off - The default - no output
set autotrace on explain - This shows only the optimizer path
set autotrace on statistics - This only shows SQL statistics
set autotrace on - Includes both of the above
set autotrace traceonly - As above but the query output is not displayed
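A quick, hedged example of using it (AUTOTRACE needs a PLAN_TABLE and, for the statistics output, the PLUSTRACE role created by the plustrce.sql script; the query here is just an illustration):

SQL> set autotrace traceonly explain
SQL> select owner, count(*) from all_objects group by owner;
SQL> -- the execution plan is displayed instead of the query results
SQL> set autotrace off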
One more final item - CBO trace 10053
One other event that you might like to experiment with is the 10053 event. This event traces the Cost Based Optimizer (CBO): it shows all of the plans the optimizer tried, the costs it assigned to them in its search for the lowest cost, and how it came to its decision. The 10053 event has two levels, 1 and 2; more detail is emitted at level 1 than at level 2. The output is again sent to a trace file in the directory specified by user_dump_dest. The trace is only generated if the SQL is hard parsed and, obviously, only if it uses the CBO. To get a trace file you can use any of the methods above that allow the event number to be specified. An example is: SQL> alter session set events '10053 trace name context forever, level 1';
Session altered.
SQL> -- execute some SQL to create a CBO trace.
SQL> -- turn CBO trace off
SQL> alter session set events '10053 trace name context off';
Session altered.
SQL>

Tuesday, June 9, 2009

materialized view in Oracle

A materialized view is a database object that contains the results of a query. Materialized views are either local copies of data located remotely, or summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables are also known as snapshots.

A materialized view can query tables, views, and other materialized views. Collectively these are called master tables (a replication term) or detail tables (a data warehouse term).

For replication purposes, materialized views allow you to maintain copies of remote data on your local node. These copies are read-only. If you want to update the local copies, you have to use the Advanced Replication feature. You can select data from a materialized view as you would from a table or view.

For data warehousing purposes, the materialized views commonly created are aggregate views, single-table aggregate views, and join views.

In this article, we shall see how to create a Materialized View and discuss Refresh Option of the view.

In replication environments, the materialized views commonly created are primary key, rowid, and subquery materialized views.

Primary Key Materialized Views

The following statement creates the primary-key materialized view on the table emp located on a remote database.

SQL> CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST START WITH SYSDATE
NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;

Materialized view created.

Note: When you create a materialized view using the FAST refresh option you will need to create a materialized view log on the master table(s) as shown below:

SQL> CREATE MATERIALIZED VIEW LOG ON emp;
Materialized view log created.

Rowid Materialized Views

The following statement creates the rowid materialized view on table emp located on a remote database:

SQL>  CREATE MATERIALIZED VIEW mv_emp_rowid
REFRESH WITH ROWID
AS SELECT * FROM emp@remote_db;

Materialized view created.

Subquery Materialized Views

The following statement creates a subquery materialized view based on the emp and dept tables located on the remote database:

SQL> CREATE MATERIALIZED VIEW  mv_empdept
AS SELECT * FROM emp@remote_db e
WHERE EXISTS
(SELECT * FROM dept@remote_db d
WHERE e.dept_no = d.dept_no);

REFRESH CLAUSE

[refresh [fast|complete|force]
[on demand | commit]
[start with date] [next date]
[with {primary key|rowid}]]

The refresh option specifies:

  1. The refresh method used by Oracle to refresh data in the materialized view
  2. Whether the view is primary key based or rowid based
  3. The time and interval at which the view is to be refreshed

Refresh Method - FAST Clause

FAST refreshes use the materialized view logs (as seen above) to send only the rows that have changed on the master tables to the materialized view.

You should create a materialized view log for the master tables if you specify the REFRESH FAST clause.

SQL> CREATE MATERIALIZED VIEW LOG ON emp;

Materialized view log created.

Materialized views are not eligible for fast refresh if the defining subquery contains an analytic function.

Refresh Method - COMPLETE Clause

The complete refresh re-creates the entire materialized view. If you request a complete refresh, Oracle performs a complete refresh even if a fast refresh is possible.

Refresh Method - FORCE Clause

When you specify a FORCE clause, Oracle will perform a fast refresh if one is possible, or a complete refresh otherwise. If you do not specify a refresh method (FAST, COMPLETE, or FORCE), FORCE is the default.

PRIMARY KEY and ROWID Clause

WITH PRIMARY KEY is used to create a primary key materialized view, i.e. the materialized view is based on the primary key of the master table instead of ROWID (as with the ROWID clause). PRIMARY KEY is the default option. To use the PRIMARY KEY clause you should have defined a PRIMARY KEY on the master table; otherwise you should use ROWID based materialized views.

Primary key materialized views allow materialized view master tables to be reorganized without affecting the eligibility of the materialized view for fast refresh.

Rowid materialized views should have a single master table and cannot contain any of the following:

  • Distinct or aggregate functions
  • GROUP BY, subqueries, joins and set operations

Timing the refresh

The START WITH clause tells the database when to perform the first replication from the master table to the local base table. It should evaluate to a future point in time. The NEXT clause specifies the interval between refreshes.

SQL>  CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST
START WITH SYSDATE
NEXT SYSDATE + 2
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;

Materialized view created.

In the above example, the first copy of the materialized view is made at SYSDATE and the refresh is performed every two days thereafter.

Summary

Materialized views thus offer the flexibility of basing a view on a primary key or on ROWID, of specifying the refresh method, and of specifying the time of automatic refreshes.



ora-01422: exact fetch returns more than requested number of rows

Oracle Documentation says:

“If possible, each table for which changes are applied by an apply process should have a primary key. When a primary key is not possible, Oracle recommends that each table have a set of columns that can be used as a unique identifier for each row of the table. If the tables that you plan to use in your Oracle Streams environment do not have a primary key or a set of unique columns, then consider altering these tables accordingly.”

And then it says:

“In the absence of substitute key columns, primary key constraints, and unique key constraints, an apply process uses all of the columns in the table as the key columns, excluding LOB, LONG, and LONG RAW columns. In this case, you must create an unconditional supplemental log group containing these columns at the source database. Using substitute key columns is preferable when there is no primary key constraint for a table because fewer columns are needed in the row LCR.”
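As a hedged sketch of the two options mentioned in those quotes (the HR schema, table and column names are used purely for illustration), a substitute key can be declared at the destination with DBMS_APPLY_ADM.SET_KEY_COLUMNS, and an unconditional supplemental log group can be added at the source:

-- at the destination database: tell the apply process which columns identify a row
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'employee_id,email');
END;
/

-- at the source database: log those columns unconditionally for every change
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP log_group_emp_keys (employee_id, email) ALWAYS;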

Materialized views in Oracle


A materialized view is a stored summary containing precomputed results (originating from a SQL select statement).
As the data is precomputed, materialized views allow for (seemingly) faster data warehouse query answers.
Types of materialized views
There are three types of materialized views:

* Read only materialized view
* Updateable materialized view
* Writeable materialized view

Read only materialized views
Advantages:

* There is no possibility for conflicts as they cannot be updated.
* Complex materialized views are supported

Updateable materialized views
Advantages:

* Can be updated even when disconnected from the master site or master materialized view site.
* Requires fewer resources than multimaster replication.
* Are refreshed on demand. Hence the load on the network might be reduced compared to multimaster replication, because multimaster replication synchronises changes at regular intervals.

Updateable materialized views require the Advanced Replication option to be installed.
Writeable materialized views
Writeable materialized views are created with the for update clause but are not then added to a materialized view group. In such a case the materialized view is updatable, but the changes are lost when the materialized view refreshes.
Writeable materialized views require the Advanced Replication option to be installed.
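A minimal sketch of such a writeable materialized view, reusing the emp@remote_db master table from the earlier examples (the empno value is just the familiar SCOTT.EMP row, used for illustration):

CREATE MATERIALIZED VIEW mv_emp_writeable
  FOR UPDATE
  AS SELECT * FROM emp@remote_db;

-- local updates are allowed, but they are overwritten at the next refresh
UPDATE mv_emp_writeable SET ename = 'TEST' WHERE empno = 7369;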
Query rewrite
... yet to be finished ..
The query rewrite facility is totally transparent to an application, which need not be aware of the existence of the underlying materialized view.
Refreshing process
Refreshing a materialized view
Refreshing a materialized view synchronizes it with its master table.
Oracle performs the following operations when refreshing a materialized view. In the case of a complete refresh (using dbms_mview.refresh, a sample call is shown after the two lists below):

1. sys.snap$ and sys.mlog$ are updated to reflect the time of the refresh.
2. The materialized base view is truncated.
3. All rows selected from the master table are inserted into the snapshot base table.
4. sys.slog$ is updated to reflect the time of the refresh.

In the case of a fast refresh, the steps are:

1. sys.snap$ and sys.mlog$ are updated to reflect the time of the refresh.
2. Rows in the materialized base view are deleted.
3. All rows selected from the master table are inserted into the snapshot base table.
4. sys.slog$ is updated to reflect the time of the refresh.
5. Rows that are no longer needed for a refresh by any materialized view are deleted from the materialized view log (MLOG$_table)
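A hedged sketch of triggering these refreshes manually with DBMS_MVIEW.REFRESH, using the mv_emp_pk view created earlier ('C' requests a complete refresh, 'F' a fast refresh and '?' means force):

SQL> exec dbms_mview.refresh('MV_EMP_PK','C');  -- complete refresh
SQL> exec dbms_mview.refresh('MV_EMP_PK','F');  -- fast refresh (needs a materialized view log)
SQL> exec dbms_mview.refresh('MV_EMP_PK','?');  -- force: fast if possible, otherwise complete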

Whether a materialized view is currently being refreshed can be checked by querying v$lock: if a lock of type JI is held, a refresh is being performed.
The following query checks for this:

select
  o.owner       "Owner",
  o.object_name "Mat View",
  s.username    "Username",
  s.sid         "Sid"
from
  v$lock l,
  dba_objects o,
  v$session s
where
  o.object_id = l.id1 and
  l.type = 'JI' and
  l.lmode = 6 and
  s.sid = l.sid and
  o.object_type = 'TABLE';

Overview

Materialized views are a data warehousing/decision support system tool that can increase by many orders of magnitude the speed of queries that access a large number of records. In basic terms, they allow a user to query potentially terabytes of detail data in seconds. They accomplish this by transparently using pre-computed summarizations and joins of data. These pre-computed summaries would typically be very small compared to the original source data.

In this article you'll find out what materialized views are, what they can do and, most importantly, how they work - a lot of the 'magic' goes on behind the scenes. Having gone to the trouble of creating it, you'll find out how to make sure that your materialized view is used by all queries to which the view is capable of providing the answer. Sometimes, you know Oracle could use the materialized view, but it is not able to do so simply because it lacks important information.

Setup of Materialized Views

There is one mandatory INIT.ORA parameter necessary for materialized views to function: the COMPATIBLE parameter. The value of COMPATIBLE should be set to 8.1.0 or above in order for query rewrites to be functional. If this value is not set appropriately, query rewrite will not be invoked.

There are two other relevant parameters that may be set at either the system-level via the INIT.ORA file, or the session-level via the ALTER SESSION command.

* QUERY_REWRITE_ENABLED

Unless the value of this parameter is set to TRUE, query rewrites will not take place. The default value is FALSE.

* QUERY_REWRITE_INTEGRITY

This parameter controls how Oracle rewrites queries and may be set to one of three values:

ENFORCED - Queries will be rewritten using only constraints and rules that are enforced and guaranteed by Oracle. There are mechanisms by which we can tell Oracle about other inferred relationships, and this would allow for more queries to be rewritten, but since Oracle does not enforce those relationships, it would not make use of these facts at this level.

TRUSTED - Queries will be rewritten using the constraints that are enforced by Oracle, as well as any relationships existing in the data that we have told Oracle about but that are not enforced by the database.

STALE_TOLERATED - Queries will be rewritten to use materialized views even if Oracle knows the data contained in the materialized view is 'stale' (out of sync with the details). This might be useful in an environment where the summary tables are refreshed on a recurring basis, not on commit, and a slightly out-of-sync answer is acceptable.

The needed privileges are as follows:

* CREATE SESSION
* CREATE TABLE
* CREATE MATERIALIZED VIEW
* QUERY REWRITE

Finally, you must be using the Cost Based Optimizer (CBO) in order to make use of query rewrite. If you do not use the CBO, query rewrite will not take place.

Example

The example will demonstrate what a materialized view entails. The concept is that of reducing the execution time of a long running query transparently, by summarizing data in the database. A query against a large table will be transparently rewritten into a query against a very small table, without any loss of accuracy in the answer. For the example we create our own big table based on the system view ALL_OBJECTS.

Prepare the large table BIGTAB:

sqlplus scott/tiger
set echo on
set termout off

drop table bigtab;

create table bigtab
nologging
as
select * from all_objects
union all
select * from all_objects
union all
select * from all_objects
/

insert /*+ APPEND */ into bigtab
select * from bigtab;
commit;
insert /*+ APPEND */ into bigtab
select * from bigtab;
commit;
insert /*+ APPEND */ into bigtab
select * from bigtab;
commit;

analyze table bigtab compute statistics;
select count(*) from bigtab;

COUNT(*)
----------
708456

Run a query against the BIGTAB table

Initially this query will require a full scan of the large table.

set autotrace on
set timing on
select owner, count(*) from bigtab group by owner;

OWNER COUNT(*)
------------------------------ ----------
CTXSYS 6264
ELAN 1272
HR 816
MDSYS 5640
ODM 9768
ODM_MTR 288
OE 2064
OLAPSYS 10632
ORDPLUGINS 696
ORDSYS 23232
OUTLN 168
PM 216
PUBLIC 278184
QS 984
QS_ADM 168
QS_CBADM 576
QS_CS 552
QS_ES 936
QS_OS 936
QS_WS 936
SCOTT 264
SH 4176
SYS 324048
SYSTEM 15096
TEST 4536
WKSYS 6696
WMSYS 3072
XDB 6240

28 rows selected.

Elapsed: 00:00:07.06

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
(Cost=2719 Card=28 Bytes=140)
1 0 SORT (GROUP BY) (Cost=2719 Card=28 Bytes=140)
2 1 TABLE ACCESS (FULL) OF 'BIGTAB'
(Cost=1226 Card=708456 Bytes=3542280)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
19815 consistent gets
18443 physical reads
0 redo size
973 bytes sent via SQL*Net to client
510 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
28 rows processed

In order to get the aggregate count, we must count 700,000+ records spread over more than 19,800 blocks. If you need this summary many times a day, you can avoid counting the details each and every time by creating a materialized view of this summary data.

Create the Materialized View

sqlplus scott/tiger

grant query rewrite to scott;
alter session set query_rewrite_enabled=true;
alter session set query_rewrite_integrity=enforced;

create materialized view mv_bigtab
build immediate
refresh on commit
enable query rewrite
as
select owner, count(*)
from bigtab
group by owner
/

analyze table mv_bigtab compute statistics;

Basically, what we've done is pre-calculate the object count, and define this summary information as a materialized view. We have asked that the view be immediately built and populated with data. You'll notice that we have also specified REFRESH ON COMMIT and ENABLE QUERY REWRITE. Also notice that we may have created a materialized view, but when we ANALYZE, we are analyzing a table. A materialized view creates a real table, and this table may be indexed, analyzed, and so on.

Now let's see the materialized view in action by issuing the same query again

set timing on
set autotrace traceonly
select owner, count(*)
from bigtab
group by owner;
set autotrace off
set timing off

28 rows selected.

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
(Cost=2 Card=28 Bytes=252)
1 0 TABLE ACCESS (FULL) OF 'MV_BIGTAB'
(Cost=2 Card=28 Bytes=252)

Statistics
----------------------------------------------------------
11 recursive calls
0 db block gets
17 consistent gets
0 physical reads
0 redo size
973 bytes sent via SQL*Net to client
510 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
28 rows processed

No physical I/O this time around, as the data was found in the cache. Our buffer cache will also be much more efficient now, as it has far less to cache: we could not even begin to cache the previous query's working set, but now we can. Notice how our query plan shows we are now doing a full scan of the MV_BIGTAB table, even though we queried the detail table BIGTAB. When the SELECT OWNER, ... query is issued, the database automatically redirects it to the materialized view.

Now, add a new row to the BIGTAB table and commit the change

insert into bigtab
(owner, object_name, object_type, object_id)
values ('Martin', 'Zahn', 'Akadia', 1111111);

commit;

set timing on
set autotrace traceonly
select owner, count(*)
from bigtab
where owner = 'Martin'
group by owner;
set autotrace off
set timing off

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
(Cost=2 Card=1 Bytes=9)
1 0 TABLE ACCESS (FULL) OF 'MV_BIGTAB'
(Cost=2 Card=1 Bytes=9)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
439 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

The analysis shows that we scanned the materialized view MV_BIGTAB and found the new row. By specifying REFRESH ON COMMIT in our original definition of the view, we asked Oracle to keep the view synchronized with the details, so as the detail table is modified the summary is maintained as well.

Uses of Materialized Views

This is relatively straightforward and can be answered in a single word - performance. By calculating the answers to the really hard questions up front (and once only), we greatly reduce the load on the machine. We will experience:

* Fewer physical reads - There is less data to scan through.

* Fewer writes - We will not be sorting/aggregating as frequently.

* Decreased CPU consumption - We will not be calculating aggregates and functions on the data, as we will have already done that.

* Markedly faster response times - Our queries will return incredibly quickly when a summary is used, as opposed to the details. This will be a function of the amount of work we can avoid by using the materialized view, but many orders of magnitude is not out of the question.

Materialized views will increase your need for one resource - more permanently allocated disk. We need extra storage space to accommodate the materialized views, of course, but for the price of a little extra disk space, we can reap a lot of benefit.

Materialized views work best in a read-only, or read-intensive environment. They are not designed for use in a high-end OLTP environment. They will add overhead to modifications performed on the base tables in order to capture the changes.

There are concurrency issues with regard to using the REFRESH ON COMMIT option. Consider the summary example from before: any row that is inserted into or deleted from the base table has to update one of the 28 rows in the summary table in order to maintain the count in real time. This does not preclude the use of materialized views in an OLTP environment. For example, if you use full refreshes on a recurring basis (during off-peak time) there will be no overhead added to the modifications, and there will be no concurrency issues. This would allow you to report on yesterday's activities, for example, and not query the live OLTP data for reports.
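As a sketch of that recurring off-peak refresh (the 2am schedule and the use of DBMS_JOB are just one way of doing it, shown here for illustration), the refresh could be scheduled like this:

DECLARE
  l_job NUMBER;
BEGIN
  DBMS_JOB.SUBMIT(
    job       => l_job,
    what      => 'DBMS_MVIEW.REFRESH(''MV_BIGTAB'', ''C'');',
    next_date => TRUNC(SYSDATE) + 1 + 2/24,        -- 2am tomorrow
    interval  => 'TRUNC(SYSDATE) + 1 + 2/24');     -- then every day at 2am
  COMMIT;
END;
/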

How Materialized Views Work

Materialized views may appear to be hard to work with at first. There will be cases where you create a materialized view and you know that it holds the answer to a certain question, but for some reason Oracle does not use it. The more metadata you provide - the more pieces of information about the underlying data you can give to Oracle - the better.

So, now that we can create a materialized view and show that it works, what are the steps Oracle will undertake to rewrite our queries? Normally, when QUERY_REWRITE_ENABLED is set to FALSE, Oracle will take your SQL as is, parse it, and optimize it. With query rewrites enabled, Oracle will insert an extra step into this process. After parsing, Oracle will attempt to rewrite the query to access some materialized view instead of the actual table that it references. If it can perform a query rewrite, the rewritten query (or queries) is parsed and then optimized along with the original query. The query plan with the lowest cost from this set is chosen for execution. If it cannot rewrite the query, the original parsed query is optimized and executed as normal.
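When a rewrite you expected does not happen, one hedged way to investigate (available from Oracle 9i, and assuming the REWRITE_TABLE has been created with the utlxrw.sql script shipped under $ORACLE_HOME/rdbms/admin) is DBMS_MVIEW.EXPLAIN_REWRITE, which records why a given query was or was not rewritten to use a given materialized view:

@?/rdbms/admin/utlxrw.sql          -- creates the REWRITE_TABLE

begin
  dbms_mview.explain_rewrite(
    query => 'select owner, count(*) from bigtab group by owner',
    mv    => 'MV_BIGTAB');
end;
/

select message from rewrite_table;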

Conclusion

Summary table management, another term for the materialized view, has actually been around for some time in tools such as Oracle Discoverer, where the rewriting was done by the tool itself. If you ran the same query in SQL*Plus, or from your Java JDBC client, then the query rewrite would not (could not) take place. Furthermore, the synchronization between the details (original source data) and the summaries could not be performed or validated for you automatically, since the tool ran outside the database.

Furthermore, since version 7.0, the Oracle database itself has actually implemented a feature with many of the characteristics of summary tables - the Snapshot. This feature was initially designed to support replication, but many would use it to 'pre-answer' large queries. So, we would have snapshots that did not use a database link to replicate data from database to database, but rather just summarized or pre-joined frequently accessed data. This was good, but without any query rewrite capability it was still problematic. The application had to know to use the summary tables in the first place, and this made the application more complex to code and maintain. If we added a new summary then we would have to find the code that could make use of it, and rewrite that code.

In Oracle 8.1.5, Oracle took the query rewriting capabilities from tools like Discoverer and the automated refresh and scheduling mechanisms from snapshots (which make the summary tables 'self maintaining'), and combined these with the optimizer's ability to find the best plan out of many alternatives. This produced the materialized view.

With all of this functionality centralized in the database, now every application can take advantage of the automated query rewrite facility, regardless of whether access to the database is via SQL*PLUS, JDBC, ODBC, Pro*C, OCI, or some third party tool. Every Oracle 8i enterprise database can have summary table management. Also, since everything takes place inside the database, the details can be easily synchronized with the summaries, or at least the database knows when they aren't synchronized, and might bypass stale summaries.



Tuesday, June 2, 2009

Checking for Apply Errors

To check for apply errors, run the following query:COLUMN APPLY_NAME HEADING 'ApplyProcessName' FORMAT A10
COLUMN SOURCE_DATABASE HEADING 'SourceDatabase' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'LocalTransactionID' FORMAT A11
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A20
COLUMN MESSAGE_COUNT HEADING 'Messages inErrorTransaction' FORMAT 99999999
SELECT APPLY_NAME,
SOURCE_DATABASE,
LOCAL_TRANSACTION_ID,
ERROR_NUMBER,
ERROR_MESSAGE,
MESSAGE_COUNT
FROM DBA_APPLY_ERROR;
If there are any apply errors, then your output looks similar to the following:Apply Local Messages in
Process Source Transaction Error
Name Database ID Error Number Error Message Transaction
---------- ---------- ----------- ------------ -------------------- -----------
APPLY_FROM MULT3.NET 1.62.948 1403 ORA-01403: no data f 1
_MULT3 ound
APPLY_FROM MULT2.NET 1.54.948 1403 ORA-01403: no data f 1
_MULT2 ound
If there are apply errors, then you can either try to reexecute the transactions that encountered the errors, or you can delete the transactions. If you want to reexecute a transaction that encountered an error, then first correct the condition that caused the transaction to raise an error.
If you want to delete a transaction that encountered an error, then you might need to resynchronize data manually if you are sharing data between multiple databases. Remember to set an appropriate session tag, if necessary, when you resynchronize data manually.
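As a hedged illustration of those two choices, using the local transaction ID 1.62.948 from the sample output above, the transaction can be retried or removed with DBMS_APPLY_ADM:

-- retry the transaction after fixing the underlying problem
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '1.62.948');

-- or discard it (remember to resynchronize the data manually if needed)
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '1.62.948');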

Troubleshooting a Streams Environment

Troubleshooting Capture Problems
If a capture process is not capturing changes as expected, or if you are having other problems with a capture process, then use the following checklist to identify and resolve capture problems:
Is the Capture Process Enabled?
Is the Capture Process Current?
Are Required Redo Log Files Missing?
Is a Downstream Capture Process Waiting for Redo Data?
Are You Trying to Configure Downstream Capture without DBMS_CAPTURE_ADM?
Are More Actions Required for Downstream Capture without a Database Link?
See Also:
Chapter 2, "Streams Capture Process"
Chapter 11, "Managing a Capture Process"
Chapter 20, "Monitoring Streams Capture Processes"

Is the Capture Process Enabled?
A capture process captures changes only when it is enabled.
You can check whether a capture process is enabled, disabled, or aborted by querying the DBA_CAPTURE data dictionary view. For example, to check whether a capture process named capture is enabled, run the following query:SELECT STATUS FROM DBA_CAPTURE WHERE CAPTURE_NAME = 'CAPTURE';
If the capture process is disabled, then your output looks similar to the following:
STATUS
--------
DISABLED
If the capture process is disabled, then try restarting it. If the capture process is aborted, then you might need to correct an error before you can restart it successfully.
To determine why the capture process aborted, query the DBA_CAPTURE data dictionary view or check the trace file for the capture process. The following query shows when the capture process aborted and the error that caused it to abort:
COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40
SELECT CAPTURE_NAME, STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE
FROM DBA_CAPTURE WHERE STATUS='ABORTED';
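Once the cause of the abort has been corrected, the capture process can simply be restarted. A minimal sketch using DBMS_CAPTURE_ADM, assuming the capture process is named capture as in the earlier example:
#restart an aborted or disabled capture process
execute DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'capture');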

Are Required Redo Log Files Missing?
When a capture process is started or restarted, it might need to scan redo log files that were generated before the log file that contains the start SCN. You can query the DBA_CAPTURE data dictionary view to determine the first SCN and start SCN for a capture process. Removing required redo log files before they are scanned by a capture process causes the capture process to abort and results in the following error in a capture process trace file:
ORA-01291: missing logfile
If you see this error, then try restoring any missing redo log file and restarting the capture process. You can check the V$LOGMNR_LOGS dynamic performance view to determine the missing SCN range, and add the relevant redo log files. A capture process needs the redo log file that includes the required checkpoint SCN and all subsequent redo log files. You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process.
If you are using the flash recovery area feature of Recovery Manager (RMAN) on a source database in a Streams environment, then RMAN might delete archived redo log files that are required by a capture process. RMAN might delete these files when the disk space used by the recovery-related files is nearing the specified disk quota for the flash recovery area. To prevent this problem in the future, complete one or more of the following actions:
Increase the disk quota for the flash recovery area. Increasing the disk quota makes it less likely that RMAN will delete a required archived redo log file, but it will not always prevent the problem.
Configure the source database to store archived redo log files in a location other than the flash recovery area. A local capture process will be able to use the log files in the other location if the required log files are missing in the flash recovery area. In this case, a database administrator must manage the log files manually in the other location.
See Also:
"ARCHIVELOG Mode and a Capture Process"
"First SCN and Start SCN"
"Displaying the Registered Redo Log Files for Each Capture Process"
Oracle Database Backup and Recovery Basics and Oracle Database Backup and Recovery Advanced User's Guide for more information about the flash recovery area feature

Is a Downstream Capture Process Waiting for Redo Data?
If a downstream capture process is not capturing changes, then it might be waiting for redo data to scan. Redo log files can be registered implicitly or explicitly for a downstream capture process. Redo log files registered implicitly typically are registered in one of the following ways:
For a real-time downstream capture process, redo transport services use the log writer process (LGWR) to transfer the redo data from the source database to the standby redo log at the downstream database. Next, the archiver at the downstream database registers the redo log files with the downstream capture process when it archives them.
For an archived-log downstream capture process, redo transport services transfer the archived redo log files from the source database to the downstream database and register the archived redo log files with the downstream capture process.
If redo log files are registered explicitly for a downstream capture process, then you must manually transfer the redo log files to the downstream database and register them with the downstream capture process.
Regardless of whether the redo log files are registered implicitly or explicitly, the downstream capture process can capture changes made to the source database only if the appropriate redo log files are registered with the downstream capture process. You can query the V$STREAMS_CAPTURE dynamic performance view to determine whether a downstream capture process is waiting for a redo log file. For example, run the following query for a downstream capture process named strm05_capture:
SELECT STATE FROM V$STREAMS_CAPTURE WHERE CAPTURE_NAME='STRM05_CAPTURE';
If the capture process state is either WAITING FOR DICTIONARY REDO or WAITING FOR REDO, then verify that the redo log files have been registered with the downstream capture process by querying the DBA_REGISTERED_ARCHIVED_LOG and DBA_CAPTURE data dictionary views. For example, the following query lists the redo log files currently registered with the strm05_capture downstream capture process:
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A15
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 9999999
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A30
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10
SELECT r.SOURCE_DATABASE,
r.SEQUENCE#,
r.NAME,
r.DICTIONARY_BEGIN,
r.DICTIONARY_END
FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE c.CAPTURE_NAME = 'STRM05_CAPTURE' AND
r.CONSUMER_NAME = c.CAPTURE_NAME;
If this query does not return any rows, then no redo log files are registered with the capture process currently. If you configured redo transport services to transfer redo data from the source database to the downstream database for this capture process, then make sure the redo transport services are configured correctly. If the redo transport services are configured correctly, then run the ALTER SYSTEM ARCHIVE LOG CURRENT statement at the source database to archive a log file. If you did not configure redo transport services to transfer redo data, then make sure the method you are using for log file transfer and registration is working properly. You can register log files explicitly using an ALTER DATABASE REGISTER LOGICAL LOGFILE statement.
If the downstream capture process is waiting for redo, then it also is possible that there is a problem with the network connection between the source database and the downstream database. There also might be a problem with the log file transfer method. Check your network connection and log file transfer method to ensure that they are working properly.
If you configured a real-time downstream capture process, and no redo log files are registered with the capture process, then try switching the log file at the source database. You might need to switch the log file more than once if there is little or no activity at the source database.
Also, if you plan to use a downstream capture process to capture changes to historical data, then consider the following additional issues:
Both the source database that generates the redo log files and the database that runs a downstream capture process must be Oracle Database 10g databases.
The start of a data dictionary build must be present in the oldest redo log file added, and the capture process must be configured with a first SCN that matches the start of the data dictionary build.
The database objects for which the capture process will capture changes must be prepared for instantiation at the source database, not at the downstream database. In addition, you cannot specify a time in the past when you prepare objects for instantiation. Objects are always prepared for instantiation at the current database SCN, and only changes to a database object that occurred after the object was prepared for instantiation can be captured by a capture process.
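As a rough illustration of the two statements mentioned above, the log switch is run at the source database and the explicit registration at the downstream database. The file name below is a placeholder, and strm05_capture is the example capture process name from this section; adjust both for your environment:
#at the source database: force a log switch so a new archived redo log is produced
ALTER SYSTEM ARCHIVE LOG CURRENT;
#at the downstream database: explicitly register a transferred archived redo log file
ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/arch/1_45_1234567890.arc' FOR 'strm05_capture';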
Troubleshooting Propagation Problems
If a propagation is not propagating changes as expected, then use the following checklist to identify and resolve propagation problems:
Does the Propagation Use the Correct Source and Destination Queue?
Is the Propagation Enabled?
Are There Enough Job Queue Processes?
Is Security Configured Properly for the ANYDATA Queue?
See Also:
Chapter 3, "Streams Staging and Propagation"
Chapter 12, "Managing Staging and Propagation"
"Monitoring Streams Propagations and Propagation Jobs"

Does the Propagation Use the Correct Source and Destination Queue?
If messages are not appearing in the destination queue for a propagation as expected, then the propagation might not be configured to propagate messages from the correct source queue to the correct destination queue.
For example, to check the source queue and destination queue for a propagation named dbs1_to_dbs2, run the following query:
COLUMN SOURCE_QUEUE HEADING 'Source Queue' FORMAT A35
COLUMN DESTINATION_QUEUE HEADING 'Destination Queue' FORMAT A35
SELECT
p.SOURCE_QUEUE_OWNER||'.'||
p.SOURCE_QUEUE_NAME||'@'||
g.GLOBAL_NAME SOURCE_QUEUE,
p.DESTINATION_QUEUE_OWNER||'.'||
p.DESTINATION_QUEUE_NAME||'@'||
p.DESTINATION_DBLINK DESTINATION_QUEUE
FROM DBA_PROPAGATION p, GLOBAL_NAME g
WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2';
Your output looks similar to the following:
Source Queue Destination Queue
----------------------------------- -----------------------------------
STRMADMIN.STREAMS_QUEUE@DBS1.NET STRMADMIN.STREAMS_QUEUE@DBS2.NET
If the propagation is not using the correct queues, then create a new propagation. You might need to remove the existing propagation if it is not appropriate for your environment.
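For example, here is a sketch of dropping the wrongly configured propagation and recreating it with DBMS_PROPAGATION_ADM. The queue names and database link below are illustrative only and no rule set is attached, so adjust everything to your environment:
#drop the propagation that points at the wrong queues
execute DBMS_PROPAGATION_ADM.DROP_PROPAGATION(propagation_name => 'dbs1_to_dbs2', drop_unused_rule_sets => TRUE);
#recreate it between the correct source and destination queues
begin
DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
propagation_name => 'dbs1_to_dbs2',
source_queue => 'strmadmin.streams_queue',
destination_queue => 'strmadmin.streams_queue',
destination_dblink => 'dbs2.net');
end;
/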
Is the Propagation Enabled?
For a propagation job to propagate messages, the propagation must be enabled. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled.
You can find the following information about a propagation:
The database link used to propagate messages from the source queue to the destination queue
Whether the propagation is ENABLED, DISABLED, or ABORTED
The date of the last error, if there are any propagation errors
If there are any propagation errors, then the error number of the last error
The error message of the last error, if there are any propagation errors
For example, to check whether a propagation named streams_propagation is enabled, run the following query:
COLUMN DESTINATION_DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN STATUS HEADING 'Status' FORMAT A8
COLUMN ERROR_DATE HEADING 'Error|Date'
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A50

SELECT DESTINATION_DBLINK,
STATUS,
ERROR_DATE,
ERROR_MESSAGE
FROM DBA_PROPAGATION
WHERE PROPAGATION_NAME = 'STREAMS_PROPAGATION';
If the propagation is disabled currently, then your output looks similar to the following:
Database Error
Link Status Date Error Message
---------- -------- --------- --------------------------------------------------
INST2.NET DISABLED 27-APR-05 ORA-25307: Enqueue rate too high, flow control
enabled
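If the propagation has been disabled rather than aborted, it can usually be re-enabled once the underlying condition clears. A minimal sketch, assuming Oracle 10g Release 2 and the propagation name from the example above:
#re-enable a disabled propagation
execute DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'streams_propagation');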
Checking for Apply Errors
To check for apply errors, run the following query:
COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A10
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction|ID' FORMAT A11
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A20
COLUMN MESSAGE_COUNT HEADING 'Messages in|Error|Transaction' FORMAT 99999999
SELECT APPLY_NAME,
SOURCE_DATABASE,
LOCAL_TRANSACTION_ID,
ERROR_NUMBER,
ERROR_MESSAGE,
MESSAGE_COUNT
FROM DBA_APPLY_ERROR;
If there are any apply errors, then your output looks similar to the following:
Apply Local Messages in
Process Source Transaction Error
Name Database ID Error Number Error Message Transaction
---------- ---------- ----------- ------------ -------------------- -----------
APPLY_FROM MULT3.NET 1.62.948 1403 ORA-01403: no data f 1
_MULT3 ound
APPLY_FROM MULT2.NET 1.54.948 1403 ORA-01403: no data f 1
_MULT2 ound

Views, monitoring and troubleshooting streams

Views, monitoring and troubleshooting
select capture_name, queue_name, ERROR_NUMBER, ERROR_MESSAGE
from dba_capture where status != 'ENABLED'

select * FROM DBA_CAPTURE;
select * from dba_propagation;

SELECT r.CONSUMER_NAME,
r.SOURCE_DATABASE,
r.SEQUENCE#,
r.NAME,
r.DICTIONARY_BEGIN,
r.DICTIONARY_END
FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE r.CONSUMER_NAME = c.CAPTURE_NAME;

#May safely remove these logs
SELECT * FROM DBA_LOGMNR_PURGED_LOG;

SELECT * FROM DBA_CAPTURE_PARAMETERS;
SELECT * FROM DBA_CAPTURE_EXTRA_ATTRIBUTES
Capture troubleshooting
--View capture status
SELECT c.capture_name,
       SUBSTR(s.program, INSTR(s.program, '(') + 1, 4) process_name,
       c.sid, c.serial#, c.state,
       c.total_messages_captured, c.total_messages_enqueued,
       c.enqueue_time last_enqueue, sysdate
FROM v$streams_capture c, v$session s
WHERE c.sid = s.sid AND c.serial# = s.serial#;

Capture rules
--Do not rely on DBA_STREAMS_TABLE_RULES: if you uncleanly dropped the
--STRMADMIN user or something similar, old unused rules will still be there
select DBA_RULES.*
from dba_capture, DBA_RULE_SET_RULES, DBA_RULES
where DBA_RULE_SET_RULES.rule_set_name (+)= dba_capture.rule_set_name
and DBA_RULES.rule_name (+)= DBA_RULE_SET_RULES.rule_name;
#Is there a capture history?
select * from DBA_HIST_STREAMS_CAPTURE;
Propagation
select * from dba_propagation where propagation_name = 'PROPAGATION_TEST';

Propagation rules
select DBA_RULES.*
from dba_propagation, DBA_RULE_SET_RULES, DBA_RULES
where DBA_RULE_SET_RULES.rule_set_name (+)= dba_propagation.rule_set_name
and DBA_RULES.rule_name (+)= DBA_RULE_SET_RULES.rule_name;

Troubleshooting NO DATA FOUND in apply for update/delete

ORA-01403: no data found
ORA-01403: no data found
ORA-06512: at "SYS.LCR$_ROW_RECORD", line 419
ORA-06512: at "MTS_APPLY.TABLE_HANDLERS", line 32
ORA-06512: at line 1
Remember that primary key columns on the destination are always evaluated.
The following query makes this job much easier:
select tc.owner, tc.table_name, tc.column_name,
a.COMPARE_OLD_ON_DELETE, a.COMPARE_OLD_ON_UPDATE, decode(k.column_name, null, 'N', 'Y') manual_key_column
from dba_tab_columns tc, DBA_APPLY_TABLE_COLUMNS a, dba_apply_key_columns k
where
a.OBJECT_OWNER (+)= tc.owner and a.OBJECT_NAME (+)= tc.TABLE_NAME and a.COLUMN_NAME (+)= tc.COLUMN_NAME
and k.OBJECT_OWNER (+)= tc.owner and k.OBJECT_NAME (+)= tc.TABLE_NAME and k.COLUMN_NAME (+)= tc.COLUMN_NAME
and tc.owner='MTS_OWNER'
and tc.table_name='AGENTS';
#set pk manually, use null on column_list to reset
execute DBMS_APPLY_ADM.SET_KEY_COLUMNS(object_name => 'MTS_OWNER.AGENTS', column_list => 'ID,NAME');
select * from dba_apply_key_columns
#set old values to be compared, default is true for all columns
execute DBMS_APPLY_ADM.COMPARE_OLD_VALUES(object_name => 'MTS_OWNER.AGENTS', column_list => '*', operation => '*', compare => false);
select * from DBA_APPLY_TABLE_COLUMNS order by 1, 2, 3

Untested
Removing Streams: see Metalink note 276648.1
Multi-version data dictionary: refer to Metalink note 212044.1

Remove/Uninstall Streams
#10g
begin
DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
end;
/
SELECT * FROM DBA_QUEUE_TABLES order by owner, queue_table;
SELECT * FROM DBA_QUEUES order by owner;

SELECT * FROM DBA_APPLY_DML_HANDLERS;
select * from DBA_QUEUE_SCHEDULES;
select * from DBA_STREAMS_COLUMNS;
select * from DBA_STREAMS_ADMINISTRATOR;
select * from DBA_STREAMS_RULES;
select * from DBA_STREAMS_TABLE_RULES;
select * from SYS.DBA_STREAMS_UNSUPPORTED;