Sunday, May 31, 2009

How to gather statistics on data dictionary objects in Oracle 10g


Before Oracle Database 10g, Oracle explicitly recommended not to gather statistics on data dictionary objects.
As of Oracle Database 10g, Oracle explicitly does recommend gathering statistics on data dictionary objects. As you might know, there is an automatically created SCHEDULER JOB in every 10g database which runs every night and checks for objects which either have no statistics at all or whose statistics have become STALE (which means that at least 10% of the values have changed). This job is called GATHER_STATS_JOB and belongs to the autotask job class. It uses a program which in turn calls a procedure from the built-in package DBMS_STATS, which does the statistics collection. This feature only works if the initialization parameter STATISTICS_LEVEL is set to at least TYPICAL (which is the DEFAULT in 10g), and it utilizes the TABLE MONITORING feature. TABLE MONITORING is enabled for all tables in 10g by DEFAULT.

One question which pops up in my seminars frequently is: "Does this job also collect statistics on the data dictionary objects as well?" The answer is not 42 but "YES, it does!", and here is the proof for this:
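(As a quick sanity check that this job exists and is enabled on your system, you can query DBA_SCHEDULER_JOBS; this assumes you have privileges on the DBA views:)

SQL> select job_name, enabled, state
     from dba_scheduler_jobs
     where job_name = 'GATHER_STATS_JOB';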
– first let us check whether dbms_stats.gather_database_stats collects statistics for the data dictionary:
SQL> select count(*) from tab$;
  COUNT(*)
----------
      1227

SQL> create table t2 (col1 number);
Table created.

SQL> select count(*) from tab$;
  COUNT(*)
----------
      1228

SQL> select NUM_ROWS from dba_tables where table_name='TAB$';
  NUM_ROWS
----------
      1213

SQL> exec dbms_stats.gather_database_stats;
PL/SQL procedure successfully completed.

SQL> select NUM_ROWS from dba_tables where table_name='TAB$';

  NUM_ROWS
----------
      1228

– IT DOES!

– and now let's see if the job does also:

SQL> create table t3 (col1 number);
Table created.

SQL> create table t4 (col1 number);
Table created.

SQL> select NUM_ROWS from dba_tables where table_name='TAB$';
  NUM_ROWS
----------
      1228

– gather_stats_job run manually from DATABASE CONTROL !!!

SQL> select NUM_ROWS from dba_tables where table_name='TAB$';
  NUM_ROWS
----------
      1230

– and IT ALSO DOES! – even though not even 0.1% of the values had changed, it did! So when should we gather statistics for the data dictionary manually? Oracle recommends collecting them when a significant number of changes has been applied to the data dictionary, such as dropping significant numbers of partitions and creating new ones, or dropping tables and indexes and creating new ones, and so on. But do this only if it is a significant number of changes and you cannot wait for the next automatically scheduled job run.
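For such a manual run, DBMS_STATS offers a dedicated procedure in 10g; a minimal sketch (run as SYS or a user with the appropriate privileges):

SQL> exec dbms_stats.gather_dictionary_stats;

Alternatively, gathering schema statistics for SYS covers the dictionary tables as well:

SQL> exec dbms_stats.gather_schema_stats('SYS');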

Friday, May 29, 2009

Troubleshooting Apply problems

1) If the apply process is not applying changes as expected:
a. Please make sure that the apply process is enabled.
b. Please make sure that the apply queue is receiving the messages to be applied.
c. Please check the apply errors in the error queue.
2) Common Apply Issues:
a) ORA-1403 no data found: This typically occurs when the apply process tries to update a row using the old values in the LCR, and the LCR's old values do not match the current values in the destination table. This can be avoided either by setting the apply parameter "_CMPKEY_ONLY" to 'Y' and executing the apply errors, or alternatively by manually updating the destination table data to match the old values of the LCR. Please see Metalink note 265201.1.
SQL> select * from dba_apply_error;

APPLY_NAME                     QUEUE_NAME
------------------------------ ------------------------------
QUEUE_OWNER                    LOCAL_TRANSACTION_ID
------------------------------ ----------------------
SOURCE_DATABASE
--------------------------------------------------------------------------------
SOURCE_TRANSACTION_ID  SOURCE_COMMIT_SCN MESSAGE_NUMBER ERROR_NUMBER
---------------------- ----------------- -------------- ------------
ERROR_MESSAGE
--------------------------------------------------------------------------------
RECIPIENT_ID RECIPIENT_NAME                 MESSAGE_COUNT ERROR_CREATION_
------------ ------------------------------ ------------- ---------------
COR_APPL01                     CNRTD_Q
STRMADMIN                      12.46.4005
CNRT
22.9.5528                     7.0903E+10              1         1403
ORA-01403: no data found
          88 STRMADMIN                                  1 20-APR-07
SQL> set serveroutput on
SQL> exec print_transaction('12.46.4005');

----- Local Transaction ID: 12.46.4005
----- Source Database: CNRT
----Error in Message: 1
----Error Number: 1403
----Message Text: ORA-01403: no data found
--message: 1
type name: SYS.LCR$_ROW_RECORD
source database: CNRT
owner: STRMADMIN
object: HEART_BEAT
is tag null: Y
command_type: UPDATE
SCN: 70903367413
COMMIT SCN:
old(1): ID
1
old(2): MSG_TIME
20-APR-07 07.34.56.000000 PM
new(1): MSG_TIME
20-APR-07 07.35.40.000000 PM
PL/SQL procedure successfully completed.
SQL> select * from heart_beat;

        ID MSG_TIME
---------- -------------------------------------
         1 19-APR-07 07.46.08.000000 PM
SQL> exec dbms_apply_adm.set_parameter('COR_APPL01','_CMPKEY_ONLY','Y');

PL/SQL procedure successfully completed.

SQL> exec dbms_apply_adm.execute_all_errors;

PL/SQL procedure successfully completed.

SQL> select * from dba_apply_error;

no rows selected

SQL> exec dbms_apply_adm.set_parameter('COR_APPL01','_CMPKEY_ONLY','N');

PL/SQL procedure successfully completed.
b) ORA-26687: no instantiation SCN provided for the object in the source database. Please make sure that the object on the source database is prepared for instantiation:
select table_name, scn,
       SUPPLEMENTAL_LOG_DATA_PK PK, SUPPLEMENTAL_LOG_DATA_UI UK,
       SUPPLEMENTAL_LOG_DATA_FK FK, SUPPLEMENTAL_LOG_DATA_ALL as "ALL"
from DBA_CAPTURE_PREPARED_TABLES
where table_name = 'TABNAME';
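If the object has not been prepared, the instantiation SCN can also be set manually at the destination. A minimal sketch, assuming a database link SOURCE_DB pointing at the source (the link name is a placeholder; the object and source names are taken from the example above):

DECLARE
  iscn NUMBER;
BEGIN
  -- get the current SCN at the source over the (hypothetical) database link
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@SOURCE_DB;
  -- record it as the instantiation SCN for the replicated table
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'STRMADMIN.HEART_BEAT',
    source_database_name => 'CNRT',
    instantiation_scn    => iscn);
END;
/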

Thursday, May 28, 2009

Streams Replication Heartbeat

A heartbeat table in a Streams replication is very useful to track the status of the replication. Instead of going through all the monitoring tables, it can give us the status at a glance.
We can use the following example to implement a Streams heartbeat. Streams replication should already be configured before implementing this code.
I've used SCOTT schema to create this table.
------ source site ------
create table heartbeat_monitor
( source      varchar2(10),
  last_update timestamp default systimestamp
)
/
1. The SOURCE column will store the global_name of the site.
2. The LAST_UPDATE column will store the last activity time.
To replicate the data, we'll need a JOB to insert the required details.
scott@sourcedb> variable jobno number;
scott@sourcedb> begin
  dbms_job.submit(:jobno,
    'insert into scott.heartbeat_monitor (source) select global_name from global_name;',
    sysdate,
    'sysdate+60/(60*60*24)');
  commit;
end;
/
Secondly, I've created another job to delete entries older than 2 days from the heartbeat_monitor table, so that the heartbeat_monitor table does not itself become a problem.
scott@sourcedb> variable jobno number;
scott@sourcedb> begin
  dbms_job.submit(:jobno,
    'delete from scott.heartbeat_monitor where last_update < sysdate -2 and source = (select global_name from global_name);',
    sysdate,
    'sysdate+60/(60)');
  commit;
end;
/
If we have two-way replication configured, then we'll also need similar jobs on the target site.
------ target site ------
scott@targetdb> variable jobno number;
scott@targetdb> begin
  dbms_job.submit(:jobno,
    'insert into scott.heartbeat_monitor (source) select global_name from global_name;',
    sysdate,
    'sysdate+60/(60*60*24)');
  commit;
end;
/
scott@targetdb> variable jobno number;
scott@targetdb> begin
  dbms_job.submit(:jobno,
    'delete from scott.heartbeat_monitor where last_update < sysdate -2 and source = (select global_name from global_name);',
    sysdate,
    'sysdate+60/(60)');
  commit;
end;
/
Now, to track the heartbeat, we can use the following script:
select source,
       to_char(systimestamp,'dd-mon-yyyy hh24:mi:ss.ff5') as "current time",
       to_char(max(last_update),'dd-mon-yyyy hh24:mi:ss.ff5') as "last heartbeat received"
from scott.heartbeat_monitor
where source <> (select global_name from global_name)
group by source
/
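If you would rather have the script flag a lagging source than just report times, a minimal variant (the 5-minute threshold here is an arbitrary assumption; tune it to your propagation latency):

select source,
       max(last_update) as last_heartbeat,
       case when max(last_update) < systimestamp - interval '5' minute
            then 'LAGGING' else 'OK' end as status
from scott.heartbeat_monitor
where source <> (select global_name from global_name)
group by source
/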

Wednesday, May 27, 2009

streams apply troubleshoot

Apply
#Sharing the apply
#For every apply process the apply order will be maintained,
#you'll likely use a single apply process for a schema

--here we create the apply process on destination
--ADD the capture rules using the DBMS_STREAMS_ADM.ADD_TABLE_RULES procedure, see above

begin
--make your reflections on this value, see DBA_APPLY_PARAMETERS for existing values
DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_TEST',parameter => 'disable_on_error', value => 'y');
end;

BEGIN
DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_test');
END;



Apply troubleshooting

Apply Parameters
select * from DBA_APPLY_PARAMETERS

Apply rules
select DBA_RULES.* from dba_apply, DBA_RULE_SET_RULES, DBA_RULES
where DBA_RULE_SET_RULES.rule_set_name (+)= dba_apply.rule_set_name
and DBA_RULES.rule_name (+)= DBA_RULE_SET_RULES.rule_name

Apply status
SELECT dba_apply.*, sysdate FROM DBA_apply where apply_name = 'APPLY_TEST'
select * from dba_rules where upper(rule_condition) like upper('%TESTONA%')

How to delete it?
execute DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'apply_test');
execute DBMS_APPLY_ADM.DROP_APPLY(apply_name => 'apply_test');

BEGIN
DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_test');
END;

Errors in apply? Is the apply process running? Remember to re-execute pending errors before restarting the apply
--First check sequence
select * from dba_apply
select * from dba_apply_error
execute DBMS_APPLY_ADM.EXECUTE_All_ERRORS;
select * from dba_apply_error
execute DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_MTS');
select * from DBA_APPLY_PROGRESS

#Check the handler exists
SELECT * FROM DBA_APPLY_DML_HANDLERS WHERE APPLY_DATABASE_LINK IS NULL ORDER BY OBJECT_OWNER, OBJECT_NAME;

#What is on alert log if the apply stop due to an exception?
Wed Aug 27 01:01:45 2008
Streams Apply Reader AS05 for APPLY_TEST2 with pid=38 OS id=12691 stopped

#You will also find a trace in bdump in the form al12_as02_10503.trc (_as##_####.trc)
#You may resubmit the failed apply using:
begin
DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '24.21.1447');
end;
#or
execute DBMS_APPLY_ADM.EXECUTE_All_ERRORS;

#The apply process stopped because of a full tablespace; now the tablespace is OK. How many messages remain to be dequeued?
SELECT s.SUBSCRIBER_NAME, q.QUEUE_SCHEMA, q.QUEUE_NAME, s.LAST_DEQUEUED_SEQ,
s.NUM_MSGS, s.TOTAL_SPILLED_MSG
FROM V$BUFFERED_QUEUES q, V$BUFFERED_SUBSCRIBERS s, DBA_APPLY a
WHERE q.QUEUE_ID = s.QUEUE_ID AND s.SUBSCRIBER_ADDRESS IS NULL AND s.SUBSCRIBER_NAME = a.APPLY_NAME;
SUBSCRIBER_NAME QUEUE_SCHEMA QUEUE_NAME    LAST_DEQUEUED_SEQ NUM_MSGS TOTAL_SPILLED_MSG
APPLY_TEST2     STRMADMIN    STREAMS_QUEUE               183       43                50
APPLY_TEST      STRMADMIN    STREAMS_QUEUE               189        0                 0
#There are 43 messages to dequeue

#If the apply process has stopped due to an exception
#YOU MUST EXECUTE ERRORS BEFORE STARTING IT AGAIN
#the error queue must be empty, otherwise new changes are applied and lead to inconsistency

#Data on destination has been manually repaired; how to clear the error queue?
BEGIN
--CAUTION!!! Any pending LCR will be deleted
delete from SYS.apply$_error;
commit;
execute immediate('alter system flush shared_pool');
DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_TEST');
END;

#Is there an apply history (10g)?
select * from DBA_HIST_STREAMS_APPLY_SUM

TRIGGER_FIRING_PROPERTY

http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/gen_rep.htm#STREP110

http://www.orafaq.com/forum/t/27737/0/

begin
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY('sanjay', 'IRSTREAMS_TENDER_DTL_TBL', FALSE);
end;
/

Trigger Firing Property

You can control a DML or DDL trigger's firing property using the SET_TRIGGER_FIRING_PROPERTY procedure in the DBMS_DDL package. This procedure lets you specify whether a trigger's firing property is set to fire once. If a trigger's firing property is set to fire once, then it does not fire in the following cases:

· When a relevant change is made by an apply process

· When a relevant change results from the execution of one or more apply errors using the EXECUTE_ERROR or EXECUTE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package

If a trigger is not set to fire once, then it fires in both of these cases. By default, DML and DDL triggers are set to fire once. You can check a trigger's firing property by using the IS_TRIGGER_FIRE_ONCE function in the DBMS_DDL package.

For example, in the hr schema, the update_job_history trigger adds a row to the job_history table when data is updated in the job_id or department_id column in the employees table. Suppose, in a Streams environment, the following configuration exists:

· A capture process captures changes to both of these tables at the dbs1.net database.

· A propagation propagates these changes to the dbs2.net database.

· An apply process applies these changes at the dbs2.net database.

· The update_job_history trigger exists in the hr schema in both databases.

If the update_job_history trigger is not set to fire once at dbs2.net in this scenario, then these actions result:

1. The job_id column is updated for an employee in the employees table at dbs1.net.

2. The update_job_history trigger fires at dbs1.net and adds a row to the job_history table that records the change.

3. The capture process at dbs1.net captures the changes to both the employees table and the job_history table.

4. A propagation propagates these changes to the dbs2.net database.

5. An apply process at the dbs2.net database applies both changes.

6. The update_job_history trigger fires at dbs2.net when the apply process updates the employees table.

In this case, the change to the employees table is recorded twice at the dbs2.net database: when the apply process applies the change to the job_history table and when the update_job_history trigger fires to record the change made to the employees table by the apply process.

A database administrator might not want the update_job_history trigger to fire at the dbs2.net database when a change is made by the apply process. Similarly, a database administrator might not want a trigger to fire because of the execution of an apply error transaction. If the update_job_history trigger's firing property is set to fire once, then it does not fire at dbs2.net when the apply process applies a change to the employees table, and it does not fire when an executed error transaction updates the employees table.

Also, if you use the ON SCHEMA clause to create a schema trigger, then the schema trigger fires only if the schema performs a relevant change. Therefore, when an apply process is applying changes, a schema trigger that is set to fire always fires only if the apply user is the same as the schema specified in the schema trigger. If the schema trigger is set to fire once, then it never fires when an apply process applies changes, regardless of whether the apply user is the same as the schema specified in the schema trigger.
For example, if you specify a schema trigger that always fires on the hr schema at a source database and destination database, but the apply user at a destination database is strmadmin, then the trigger fires when the hr user performs a relevant change on the source database, but the trigger does not fire when this change is applied at the destination database. However, if you specify a schema trigger that always fires on the strmadmin schema at the destination database, then this trigger fires whenever a relevant change is made by the apply process, regardless of any trigger specifications at the source database.
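To check where a given trigger currently stands, a minimal sketch using the IS_TRIGGER_FIRE_ONCE function mentioned above (owner and trigger name reuse the example from the code block earlier in this post):

set serveroutput on
begin
  if DBMS_DDL.IS_TRIGGER_FIRE_ONCE('SANJAY', 'IRSTREAMS_TENDER_DTL_TBL') then
    dbms_output.put_line('trigger is set to fire once');
  else
    dbms_output.put_line('trigger always fires');
  end if;
end;
/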
http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/gen_rep.htm#STREP110


Tuesday, May 26, 2009

Grant select,insert privileges

grant select,insert,delete,update on username.tablename to username;

grant privileges

REVOKE SELECT ON "BV1TO1"."IREPS_NIT_HEADER" FROM "SANJAY";
GRANT SELECT ON "BV1TO1"."IREPS_NIT_HEADER" TO "SANJAY" WITH GRANT OPTION;

oracle pl/sql link

http://www.java2s.com/Code/Oracle/CatalogOracle.htm

The V$SESSION dynamic performance view contains all the currently active users' usernames and the SQL they are executing in the database.


SQL>
SQL> select a.username,
2 s.sql_text
3 from v$session a,v$sqltext s
4 where a.sql_address = s.address
5 and a.sql_hash_value = s.hash_value
6 AND A.STATUS='ACTIVE'
7 order by a.username,a.sid,s.piece
8 /
USERNAME
------------------------------
SQL_TEXT
--------------------------------------------------
JAVA2S
select a.username, s.sql_text from v$session a,v$s
qltext s where

JAVA2S
a.sql_address = s.address and a.sql_hash_value =
s.hash_value A

JAVA2S
ND A.STATUS='ACTIVE' order by a.username,a.sid,s.p
iece

JAVA2S
DECLARE v_MyNumber NUMBER := 0; BEGIN LOOP
IF v_

JAVA2S
MyNumber = 7 THEN EXIT; END IF;
v_MyNumber


USERNAME
------------------------------
SQL_TEXT
--------------------------------------------------
JAVA2S
:= v_MyNumber + 2; END LOOP; END;


6 rows selected.


Query dba_updatable_columns



SQL>
SQL> set echo off
SQL> set verify off
SQL> set linesize 72
SQL> set pagesize 9999
SQL> set feedback on
SQL>
SQL> column view_name format a30
SQL> column column_name format a30
SQL> column i format a1
SQL> column u format a1
SQL> column d format a1
SQL>
SQL> clear breaks
breaks cleared
SQL> break on view_name
SQL>
SQL> select table_name view_name,
2 column_name,
3 decode( insertable, 'YES', null, '*' ) i,
4 decode( updatable, 'YES', null, '*' ) u,
5 decode( deletable, 'YES', null, '*' ) d
6 from dba_updatable_columns
7 where rownum <>
SQL> prompt

SQL> prompt '*' indicates action not permitted.
'*' indicates action not permitted.
SQL> prompt


Query user_triggers

SQL>
SQL>
SQL> select trigger_name, trigger_type,
2 triggering_event, trigger_body
3 from user_triggers
4 where rownum <>

Query user_triggers with trigger name

SQL>
SQL>
SQL> CREATE TABLE employees
2 ( employee_id number(10) not null,
3 last_name varchar2(50) not null,
4 email varchar2(30),
5 hire_date date,
6 job_id varchar2(30),
7 department_id number(10),
8 salary number(6),
9 manager_id number(6)
10 );

Table created.

SQL>
SQL>
SQL> insert into employees( employee_id, last_name, email, hire_date, job_id, salary,department_id ,manager_id)
2 values ( 1001, 'Lawson', 'lawson@g.com', '01-JAN-2002','MGR', 30000,1 ,1004);

1 row created.

SQL>
SQL> insert into employees( employee_id, last_name, email, hire_date, job_id, salary, department_id ,manager_id)
2 values ( 1002, 'Wells', 'wells@g.com', '01-JAN-2002', 'DBA', 20000,2, 1005 );

1 row created.

SQL>
SQL> insert into employees( employee_id, last_name, email, hire_date, job_id, salary, department_id ,manager_id)
2 values( 1003, 'Bliss', 'bliss@g.com', '01-JAN-2002', 'PROG', 24000,3 ,1004);

1 row created.

SQL>
SQL> insert into employees( employee_id, last_name, email, hire_date, job_id, salary, department_id, manager_id)
2 values( 1004, 'Kyte', 'tkyte@a.com', SYSDATE-3650, 'MGR',25000 ,4, 1005);

1 row created.

SQL>
SQL> insert into employees( employee_id, last_name, email, hire_date, job_id, salary, department_id, manager_id)
2 values( 1005, 'Viper', 'sdillon@a .com', SYSDATE, 'PROG', 20000, 1, 1006);

1 row created.

SQL>
SQL> insert into employees( employee_id, last_name, email, hire_date, job_id, salary, department_id,manager_id)
2 values( 1006, 'Beck', 'clbeck@g.com', SYSDATE, 'PROG', 20000, 2, null);

1 row created.

SQL>
SQL> insert into employees( employee_id, last_name, email, hire_date, job_id, salary, department_id, manager_id)
2 values( 1007, 'Java', 'java01@g.com', SYSDATE, 'PROG', 20000, 3, 1006);

1 row created.

SQL>
SQL> insert into employees( employee_id, last_name, email, hire_date, job_id, salary, department_id, manager_id)
2 values( 1008, 'Oracle', 'wvelasq@g.com', SYSDATE, 'DBA', 20000, 4, 1006);

1 row created.

SQL>
SQL>
SQL>
SQL> create or replace trigger MyTrigger
2 before insert or update
3 of department_id
4 on employees
5 referencing old as old_value
6 new as new_value
7 for each row
8 when ( new_value.department_id <> 80 )
9 begin
10 :new_value.commission_pct := 0;
11 end;
12 /

Warning: Trigger created with compilation errors.

SQL>
SQL>
SQL> select trigger_type, triggering_event, when_clause, trigger_body
2 from user_triggers
3 where trigger_name = 'MYTRIGGER';

TRIGGER_TYPE     TRIGGERING_EVENT
---------------- ----------------
BEFORE EACH ROW  INSERT OR UPDATE

WHEN_CLAUSE
--------------------------------------------------
new_value.department_id <> 80

TRIGGER_BODY
--------------------------------------------------
begin
:new_value.commission_pct := 0;
end;


SQL>
SQL> drop table employees;

Table dropped.

SQL>

Restore trigger create statement from the user_triggers view
SQL> select
2 'create or replace trigger "' || trigger_name || '"' || chr(10)||
3 decode( substr( trigger_type, 1, 1 ),'A', 'AFTER', 'B', 'BEFORE', 'I', 'INSTEAD OF' ) ||chr(10) ||triggering_event || chr(10) ||
4 'ON "' || table_owner || '"."' || table_name || '"' || chr(10) ||
5 decode( instr( trigger_type, 'EACH ROW' ), 0, null,'FOR EACH ROW' ) || chr(10) ,
6 trigger_body
7 from user_triggers
8 where trigger_name = upper('YourTriggerName')
9 and rownum <>

Here's a SQL statement that shows all the tables in a given schema and whether they are candidates for the SHRINK clause:


SELECT dt.owner, dt.table_name,
(CASE
WHEN NVL(ind.cnt, 0) < 1 THEN 'Y'
ELSE 'N'
END) AS can_shrink
FROM dba_tables dt,
(SELECT table_name, COUNT(*) cnt
FROM dba_indexes di
WHERE index_type LIKE 'FUNCTION-BASED%'
GROUP BY table_name) ind
WHERE dt.table_name = ind.table_name(+)
AND dt.table_name NOT LIKE 'AQ$%'
AND dt.table_name NOT LIKE 'BIN$%'
AND dt.owner = 'REGDATA'
ORDER BY 1, 2
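For a table the query flags with can_shrink = 'Y', the shrink itself requires row movement to be enabled first; a minimal sketch (the table name is a placeholder):

-- row movement must be enabled before a segment can be shrunk
ALTER TABLE regdata.some_table ENABLE ROW MOVEMENT;
-- compact the segment and move the high water mark down
ALTER TABLE regdata.some_table SHRINK SPACE;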

Oracle log files and checkpoints: An introduction


The Oracle server maintains redo log files to minimize the loss of data in the database in case of an uncontrolled shutdown.

Online redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database.

For example, if you change a salary value in an employee table, you generate a redo record containing change vectors that describe changes to the data segment block for the table, the rollback segment data block, and the transaction table of the rollback segments.

The question here is: how are the redo log files maintained, and what information do we have about them?

A couple of interesting Oracle views:

a) To view information on log files:

SELECT * FROM v$log;

b) To view information on log file history:

SELECT thread#, first_change#,
TO_CHAR(first_time,'MM-DD-YY HH12:MIPM'),
next_change#
FROM v$log_history;

The above shows you what log state your system is in. Read more about ARCHIVELOG in the article on Oracle Backup.

Consider the parameters that can limit the number of online redo log files before setting up or altering the configuration of an instance's online redo log.

The following parameters limit the number of online redo log files that you can add to a database:

  1. The MAXLOGFILES parameter used in the CREATE DATABASE statement determines the maximum number of groups of online redo log files for each database.

Group values can range from 1 to MAXLOGFILES.

The only way to override this upper limit is to re-create the database or its control file. Thus, it is important to consider this limit before creating a database.

If MAXLOGFILES is not specified for the CREATE DATABASE statement, Oracle uses an operating system specific default value.

  2. The MAXLOGMEMBERS parameter used in the CREATE DATABASE statement determines the maximum number of members for each group.

As with MAXLOGFILES, the only way to override this upper limit is to re-create the database or control file. Thus, it is important to consider this limit before creating a database.

If no MAXLOGMEMBERS parameter is specified for the CREATE DATABASE statement, Oracle uses an operating system default value.

At any given time, Oracle uses only one of the online redo log files to store redo records written from the redo log buffer.

The online redo log file that Log Writer (LGWR) is actively writing to is called the current online redo log file. Online redo log files that are required for instance recovery are called active online redo log files. Online redo log files that are not required for instance recovery are called inactive.

If you have enabled archiving (ARCHIVELOG mode), Oracle cannot reuse or overwrite an active online log file until ARCn has archived its contents.

If archiving is disabled (NOARCHIVELOG mode), then when the last online redo log file fills, writing continues by overwriting the first available active file. The best way to determine the appropriate number of online redo log files for a database instance is to test different configurations.

The optimum configuration has the fewest groups possible without hampering LGWR's writing redo log information.

In some cases, a database instance may require only two groups. In other situations, a database instance may require additional groups to guarantee that a recycled group is always available to LGWR.

During testing, the easiest way to determine if the current online redo log configuration is satisfactory is to examine the contents of the LGWR trace file and the database's alert log.

If messages indicate that LGWR frequently has to wait for a group because a checkpoint has not completed or a group has not been archived, add groups.

LGWR writes to online redo log files in a circular fashion. When the current online redo log file fills, LGWR begins writing to the next available online redo log file.

When the last available online redo log file is filled, LGWR returns to the first online redo log file and writes to it, starting the cycle again. The numbers next to each line indicate the sequence in which LGWR writes to each online redo log file.

Filled online redo log files are available to LGWR for reuse depending on whether archiving is enabled or disabled:

· If archiving is disabled (NOARCHIVELOG mode), a filled online redo log file is available once the changes recorded in it have been written to the datafiles.

· If archiving is enabled (ARCHIVELOG mode), a filled online redo log file is available to LGWR once the changes recorded in it have been written to the datafiles and once the file has been archived.
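You can see at a glance which group is current, which are still active, and whether each has been archived, by querying v$log:

SELECT group#, sequence#, archived, status FROM v$log;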

Operations on Oracle log files :

  1. Forcing log file switches:
    ALTER SYSTEM SWITCH LOGFILE;
    To force a checkpoint without switching the log:
    ALTER SYSTEM CHECKPOINT;
  2. Clearing a log file if it has become corrupt:
    ALTER DATABASE CLEAR LOGFILE GROUP group_number;
    This statement overcomes two situations where dropping redo logs is not possible: if there are only two log groups, or if the corrupt redo log file belongs to the current group:
    ALTER DATABASE CLEAR LOGFILE GROUP 4;
  3. Clearing a log file if it has become corrupt and avoiding archiving:
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP group_number;
    Use this version if the corrupt log file has not been archived:
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
  4. Privileges related to managing log files:
    ALTER DATABASE
    ALTER SYSTEM
  5. Init file parameters related to log files:
    log_checkpoint_timeout ... set to 0
  6. Adding log file group members:
    ALTER DATABASE
    ADD LOGFILE MEMBER 'log_member_path_and_name'
    TO GROUP group_number;
    For example:
    ALTER DATABASE
    ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;
  7. Dropping log file group members:
    ALTER DATABASE
    DROP LOGFILE MEMBER 'log_member_path_and_name';
    For example:
    ALTER DATABASE
    DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';
  8. To create a new group of online redo log files, use the ALTER DATABASE statement with the ADD LOGFILE clause.

The following statement adds a new group of redo log files to the database:

ALTER DATABASE ADD LOGFILE
  ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;


Introduction:

I have noticed that many DBAs are confused about the redo log switch and the checkpoint. With this article I try to explain the concept of the checkpoint.

· What is the role of a checkpoint?

· When does a checkpoint occur?

· What should the frequency of checkpoints be?

· How do we set the value of LOG_CHECKPOINT_INTERVAL?

· What does the LOG_CHECKPOINT_TIMEOUT value mean?

· What is the significance of LOG_CHECKPOINT_TO_ALERT?

· "Checkpoint not complete".

What is the role of a checkpoint?

A checkpoint synchronizes the data blocks in memory with the datafiles on disk. When a checkpoint occurs, DBWR writes all modified database blocks from the buffer cache to the datafiles.

(BUFFER CACHE)

DBWR -----------------------------------> DATAFILES

[ Modified database blocks ]

LGWR also updates both the controlfile and the datafile headers to record the last SCN (checkpoint).

(Record Last SCN no)

LGWR -----------------------------------> CONTROLFILE & DATAFILES

The CKPT process performs this operation instead of LGWR if it is enabled; CKPT is enabled by default since version 8.0.

When does a checkpoint occur?

A checkpoint occurs:

1) At a redo log switch

2) When LOG_CHECKPOINT_TIMEOUT has expired

3) When LOG_CHECKPOINT_INTERVAL has been reached

4) When forced by the DBA

What should the frequency of checkpoints be?

Depending on the number of datafiles in a database, a checkpoint can be highly resource intensive. More frequent checkpoints reduce recovery time after a crash, since less redo needs to be reapplied, but they have an impact on performance because of the system overhead.

In short, if downtime is very critical, checkpoints should be raised frequently; otherwise your goal should be to reduce their frequency to get better performance.

How do we set the value of LOG_CHECKPOINT_INTERVAL?

LOG_CHECKPOINT_INTERVAL is defined in terms of the OS block size.

Suppose the following settings are defined in your database environment:

LOG_CHECKPOINT_INTERVAL = 10000

OS block size = 512 bytes

Redo log file Size = 20M

Number of checkpoints per redo log file = (20 * 1024 * 1024) / (512 * 10000) ≈ 4

Four checkpoints will be raised per redo log file. One checkpoint will be raised whenever (10000 * 512) bytes have been written by LGWR from the log buffer to the redo log files.

Defining log_checkpoint_interval = 0 (zero) means you are setting this value to infinity, which causes the parameter to be ignored.

If (log_checkpoint_interval * OS block size) > redo log file size, the checkpoint will occur only at the redo log switch.

You can set this parameter dynamically with ALTER SYSTEM:

SQL> ALTER SYSTEM SET LOG_CHECKPOINT_INTERVAL=100000;

What does the LOG_CHECKPOINT_TIMEOUT value mean?

The LOG_CHECKPOINT_TIMEOUT value is defined in terms of seconds in the initsid.ora file.

For example: LOG_CHECKPOINT_TIMEOUT = 1800 # 1800 sec is the default value in 8i and 9i Enterprise Edition #

In Oracle 9i, the checkpoint is raised from the position where the last write to the redo log was 1800 seconds (as defined above) ago. This also means that no buffer will remain dirty for more than 1800 seconds.

In Oracle 8i and earlier versions, the checkpoint is raised based upon the number of seconds that have passed since the last checkpoint.

For example, defining LOG_CHECKPOINT_TIMEOUT = 180 causes a checkpoint to be raised every 180 seconds.

Setting this value to 0 disables the parameter.

You can set this parameter dynamically with ALTER SYSTEM:

SQL> ALTER SYSTEM SET LOG_CHECKPOINT_TIMEOUT = 2400;

What is the significance of LOG_CHECKPOINT_TO_ALERT?

The LOG_CHECKPOINT_TO_ALERT parameter is of boolean type and its default value is FALSE. Setting this parameter to TRUE logs checkpoint start and stop times in the alert log, which allows you to monitor checkpoint activity.
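The parameter can be changed dynamically; a minimal sketch:

SQL> ALTER SYSTEM SET LOG_CHECKPOINT_TO_ALERT = TRUE;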

Sometimes there is an error in the alert log file: "checkpoint not complete". What does it indicate?

It means Oracle is ready to recycle the redo logs but cannot, because the checkpoint in the previous log is still in progress.

You can also query the v$sysstat view to compare the values of the statistics 'background checkpoints completed' and 'background checkpoints started':

SQL> select name, value from v$sysstat where name like 'background checkpoints%';

If the difference between these two values is greater than 1, you should:

1) Add more redo log groups or increase the size of redo logs.

2) Reduce the frequency of checkpoints by increasing LOG_CHECKPOINT_INTERVAL.
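For point 1) above, adding a group is a single statement; a sketch (paths and size are examples only):

SQL> ALTER DATABASE ADD LOGFILE GROUP 5
     ('/oracle/dbs/log5a.rdo', '/oracle/dbs/log5b.rdo') SIZE 100M;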

Conclusion:

Checkpoints play a very important role in database activity, so DBAs must monitor their efficiency during peak database activity. Log switches cause a checkpoint, but a checkpoint does not cause a log switch.

You can send emails from an Oracle database via UTL_SMTP


Here's a simple example:

declare
  mail_conn UTL_SMTP.CONNECTION;
BEGIN
  -- open a connection to the SMTP server on port 25
  mail_conn := utl_smtp.open_connection('yourmailserver.com', 25);
  -- identify the sending host to the server
  utl_smtp.helo(mail_conn, 'yourmailserver.com');
  -- specify sender and recipient
  utl_smtp.mail(mail_conn, 'youremail@yourdomain.com');
  utl_smtp.rcpt(mail_conn, 'youremail@yourdomain.com');
  -- send the message body
  utl_smtp.data(mail_conn, 'hello');
  -- close the SMTP session
  utl_smtp.quit(mail_conn);
END;
/
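For a message with real headers and a multi-line body, the single data call can be replaced by the open_data/write_data/close_data trio; a sketch along the same lines (same placeholder server and addresses as above):

declare
  mail_conn UTL_SMTP.CONNECTION;
BEGIN
  mail_conn := utl_smtp.open_connection('yourmailserver.com', 25);
  utl_smtp.helo(mail_conn, 'yourmailserver.com');
  utl_smtp.mail(mail_conn, 'youremail@yourdomain.com');
  utl_smtp.rcpt(mail_conn, 'youremail@yourdomain.com');
  -- stream the message: headers first, then a blank line, then the body
  utl_smtp.open_data(mail_conn);
  utl_smtp.write_data(mail_conn, 'Subject: test from the database' || utl_tcp.crlf);
  utl_smtp.write_data(mail_conn, utl_tcp.crlf || 'hello');
  utl_smtp.close_data(mail_conn);
  utl_smtp.quit(mail_conn);
END;
/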

restore database preview and validate


The ability to foresee a disaster helps us in planning for it and the ability to foresee our preparedness for a disaster can literally make the difference between survival and extinction.

RMAN's PREVIEW and VALIDATE commands, if used on a regular basis (daily), can prove to be immensely useful to foresee whether a recovery will actually succeed, thereby enabling us to rectify the possible causes of failure well in advance.

PREVIEW

Description: The preview option of the restore command helps you identify all the backups required for the specified restore command (see examples in the Usage section). Preview displays a list of all available backups needed for a restore operation. If used in SUMMARY mode, it produces a summary report for the restore operation.

Usage:

RMAN> restore database preview;

RMAN> restore database from tag FULL_BKP preview;

RMAN> restore datafile 1, 2 preview;

RMAN> restore archivelog all preview summary;

RMAN> restore archivelog from time 'sysdate - 1/24' preview summary;

RMAN> restore archivelog from scn 25 preview summary;

VALIDATE

Description: VALIDATE can be used as an option on the restore command or as a command by itself (usage given below). The purpose of RMAN validation is to check for (structural) block corruption and missing backup sets. By default, VALIDATE checks for structural corruption, but it can also be used to identify logical corruption by specifying the CHECK LOGICAL clause on the RESTORE/VALIDATE command.

Usage:

RMAN> restore database validate check logical;

RMAN> restore database validate;

RMAN> restore database from tag FULL_BKP validate;

RMAN> restore datafile 1 validate;

RMAN> restore archivelog all validate;

RMAN> restore controlfile validate;

RMAN> restore tablespace users validate;

—————————————————————

RMAN> validate backupset 112 check logical;

Use the RMAN> list backup; command to obtain the backupset key (112 above)

RMAN> validate database check logical;

RMAN> validate database;

All relevant information, as and when encountered, will be appended to this article. Have fun.

Why do you run orainstRoot and ROOT.SH once you finalize the Installation


Why do you run orainstRoot and ROOT.SH once you finalize the Installation?

orainstRoot.sh needs to be run to change the permissions of the oraInventory directory to 770 and its group name to dba.

root.sh (located in the ORACLE_HOME) needs to be run to create the oratab file in /etc/oratab (/var/opt/oracle/oratab on Solaris) and to copy dbhome, oraenv and coraenv to /usr/local/bin.

orainstRoot.sh

[root@oracle11g ~]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory to 770.

Changing groupname of /u01/app/oraInventory to dba.

The execution of the script is complete

root.sh

[root@oracle11g ~]# /u01/app/oracle/product/11.1.0/db_1/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

Glossary

buffer busy due to global cache

Buffer busy due to global cache is a wait event that is signaled when a process has to wait for a block to become available because another process is obtaining a resource for this block.

buffer busy waits

Buffer busy waits is a wait event that is signaled when a process cannot get a buffer because another process is using the buffer at that moment.

cache convert waits

The cache convert waits per transactions statistic is the total number of waits for all up-convert operations, such as global cache null to S, global cache null to X, and global cache S to X.

Cache Fusion

Cache Fusion allows the direct transfer of data blocks between instances by way of an interconnect without causing forced writes to disk. That is, when one instance needs a current or consistent-read copy of a data block from another instance for a query or DML operation, the holding instance can transmit the block directly into the cache of the requesting instance.

cache open waits

The cache open waits per transactions statistic is the total number of waits for global cache open S and global cache open X.

cluster

A set of instances that typically run on different nodes. Each instance coordinates with the others when accessing the shared database residing on disk.

Cluster Manager (CM)

Cluster Manager is an operating system-dependent component that discovers and tracks the membership state of nodes by providing a common view of membership across the cluster. The Cluster Manager also monitors process health. The Lock Monitor Process (LMON), a background process that monitors the health of the Global Cache Service (GCS), registers and de-registers from the CM. The CM also manages recovery from any network card or cable failures.

connection load balancing

A feature that balances the number of active connections among various instances and shared server dispatchers for the same service. Because of service registration's ability to register with remote listeners, a listener is always aware of all instances and dispatchers. This way, a listener can send an incoming client request for a specific service to the least loaded instance and least loaded dispatcher regardless of its location.

connect-time failover

A client connect request is forwarded to another listener if the first listener is not responding. Connect-time failover is enabled by service registration, because the listener knows whether an instance is up prior to attempting a connection.

consistent gets

Consistent gets are the number of buffers that are obtained in consistent read (CR) mode.

consistent read

The Global Cache Service (GCS) ensures that a consistent read block (also known as the master copy data block) is maintained. The consistent read block is the master block version that holds all the changes. It is held in at least one System Global Area (SGA) in the cluster if the block is to be changed. If an instance needs to read the block, then the current version of the block can reside in many buffer caches as a shared resource. Thus, the most recent copy of the block in all System Global Areas contains all changes made to that block by all instances, regardless of whether any transactions on those instances have committed.

Console

The Oracle Enterprise Manager Console gives you a central point of control for the Oracle environment through an intuitive graphical user interface (GUI) that provides powerful and robust system management.

control file

A file that records the physical structure of a database and contains the database name, the names and locations of associated databases and online redo log files, the timestamp of the database creation, the current log sequence number, checkpoint information and various other records about the database's structure and health.

CR blocks received per transaction

The number of CR blocks shipped from the instance that has a block in exclusive access mode to the instance requesting a CR version of this block.

cr request retry

The cr request retry statistic is a wait that is incurred whenever Oracle re-submits a consistent read request after detecting that the holding instance is no longer available.

data dependent routing

A method of routing data based on how the data is used within an application.

datafile

A file that contains the contents of logical database structures, such as tables and indexes. One or more datafiles form a logical unit of storage called a tablespace. A datafile can be associated with only one tablespace and only one database.

db block changes

Db block changes is a statistic that shows the number of current buffers obtained in exclusive mode for DML.

db block gets

db block gets is a statistic that shows the number of current buffers obtained for a read.

Database Writer (DBWn)

The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk.

DBWR cross-instance writes

DBWR cross-instance writes (also known as forced writes) are the number of writes that an instance has to perform to disk to make a previously exclusively held block available for another instance to read into its buffer cache. DBWR cross-instance writes are practically eliminated with Cache Fusion, unless you specify a value greater than 0 (zero) for the GC_FILES_TO_LOCKS parameter.

dedicated server

A server that requires a dedicated server process for each user process. There is one server process for each client. Oracle Net sends the address of an existing server process back to the client. The client then resends its connect request to the server address provided. Contrast this with the shared server.

degree of parallelism (DOP)

The degree of parallelism specifies the number of processes, or threads, used in parallel operations. Each parallel process or thread can use one or two parallel execution processes depending on the SQL statement's complexity.

DFS Lock Handles

DFS Lock Handles are pointers to global resources. To perform operations on global enqueue service resources, the process first needs to acquire a DFS handle.

disk affinity

Disk affinity is the relationship between data on a disk and the instance that needs to access it. True disk affinity is only available in shared nothing disk configurations. This enables you to partition tablespaces across disks such that each partition is accessed by one and only one instance. The instance accessing the data on that disk has disk affinity.

dispatcher

A process that enables many clients to connect to the same server without the need for a dedicated server process for each client. A dispatcher handles and directs multiple incoming network session requests to shared server processes. See also shared server.

flow control messages sent

The number of flow-control (nullreq and nullack) messages that are sent by the LMS process.

flow control messages received

The number of flow-control (nullreq and nullack) messages received by the LMD process.

forced disk writes

Forced disk writes refer to the forced writing of a data block to disk by one instance when the data block is requested by another instance for a DML operation. Forced Writes are practically eliminated in Oracle9i with Cache Fusion, but they remain relevant if you specify 1:1 or 1:n releasable or fixed resources with the GC_FILES_TO_LOCKS parameter. In this case, Cache Fusion is disabled.

global cache bg acks

Global cache bg acks is a wait event that only can occur during startup or shutdown of an instance when the LMS process finalizes its operations.

global cache busy

The global cache busy statistic is a wait event that occurs whenever a session has to wait for an ongoing operation on the resource to complete.

global cache cr cancel wait

The global cache cr cancel wait statistic is a wait event that occurs whenever a session waits for the AST to complete for a canceled block access request. Cancelling the request is part of the Cache Fusion Write Protocol.

global cache converts

Global cache converts are resource converts of buffer cache blocks. This statistic is incremented whenever GCS resources are converted from Null to Exclusive, Shared to Exclusive, or Null to Shared.

global cache convert time

Global cache convert time is the accumulated time that all sessions require to perform global conversions on GCS resources.

global cache convert timeouts

Global cache convert timeouts are incremented whenever a resource operation times out.

global cache cr block flush time

Global cache cr block flush time is the time waited for a log flush when a CR request is served. Once LGWR has completed flushing the changes to a buffer that is on the log flush queue, LMS can send it. It is part of the serve time.

global cache cr blocks received

When a process requests a consistent read for a data block that is not in its local cache, it sends a request to another instance. Once the request is complete, in other words, the buffer has been received, Oracle increments the statistic.

global cache cr block receive time

The global cache cr block receive time statistic records the total time required for consistent read requests to complete. In other words, it records the accumulated round-trip time for all requests for consistent read blocks.

global cache cr blocks served

The global cache cr blocks served statistic is the number of requests for a consistent read block served by LMS. Oracle increments this statistic when the block is sent.

global cache cr block build time

The global cache cr block build time statistic is the time that the LMS process requires to create a consistent read block on the holding instance.

global cache cr block send time

The global cache cr block send time statistic is the time required by LMS to initiate a send of a consistent read block. For each request, timing begins when the block is sent and stops when the send has completed. This statistic only measures the time it takes to initiate the send; it does not measure the time elapsed before the block arrives at the requesting instance.

global cache cr cancel wait

A wait event that occurs when a session waits for the acquisition interrupt to complete for a canceled CR request. Cancelling the CR request is part of the Cache Fusion write protocol.

global cache cr request

The global cache cr request statistic is a wait event that occurs whenever a process has to wait for a pending CR request to complete. The process waited for either shared access to a block to be granted before reading the block from disk into the cache, or it waited for the LMS of the holding instance to send the block.

global cache cr timeouts

The global cache cr timeouts statistic identifies a request for a consistent read block that has an excessive delay and that has timed out. This could be due to system performance problems, a slow interconnect network, or dropped network packets. The value of this statistic should always be 0 (zero).

global cache current block flush time

The global cache current block flush time statistic is the time it takes to flush the changes to a block to disk, otherwise known as a forced log flush, before the block is shipped to the requesting instance.

global cache current block pin time

The global cache current block pin time statistic is the time it takes to pin the current block before shipping it to the requesting instance. Pinning a block is necessary to disallow further changes to the block while it is prepared to be shipped to another instance.

global cache current blocks received

The global cache current blocks received statistic is the number of current blocks received from the holding instance over the interconnect.

global cache current block receive time

The global cache current block receive time statistic is the accumulated round-trip time for all requests for current blocks.

global cache current block send time

The global cache current block send time statistic is the time it takes to send the current block to the requesting instance over the interconnect.

global cache current blocks served

The global cache current blocks served statistic is the number of current blocks shipped to the requesting instance over the interconnect.

global cache freelist wait

The global cache freelist wait statistic is a wait event that occurs when Oracle must wait after it detects that the local element free list is empty.

global cache freelist waits

The global cache freelist waits statistic is the number of times Oracle found the resource element free list empty.

global cache gets

The global cache gets statistic is the number of buffer gets that result in opening a new resource with the GCS.

global cache get time

The global cache get time statistic is the accumulated time of all sessions needed to open a GCS resource for a local buffer.

global cache initialization parameters

Global cache initialization parameters are initialization parameters that determine the size of the collection of global resources that protect the database buffers on all instances.


Note:

Manually setting GC_FILES_TO_LOCKS overrides the default resource control behavior in Real Application Clusters.

global cache null to S

The global cache null to S statistic is a wait event that occurs whenever a session has to wait for a resource conversion to complete.

global cache null to X

The global cache null to X statistic is a wait event that occurs whenever a session has to wait for this resource conversion to complete.

global cache open S

The global cache open S statistic is a wait event that occurs when a session has to wait to receive permission for shared access to the requested resource.

global cache open X

The global cache open X statistic is a wait event that occurs when a session has to wait to receive exclusive access to the requested resource.

global cache S to X

The global cache S to X statistic is a wait event that occurs whenever a session has to wait for this resource conversion to complete.

global cache pending ast

The global cache pending ast statistic is a wait event that can occur when a process waits for an acquisition interrupt before Oracle closes a resource element.

global cache pred cancel wait

A wait event that occurs when a session must wait for the acquisition interrupt to complete for a canceled predecessor read request. Cancelling a predecessor read request is part of the Cache Fusion write protocol.

global cache retry prepare

The global cache retry prepare statistic is a wait event that occurs whenever Oracle fails to prepare a buffer for a consistent read or Cache Fusion request, and when Oracle cannot ignore or skip this failure.

Global Cache Service (GCS)

The Global Cache Service is the process that implements Cache Fusion. It maintains block modes for blocks in the global role and is responsible for block transfers among instances. The Global Cache Service accomplishes these tasks using background processes such as the Global Cache Service process (LMS) and the Global Enqueue Service process (GES).

Global Cache Service Processes (LMSn)

The Global Cache Service Processes (LMSn) handle remote Global Cache Service messages. Current Real Application Clusters software provides for up to 10 Global Cache Service Processes. The number of LMSn processes varies depending on the amount of messaging traffic among nodes in the cluster. The LMSn processes handle the acquisition interrupt and blocking interrupt requests from the remote instances for Global Cache Service resources. For cross-instance consistent read requests, LMSn creates a consistent read version of the block and sends it to the requesting instance. LMSn also controls the flow of messages to remote instances.

global database name

The global database name is the full name of the database that uniquely identifies it from another database. The global database name is of the form database_name.database_domain, for example, sales.us.acme.com.
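
You can verify the global database name directly; the value shown is just the example above:

SQL> select * from global_name;

GLOBAL_NAME
----------------------------------------
SALES.US.ACME.COM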

Global Enqueue Service (GES)

This service coordinates enqueues that are shared globally.

Global Enqueue Service Daemon (LMD)

The Global Enqueue Service Daemon (LMD) is the resource agent process that manages Global Enqueue Service resource requests. The LMD process also handles deadlock detection Global Enqueue Service requests. Remote resource requests are requests originating from another instance.

Global Enqueue Service Monitor (LMON)

The background Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage global resources. LMON manages instance and process expirations and the associated recovery for the Global Cache and Global Enqueue Services. In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as cluster group services (CGS).

global lock async converts

The global lock async converts statistic is the number of resources that Oracle converted asynchronously from an incompatible mode.

global lock sync gets

The global lock sync gets statistic is the number of GES resources that Oracle must open synchronously. Sync gets are mostly for GES resources (for example, library cache resources).

global lock async gets

The global lock async gets statistic is the number of GES resources that Oracle must open asynchronously. Async gets are only used for GES resources and include the number of global cache gets.

global lock get time

The global lock get time statistic is the accumulated time for all GES resources that Oracle needed to open.

global lock sync converts

The global lock sync converts statistic is the number of GES resources that Oracle converted from an incompatible mode. Sync converts occur mostly for GES resources.

global lock convert time

The global lock convert time statistic is the accumulated time for all global lock sync converts and global lock async converts.

high water mark

The high water mark is the highest limit within a segment for which space has been allocated to store data blocks. When a commit executes, if the new limit is greater than the previous limit, the high water mark is updated.

hybrid database

A hybrid database is one that has both OLTP and Data Warehouse processing characteristics.

initialization parameter file

The initialization parameter file is a file with parameter settings that initialize the database (initdb_name.ora). In the case of Real Application Clusters, it initializes the instances within a cluster (initsid.ora). A single server parameter file (SPFILE), by default named spfile.ora, can alternatively hold the settings for all instances.

instance

For a Real Application Clusters database, each node within the cluster has an instance of the running Oracle9i software referencing the database.

When a database is started on a database server (regardless of the type of computer), Oracle allocates a memory area called the System Global Area (SGA) and starts one or more Oracle processes. This combination of the SGA and the Oracle processes is called an instance. The memory and processes of an instance efficiently manage the database's data and serve the database users. You can connect to any instance to access information within a Real Application Clusters database.

Each instance has a unique Oracle System Identifier (SID), instance name, instance number, rollback segments, and thread ID.

instance groups

Use instance groups to limit the number of instances that participate in a parallel operation. You can create any number of instance groups, each consisting of one or more instances. You can then specify which instance group is to be used for any or all parallel operations. Parallel execution servers will only be used on instances that are members of the specified instance group.
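
As a hedged sketch, an instance joins groups through the INSTANCE_GROUPS parameter, and PARALLEL_INSTANCE_GROUP selects the group used for parallel operations; the group names below are invented:

# initsid.ora of instance 1 -- this instance is a member of two groups
INSTANCE_GROUPS = batch, oltp

# run parallel operations only on members of the batch group
PARALLEL_INSTANCE_GROUP = batch

PARALLEL_INSTANCE_GROUP can also be changed per session with ALTER SESSION.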

instance name

Represents the name of the instance and is used to uniquely identify a specific instance when multiple instances share common service names. The instance name is identified by the INSTANCE_NAME parameter in the initialization parameter file. The instance name is identical to Oracle System Identifier (SID).
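
The instance name (together with the instance number, covered next) can be read from the dynamic view V$INSTANCE; the values shown are illustrative:

SQL> select instance_name, instance_number from v$instance;

INSTANCE_NAME    INSTANCE_NUMBER
---------------- ---------------
RAC1                           1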

instance number

A number that associates extents of data blocks with particular instances. The instance number enables you to start up an instance and ensure that it uses the extents allocated to it for inserts and updates. This ensures that it does not use space allocated for other instances. The instance cannot use data blocks in another free list unless the instance is restarted with that instance number.

You can use various SQL options with the INSTANCE_NUMBER initialization parameter to associate extents of data blocks with instances.

The instance number is depicted by the INSTANCE_NUMBER parameter in the instance initialization file, initsid.ora.
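
For example (the table name and sizes are hypothetical), you can pre-allocate an extent for a particular instance:

# initsid.ora of the second instance
INSTANCE_NUMBER = 2

SQL> alter table emp allocate extent (size 100k instance 2);

Table altered.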

interconnect

An interconnect is an arrangement of data paths that, in the case of Real Application Clusters and Cache Fusion, allows data to be sent between the caches of disjoint nodes.

Inter-Process Communication (IPC)

The inter-process communication layer is an operating system-dependent component that enables transfers of messages, consistent-read, and current versions of data blocks between instances on different nodes.

Lock Manager Servers (LMSn)

See Global Cache Service Processes (LMSn).

listener

The listener process is a separate process residing on the server that listens for incoming client connection requests and manages server traffic. The listener brokers the client request, handing the request to the server when the server is available. Every time a client (or server acting as a client) requests a network session with a server, a listener receives the actual request. If the client's information matches the listener's information, then the listener grants a connection to the server.

Lock Manager Daemon Process (LMD)

See Global Enqueue Service Daemon (LMD).

Lock Manager Server Process (LMS)

See Global Cache Service Processes (LMSn).

Lock Monitor Process (LMON)

See Global Enqueue Service Monitor (LMON).

load balancing

Load balancing is the even distribution of active database connections among instances. In the context of parallel execution, load balancing refers to the distribution of parallel execution server processes to spread work among the CPUs and memory resources.

lock buffers for read

The lock buffers for read statistic is the number of up-converts from Null to Shared.

lock gets per transaction

The lock gets per transaction statistic is the number of global lock sync gets and global lock async gets per transaction.

lock converts per transaction

The lock converts per transaction statistic is the number of global lock sync converts and global lock async converts per transaction.

messages flow controlled

The number of messages intended to be sent directly but that are instead queued and delivered later by LMD/LMS.

messages received

The number of messages received by the LMD process.

messages sent directly

The number of messages sent directly by Oracle processes.

messages sent indirectly

The number of messages explicitly queued by Oracle processes.

Multi-threaded server (MTS)

See shared server.

Net8

See Oracle Net.

node

A node is a machine where an instance resides.

operating system context switches

Operating system context switches occur when a thread's time allotment has elapsed, when a thread with a higher priority has become ready to run, or when a running thread needs to wait, for example, for I/O to complete.

operating system-dependent layer (OSD)

The operating system-dependent (OSD) layer is a software layer that consists of several software components developed either by vendors for UNIX platforms, or by Oracle for NT installations of the Oracle database. The OSD layer maps the key operating system/cluster-ware services required for operation of Real Application Clusters.

Oracle Data Gatherer

The Oracle Data Gatherer collects performance statistics for the Oracle Performance Manager. You must install the Oracle Data Gatherer on a node on your network.

Oracle Enterprise Manager

A system management tool that provides an integrated solution for centrally managing your heterogeneous environment. Oracle Enterprise Manager combines a graphical console, management server, Oracle Intelligent Agent, repository database, and tools to provide an integrated, comprehensive systems management platform for managing Oracle products.

Oracle Enterprise Manager Console

The Oracle Enterprise Manager Console is a suite of GUI tools that make up the Oracle Enterprise Manager product.

Oracle Intelligent Agent

The Oracle Intelligent Agent is a process that runs on each node and functions as the executor of jobs and events sent by the Console by way of the Management Server. The Oracle Intelligent Agent ensures high availability because the agent can function regardless of the status of the Console or network connections.

Oracle Net

Oracle Net is the foundation of Oracle's family of networking products, allowing services and their applications to reside on different computers and communicate as peer applications. The main function of Oracle Net is to establish network sessions and transfer data between a client machine and a server or between two servers. Once a network session is established, Oracle Net acts as a data courier for the client and the server.

Oracle Parallel Server Management

See Server Management.

Oracle Performance Manager

Oracle Performance Manager is an add-on application for Oracle Enterprise Manager that offers a variety of tabular and graphic performance statistics for Real Application Clusters. The statistics represent the aggregate performance for all instances.

Oracle Real Application Clusters

See Real Application Clusters.

Oracle System Identifier (SID)

An Oracle System Identifier is a name that identifies a specific instance of a running Oracle database. For a Real Application Clusters database, each node within the cluster has an instance referencing the database. The database name, specified by the DB_NAME parameter in the initdb_name.ora file, and the unique thread ID make up each node's SID. The thread ID starts at 1 for the first instance in the cluster and is incremented by 1 for each subsequent instance.

For pre-release 8.1 databases, SID identified the database. The SID was included in the part of the connect descriptor in a tnsnames.ora file, and in the definition of the network listener in the listener.ora file.

Oracle9i Enterprise Edition

Oracle9i Enterprise Edition is an object-relational database management system (ORDBMS). It provides the applications and files to manage a database. All other Real Application Clusters components are layered on top of the Oracle9i Enterprise Edition.

parallel automatic tuning

Parallel automatic tuning automatically controls values for all parameters related to parallel execution. These parameters affect several aspects of server processing, namely, the degree of parallelism (DOP), the adaptive multi-user feature, and memory sizing. Initialize and automatically tune parallel execution by setting the initialization parameter PARALLEL_AUTOMATIC_TUNING to true.
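
Enabling it is a single setting in the initialization parameter file:

# init.ora
PARALLEL_AUTOMATIC_TUNING = true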

parallel execution

Parallel execution refers to multiple processes operating together to complete a single database transaction. Parallel execution works on both single and multiple instance Oracle installations. Parallel execution is also referred to as parallel query.

physical reads

The physical reads statistic is the number of disk reads that had to be performed when a request for a data block could not be satisfied from a local cache.

physical writes

The physical writes statistic is the number of write I/Os performed by the DBWn processes. This number includes the number of DBWR cross instance writes (forced writes) in Oracle9i when GC_FILES_TO_LOCKS is set. Setting GC_FILES_TO_LOCKS for a particular datafile will enable the use of the old ping protocol, and will not leverage the Cache Fusion architecture.
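
Both counters can be checked in V$SYSSTAT; the values shown are illustrative:

SQL> select name, value from v$sysstat
  2  where name in ('physical reads', 'physical writes');

NAME                       VALUE
--------------------- ----------
physical reads             41230
physical writes            10984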

ping

Pings are actually forced disk writes, which were common in previous Oracle cluster software products. Pings occurred because a data block can only be modified by one instance at a time. Before Real Application Clusters, if one instance modified a data block that another instance required, then whether a forced disk write occurred depended on the type of request submitted for the block. If the requesting instance needed the block for modification, then the holding instance's resources on the data block had to be converted accordingly. The first instance had to write the block to disk before the requesting instance could read it. This constituted a forced disk write to a block.

PMON process

PMON is a process monitor database process that performs process recovery when a user process fails. PMON is responsible for cleaning up the cache and freeing resources that the process was using. PMON also monitors dispatcher and server processes and restarts them if they have failed. As a part of service registration, PMON registers instance information with the listener.

raw devices

Raw devices are disks or partitions on disk drives that do not have a file system set up on them. Raw devices are used for Real Application Clusters since they enable the sharing of disks.

raw volumes

See raw devices.

Real Application Clusters

An architecture that allows multiple instances to access a shared database of datafiles. Real Application Clusters is also a software component that provides the necessary Real Application Clusters scripts, initialization files, and datafiles to make the Oracle9i Enterprise Edition a Real Application Clusters database.

Recovery Manager (RMAN)

RMAN is an Oracle tool that enables you to back up, copy, restore, and recover datafiles, control files, and archived redo logs. It is included with the Oracle server and does not require separate installation. You can invoke RMAN as a command line utility from the operating system (O/S) prompt or use the GUI-based Enterprise Manager Backup Manager.
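
A minimal command-line session (connecting to the target database with operating system authentication) could look like this:

$ rman target /

RMAN> backup database;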

redo log file

A redo log file is a file that contains a record of all changes made to data in the database buffer cache. If an instance failure occurs, then the redo log files are used to recover the modified data that was in memory.

remote instance undo block writes

The remote instance undo block writes statistic is the number of rollback segment undo blocks written to disk by DBWn as part of a forced write.

remote instance undo header writes

The remote instance undo header writes statistic is the number of rollback segment header blocks written to disk by DBWn as part of a forced write.

repository database

A repository database, such as that used by Oracle Enterprise Manager, is a set of tables in an Oracle database that stores the data needed to manage Real Application Clusters environments. This database is separate from any shared Real Application Clusters database on the nodes.

reverse key indexes

Reverse key indexes reverse the bytes of each indexed column while keeping the column order. This avoids performance degradation in Real Application Clusters where index modifications would otherwise concentrate on a small set of leaf blocks. Because the key bytes are reversed, you cannot use reverse key indexes for index range scans.
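
Creating one only takes the REVERSE keyword (the table and column are hypothetical):

SQL> create index emp_ename_ix on emp (ename) reverse;

Index created.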

rollback segment

Rollback segments contain the undo information needed to reverse changes made to data blocks by uncommitted transactions. Rollback segments also provide read consistency, roll back transactions, and help recover the database. Each node typically has two rollback segments, identified with a naming convention of RBSthread_id_rollback_number by the ROLLBACK_SEGMENTS parameter in the instance initialization file.

seed database

A seed database is a preconfigured, ready-to-use database that requires minimal user input to create.

Server Management

Server Management (SRVM) is a comprehensive, integrated system management solution for managing Real Application Clusters environments. Server Management enables you to manage multi-instance databases in heterogeneous environments. Server Management is part of the open client/server architecture of Oracle Enterprise Manager. In addition to managing cluster databases, Server Management enables you to schedule jobs, perform event management, monitor performance, and obtain statistics to tune Real Application Clusters databases.

service name

A service name is a logical representation of a database, which is the way a database is presented to clients. A database can be presented as multiple services, and a service can be implemented as multiple database instances. The service name is a string that is the global database name, a name comprising the database name (DB_NAME) and the domain name (DB_DOMAIN), entered during installation or database creation.

If you are not sure what the global database name is, then you can obtain it from the value of the SERVICE_NAMES parameter in the initialization file.
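
For example (the value shown matches the global database name example above):

SQL> show parameter service_names

NAME             TYPE        VALUE
---------------- ----------- -------------------
service_names    string      sales.us.acme.com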

service registration

Service registration is a feature by which the PMON process (or shared server Dispatcher processes when using shared server) automatically registers information with a listener. Because this information is registered with the listener, you do not need to configure the listener.ora file with this static information.

shared server

The shared server is a server configured to allow many user processes to share very few server processes, which increases the number of users that can be supported. With shared server, many user processes connect to a dispatcher. The dispatcher directs multiple incoming network session requests to a common queue. An idle shared server process from a shared pool of server processes picks up a request from the queue. This means a small pool of server processes can serve a large number of clients. Contrast this with dedicated server.
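
A hedged sketch of the two key parameters (the dispatcher and server counts are invented):

# init.ora -- start 3 TCP dispatchers and 5 shared server processes
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"
SHARED_SERVERS = 5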

star schemas

Star schemas are query-centric schemas that, when represented in a diagram, have a fact table at the center. The fact table usually contains the data element that is central to queries operating against the schema. A fact table is often quite large and is surrounded by several dimension tables that contain data that are attributes of the data in the fact table. Star schemas simplify query development because it is intuitive how to join attributes in the dimension tables with the fact table data. Star schemas are best suited for data warehousing environments and are thus less useful for OLTP environments.
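
A minimal sketch of the typical join pattern (all table and column names are hypothetical):

SQL> select t.calendar_month, p.product_name, sum(s.amount_sold)
  2  from sales s, times t, products p     -- sales is the fact table
  3  where s.time_id = t.time_id           -- join to the time dimension
  4  and s.product_id = p.product_id       -- join to the product dimension
  5  group by t.calendar_month, p.product_name;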

striping

Striping refers to the interleaving of related blocks of data across disks. If you properly implement striping, then it reduces I/O and improves performance. Because striping software is operating system-dependent, rely on your vendor documentation to ensure proper installation and configuration. There are two primary methods of striping, single-user and multi-user; these terms describe the type of environment in which each type of striping is most beneficial. The latter is commonly implemented for Real Application Clusters. With multi-user striping, the performance improvement is due to simultaneous disk arm movements reading related data on multiple hard drives. The degree to which average disk access time improves is proportional to the number of drives.

System Change Number (SCN)

System change numbers uniquely identify a committed transaction and the changes it makes. Within Real Application Clusters, system change numbers must not only be maintained within an instance, but they must also be synchronized across all instances within a cluster.
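
You can read the current SCN, for example, through the DBMS_FLASHBACK package (the value returned is of course database-specific):

SQL> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
                 4711042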

System Global Area (SGA)

The System Global Area is a group of shared memory structures that contain data and control information for an Oracle instance.

tablespace

A tablespace is a logical portion of an Oracle database used to allocate storage for table and index data. Each tablespace corresponds to one or more physical datafiles. Every Oracle database has a tablespace called SYSTEM and can have additional tablespaces. A tablespace is used to group related logical structures. For example, tablespaces commonly group all of an application's objects to simplify administrative operations.
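
For example (the datafile path and size are invented):

SQL> create tablespace app_data
  2  datafile '/u01/oradata/rac/app_data01.dbf' size 100m;

Tablespace created.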

Transmission Control Protocol/Interconnect Protocol (TCP/IP)

TCP/IP is a set of protocols that allow cooperating computers to share resources across a network.

thread ID

The thread ID is the number of a redo thread for an instance. Any available redo thread number can be used, but an instance cannot use the same thread number as another instance.

transaction monitor

A transaction monitor is a class of software products that provide a transaction execution layer above the operating system. Transaction monitors combine database updates and submit them to a database. In doing this, the transaction monitor manages some of the consistency and correctness of the database. The monitor ensures that the rules of transaction atomicity are adhered to; updates take place completely or not at all. The advantages of using transaction monitors include increased throughput.

transparent application failover (TAF)

Transparent application failover is a runtime failover mechanism for high-availability environments, such as Real Application Clusters and Oracle Real Application Clusters Guard, that refers to the failover and re-establishment of application-to-service connections. It allows client applications to automatically reconnect to the database if the connection fails, and optionally resume a SELECT statement that was in progress. This reconnect happens automatically from within the Oracle Call Interface (OCI) library.
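
A hedged tnsnames.ora sketch enabling TAF (the net service name, hosts, and service are invented):

SALES =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = sales.us.acme.com)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )

TYPE=SELECT lets an in-progress query resume after failover; METHOD=BASIC establishes the backup connection only at failover time.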

User Datagram Protocol (UDP)

The User Datagram Protocol is a protocol similar to TCP/IP; however, it is simpler to administer. It is considered less reliable than TCP/IP because, for example, it does not guarantee message ordering.

User-mode IPC

User-mode IPC (Inter-Process Communication) is an IPC-based protocol that directly accesses network hardware. Unlike kernel-mode IPC, user-mode IPC avoids the overhead of copying data into kernel space, making system calls, and incurring context switches.

Virtual Interface Architecture (VIA)

Virtual Interface Architecture is an implementation of user-mode IPC.