Wednesday, September 29, 2010

Checkpoint Tuning

Determining the time to recover from an instance failure is a necessary component of meeting required service level agreements. For example, if service levels dictate that when a node fails, instance recovery time can be no more than 3 minutes, FAST_START_MTTR_TARGET should be set to 180.
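For example, a three-minute target can be set dynamically (a minimal sketch; SCOPE=BOTH assumes an spfile is in use):

ALTER SYSTEM SET FAST_START_MTTR_TARGET = 180 SCOPE=BOTH;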

Fast-start checkpointing refers to the periodic writes by the database writer (DBWn) processes for the purpose of writing changed data blocks from the Oracle buffer cache to disk and advancing the thread-checkpoint. Setting the database parameter FAST_START_MTTR_TARGET to a value greater than zero enables the fast-start checkpointing feature.

Fast-start checkpointing should always be enabled for the following reasons:

It reduces the time required for cache recovery, and makes instance recovery time-bounded and predictable. This is accomplished by limiting the number of dirty buffers (data blocks which have changes in memory that still need to be written to disk) and the number of redo records (changes in the database) generated between the most recent redo record and the last checkpoint.

Fast-start checkpointing eliminates bulk writes and the corresponding I/O spikes that occur traditionally with interval-based checkpoints, providing a smoother, more consistent I/O pattern that is more predictable and easier to manage.
If the system is not already near or at its maximum I/O capacity, fast-start checkpointing will have a negligible impact on performance. Although fast-start checkpointing results in increased write activity, there is little reduction in database throughput, provided the system has sufficient I/O capacity.

Check-Pointing

Check-pointing is an important Oracle activity which records the highest system change number (SCN) such that all data blocks with changes at or below that SCN are known to be written out to the data files. If there is a failure and then subsequent cache recovery, only the redo records containing changes at SCNs higher than the checkpoint need to be applied during recovery.
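You can observe the current checkpoint SCN yourself (a simple illustration querying V$DATABASE):

SELECT CHECKPOINT_CHANGE# FROM V$DATABASE;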

As we are aware, instance and crash recovery occur in two steps - cache recovery followed by transaction recovery. During the cache recovery phase, also known as the rolling forward stage, Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks. The work required for cache recovery processing is proportional to the rate of change to the database and the time between checkpoints.

Mean time to recover (MTTR)

Fast-start recovery can greatly reduce the mean time to recover (MTTR), with minimal effects on online application performance. Oracle continuously estimates the recovery time and automatically adjusts the check-pointing rate to meet the target recovery time.

With 10g, the Oracle database can now self-tune check-pointing to achieve good recovery times with low impact on normal throughput. You no longer have to set any checkpoint-related parameters.

This method reduces the time required for cache recovery and makes the recovery bounded and predictable by limiting the number of dirty buffers and the number of redo records generated between the most recent redo record and the last checkpoint. Administrators specify a target (bounded) time to complete the cache recovery phase of recovery with the FAST_START_MTTR_TARGET initialization parameter, and Oracle automatically varies the incremental checkpoint writes to meet that target.

The TARGET_MTTR field of V$INSTANCE_RECOVERY contains the MTTR target in effect. The ESTIMATED_MTTR field of V$INSTANCE_RECOVERY contains the estimated MTTR should a crash happen right away.

Enable MTTR advisory

Enabling MTTR Advisory involves setting two parameters:

STATISTICS_LEVEL = TYPICAL
FAST_START_MTTR_TARGET > 0

Estimate the value for FAST_START_MTTR_TARGET as follows:

SELECT TARGET_MTTR,
ESTIMATED_MTTR,
CKPT_BLOCK_WRITES
FROM V$INSTANCE_RECOVERY;

TARGET_MTTR ESTIMATED_MTTR CKPT_BLOCK_WRITES
----------- -------------- -----------------
214 12 269880

FAST_START_MTTR_TARGET = 214

Whenever you set FAST_START_MTTR_TARGET to a nonzero value, set the following parameters to 0, as shown in the sketch below:

LOG_CHECKPOINT_TIMEOUT = 0
LOG_CHECKPOINT_INTERVAL = 0
FAST_START_IO_TARGET = 0
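For example, most of these settings could be applied dynamically (a minimal sketch; assumes an spfile; the deprecated FAST_START_IO_TARGET may need to be zeroed in the parameter file itself):

ALTER SYSTEM SET STATISTICS_LEVEL = TYPICAL SCOPE=BOTH;
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 214 SCOPE=BOTH;
ALTER SYSTEM SET LOG_CHECKPOINT_TIMEOUT = 0 SCOPE=BOTH;
ALTER SYSTEM SET LOG_CHECKPOINT_INTERVAL = 0 SCOPE=BOTH;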

Disable MTTR advisory

FAST_START_MTTR_TARGET = 0
LOG_CHECKPOINT_INTERVAL = 200000

Tuesday, September 28, 2010

Duplicate Oracle Database with RMAN

Overview
A powerful feature of RMAN is the ability to duplicate (clone) a database from a backup. It is possible to create a duplicate database on:

A remote server with the same file structure
A remote server with a different file structure
The local server with a different file structure
A duplicate database is distinct from a standby database, although both types of databases are created with the DUPLICATE command. A standby database is a copy of the primary database that you can update continually or periodically by using archived logs from the primary database. If the primary database is damaged or destroyed, then you can perform failover to the standby database and effectively transform it into the new primary database. A duplicate database, on the other hand, cannot be used in this way: it is not intended for failover scenarios and does not support the various standby recovery and failover options.

To prepare for database duplication, you must first create an auxiliary instance. For the duplication to work, you must connect RMAN to both the target (primary) database and an auxiliary instance started in NOMOUNT mode.

So long as RMAN is able to connect to the primary and duplicate instances, the RMAN client can run on any machine. However, all backups, copies of datafiles, and archived logs used for creating and recovering the duplicate database must be accessible by the server session on the duplicate host.

As part of the duplicating operation, RMAN manages the following:

Restores the target datafiles to the duplicate database and performs incomplete recovery by using all available backups and archived logs.

Shuts down and starts the auxiliary database.

Opens the duplicate database with the RESETLOGS option after incomplete recovery to create the online redo logs.

Generates a new, unique DBID for the duplicate database.
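After duplication you can verify the new DBID on the duplicate (a simple illustration):

SELECT dbid, name FROM v$database;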

Preparing the Duplicate (Auxiliary) Instance for Duplication
Create an Oracle Password File

First we must create a password file for the duplicate instance.

export ORACLE_SID=APP2
orapwd file=orapwAPP2 password=manager entries=5 force=y

Ensure Oracle Net Connectivity to both Instances

Next add the appropriate entries into the TNSNAMES.ORA and LISTENER.ORA files in the $TNS_ADMIN directory.

LISTENER.ORA

APP1 = Target Database, APP2 = Auxiliary Database

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = gentic)(PORT = 1521))
)
)

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = APP1.WORLD)
(ORACLE_HOME = /opt/oracle/product/10.2.0)
(SID_NAME = APP1)
)
(SID_DESC =
(GLOBAL_DBNAME = APP2.WORLD)
(ORACLE_HOME = /opt/oracle/product/10.2.0)
(SID_NAME = APP2)
)
)

TNSNAMES.ORA

APP1 = Target Database, APP2 = Auxiliary Database

APP1.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = gentic)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = APP1.WORLD)
)
)

APP2.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = gentic)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = APP2.WORLD)
)
)

SQLNET.ORA

NAMES.DIRECTORY_PATH= (TNSNAMES)
NAMES.DEFAULT_DOMAIN = WORLD
NAME.DEFAULT_ZONE = WORLD
USE_DEDICATED_SERVER = ON

Now restart the Listener

lsnrctl stop
lsnrctl start

Create an Initialization Parameter File for the Auxiliary Instance

Create an INIT.ORA parameter file for the auxiliary instance; you can copy it from the target instance and then modify the parameters.

### Duplicate Database
### -----------------------------------------------
# This is only used when you duplicate the database
# on the same host to avoid name conflicts

DB_FILE_NAME_CONVERT = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/)
LOG_FILE_NAME_CONVERT = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/,
/opt/oracle/db/APP1/,/opt/oracle/db/APP2/)

### Global database name is db_name.db_domain
### -----------------------------------------

db_name = APP2
db_unique_name = APP2_GENTIC
db_domain = WORLD
service_names = APP2
instance_name = APP2

### Basic Configuration Parameters
### ------------------------------

compatible = 10.2.0.4
db_block_size = 8192
db_file_multiblock_read_count = 32
db_files = 512
control_files = /u01/oracle/db/APP2/con/APP2_con01.con,
/opt/oracle/db/APP2/con/APP2_con02.con

### Database Buffer Cache, I/O
### --------------------------
# The Parameter SGA_TARGET enables Automatic Shared Memory Management

sga_target = 500M
sga_max_size = 600M

### REDO Logging without Data Guard
### -------------------------------

log_archive_format = APP2_%s_%t_%r.arc
log_archive_max_processes = 2
log_archive_dest = /u01/oracle/db/APP2/arc

### System Managed Undo
### -------------------

undo_management = auto
undo_retention = 10800
undo_tablespace = undo

### Traces, Dumps and Passwordfile
### ------------------------------

audit_file_dest = /u01/oracle/db/APP2/adm/admp
user_dump_dest = /u01/oracle/db/APP2/adm/udmp
background_dump_dest = /u01/oracle/db/APP2/adm/bdmp
core_dump_dest = /u01/oracle/db/APP2/adm/cdmp
utl_file_dir = /u01/oracle/db/APP2/adm/utld
remote_login_passwordfile = exclusive

Create a full Database Backup

Make sure that a full backup of the target is accessible on the duplicate host. You can use the following BASH script to back up the target database.

rman nocatalog target / <<-EOF
configure retention policy to recovery window of 3 days;
configure backup optimization on;
configure controlfile autobackup on;
configure default device type to disk;
configure device type disk parallelism 1 backup type to compressed backupset;
configure datafile backup copies for device type disk to 1;
configure maxsetsize to unlimited;
configure snapshot controlfile name to '/u01/backup/snapshot_controlfile';
show all;

run {
allocate channel ch1 type Disk maxpiecesize = 1900M;
backup full database noexclude
include current controlfile
format '/u01/backup/datafile_%s_%p.bak'
tag 'datafile_daily';
}

run {
allocate channel ch1 type Disk maxpiecesize = 1900M;
backup archivelog all
delete all input
format '/u01/backup/archivelog_%s_%p.bak'
tag 'archivelog_daily';
}

run {
allocate channel ch1 type Disk maxpiecesize = 1900M;
backup format '/u01/backup/controlfile_%s.bak' current controlfile;
}

crosscheck backup;
list backup of database;
report unrecoverable;
report schema;
report need backup;
report obsolete;
delete noprompt expired backup of database;
delete noprompt expired backup of controlfile;
delete noprompt expired backup of archivelog all;
delete noprompt obsolete recovery window of 3 days;
quit
EOF

Creating a Duplicate Database on the Local Host
Before beginning RMAN duplication, use SQL*Plus to connect to the auxiliary instance and start it in NOMOUNT mode. If you do not have a server-side initialization parameter file for the auxiliary instance in the default location, then you must specify the client-side initialization parameter file with the PFILE parameter on the DUPLICATE command.



Get original Filenames from TARGET

To rename the database files you can use the SET NEWNAME command. First, get the original filenames from the target, then use the modified names in the DUPLICATE command.

ORACLE_SID=APP1
export ORACLE_SID

set feed off
set pagesize 10000
column name format a40 heading "Datafile"
column file# format 99 heading "File-ID"

select name, file# from v$dbfile;

column member format a40 heading "Logfile"
column group# format 99 heading "Group-Nr"

select member, group# from v$logfile;

Datafile File-ID
---------------------------------------- -------
/u01/oracle/db/APP1/sys/APP1_sys1.dbf 1
/u01/oracle/db/APP1/sys/APP1_undo1.dbf 2
/u01/oracle/db/APP1/sys/APP1_sysaux1.dbf 3
/u01/oracle/db/APP1/usr/APP1_users1.dbf 4

Logfile Group-Nr
---------------------------------------- --------
/u01/oracle/db/APP1/rdo/APP1_log1A.rdo 1
/opt/oracle/db/APP1/rdo/APP1_log1B.rdo 1
/u01/oracle/db/APP1/rdo/APP1_log2A.rdo 2
/opt/oracle/db/APP1/rdo/APP1_log2B.rdo 2
/u01/oracle/db/APP1/rdo/APP1_log3A.rdo 3
/opt/oracle/db/APP1/rdo/APP1_log3B.rdo 3
/u01/oracle/db/APP1/rdo/APP1_log4A.rdo 4
/opt/oracle/db/APP1/rdo/APP1_log4B.rdo 4
/u01/oracle/db/APP1/rdo/APP1_log5A.rdo 5
/opt/oracle/db/APP1/rdo/APP1_log5B.rdo 5
/u01/oracle/db/APP1/rdo/APP1_log6A.rdo 6
/opt/oracle/db/APP1/rdo/APP1_log6B.rdo 6
/u01/oracle/db/APP1/rdo/APP1_log7A.rdo 7
/opt/oracle/db/APP1/rdo/APP1_log7B.rdo 7
/u01/oracle/db/APP1/rdo/APP1_log8A.rdo 8
/opt/oracle/db/APP1/rdo/APP1_log8B.rdo 8
/u01/oracle/db/APP1/rdo/APP1_log9A.rdo 9
/opt/oracle/db/APP1/rdo/APP1_log9B.rdo 9
/u01/oracle/db/APP1/rdo/APP1_log10A.rdo 10
/opt/oracle/db/APP1/rdo/APP1_log10B.rdo 10

Create Directories for the duplicate Database

mkdir -p /u01/oracle/db/APP2
mkdir -p /opt/oracle/db/APP2
cd /opt/oracle/db/APP2
mkdir con rdo
cd /u01/oracle/db/APP2
mkdir adm arc con rdo sys tmp usr bck
cd adm
mkdir admp bdmp cdmp udmp utld

Create Symbolic Links to Password and INIT.ORA File

Oracle must be able to locate the password file and the INIT.ORA file.

cd $ORACLE_HOME/dbs
ln -s /home/oracle/config/10.2.0/orapwAPP2 orapwAPP2
ln -s /home/oracle/config/10.2.0/initAPP2.ora initAPP2.ora

Duplicate the Database

Now you are ready to duplicate the database APP1 to APP2.

ORACLE_SID=APP2
export ORACLE_SID

sqlplus sys/manager as sysdba
startup force nomount pfile='/home/oracle/config/10.2.0/initAPP2.ora';
exit;

rman TARGET sys/manager@APP1 AUXILIARY sys/manager@APP2

Recovery Manager: Release 10.2.0.4.0 - Production on Tue Oct 28 12:00:13 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: APP1 (DBID=3191823649)
connected to auxiliary database: APP2 (not mounted)

RUN
{
SET NEWNAME FOR DATAFILE 1 TO '/u01/oracle/db/APP2/sys/APP2_sys1.dbf';
SET NEWNAME FOR DATAFILE 2 TO '/u01/oracle/db/APP2/sys/APP2_undo1.dbf';
SET NEWNAME FOR DATAFILE 3 TO '/u01/oracle/db/APP2/sys/APP2_sysaux1.dbf';
SET NEWNAME FOR DATAFILE 4 TO '/u01/oracle/db/APP2/usr/APP2_users1.dbf';
DUPLICATE TARGET DATABASE TO APP2
PFILE = /home/oracle/config/10.2.0/initAPP2.ora
NOFILENAMECHECK
LOGFILE GROUP 1 ('/u01/oracle/db/APP2/rdo/APP2_log1A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log1B.rdo') SIZE 10M REUSE,
GROUP 2 ('/u01/oracle/db/APP2/rdo/APP2_log2A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log2B.rdo') SIZE 10M REUSE,
GROUP 3 ('/u01/oracle/db/APP2/rdo/APP2_log3A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log3B.rdo') SIZE 10M REUSE,
GROUP 4 ('/u01/oracle/db/APP2/rdo/APP2_log4A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log4B.rdo') SIZE 10M REUSE,
GROUP 5 ('/u01/oracle/db/APP2/rdo/APP2_log5A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log5B.rdo') SIZE 10M REUSE,
GROUP 6 ('/u01/oracle/db/APP2/rdo/APP2_log6A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log6B.rdo') SIZE 10M REUSE,
GROUP 7 ('/u01/oracle/db/APP2/rdo/APP2_log7A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log7B.rdo') SIZE 10M REUSE,
GROUP 8 ('/u01/oracle/db/APP2/rdo/APP2_log8A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log8B.rdo') SIZE 10M REUSE,
GROUP 9 ('/u01/oracle/db/APP2/rdo/APP2_log9A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log9B.rdo') SIZE 10M REUSE,
GROUP 10 ('/u01/oracle/db/APP2/rdo/APP2_log10A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log10B.rdo') SIZE 10M REUSE;
}

The whole, long output is not shown here, but check that RMAN was able to open the duplicate database with the RESETLOGS option.

.....
.....
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 28-OCT-08

As the final step, remove or comment out DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT in the INIT.ORA file and restart the database.

initAPP2.ora

### Duplicate Database
### -----------------------------------------------
# This is only used when you duplicate the database
# on the same host to avoid name conflicts
# DB_FILE_NAME_CONVERT = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/)
# LOG_FILE_NAME_CONVERT = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/,
/opt/oracle/db/APP1/,/opt/oracle/db/APP2/)

sqlplus / as sysdba
shutdown immediate;
startup;

Total System Global Area 629145600 bytes
Fixed Size 1269064 bytes
Variable Size 251658936 bytes
Database Buffers 373293056 bytes
Redo Buffers 2924544 bytes
Database mounted.
Database opened.

Creating a Duplicate Database on a Remote Host
This scenario is exactly the same as described for the local host. Copy the RMAN backup files to the remote host, into the same directory as on the local host.



cd /u01/backup
scp gentic:/u01/backup/* .

The other steps are the same as described under «Creating a Duplicate Database on the Local Host».

Monday, September 27, 2010

Memory Architecture of an Oracle instance

Oracle Database can set limits on how much virtual memory the database uses for the SGA. It can start instances with minimal memory and allow the instance to use more memory by expanding the memory allocated for SGA components, up to a maximum determined by the SGA_MAX_SIZE initialization parameter.

SGA_MAX_SIZE is not dynamic. You can NOT change the value.

However, as compared to previous versions of Oracle, in 10g SGA_MAX_SIZE does not define the size of memory allocated, but rather the MAXIMUM size that CAN be allocated.
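You can check both values on a running instance (a simple illustration in SQL*Plus):

SQL> show parameter sga_max_size
SQL> show parameter sga_target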

The SGA_TARGET Initialization Parameter

The SGA_TARGET initialization parameter reflects the total size of the SGA and includes memory for the following components:

* Fixed SGA and other internal allocations needed by the Oracle Database instance
* The log buffer
* The shared pool
* The Java pool
* The buffer cache
* The keep and recycle buffer caches (if specified)
* Nonstandard block size buffer caches (if specified)
* The Streams pool

It is significant that SGA_TARGET includes the entire memory for the SGA, in contrast to earlier releases in which memory for the internal and fixed SGA was added to the sum of the configured SGA memory parameters. Thus, SGA_TARGET gives you precise control over the size of the shared memory region allocated by the database. If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the latter is bumped up to accommodate SGA_TARGET.

Automatically Managed SGA Components

When you set a value for SGA_TARGET, Oracle Database 10g automatically sizes the most commonly configured components, including:

* The shared pool (for SQL and PL/SQL execution)
* The Java pool (for Java execution state)
* The large pool (for large allocations such as RMAN backup buffers)
* The buffer cache
* The Streams pool

You need not set the size of any of these components explicitly. By default the parameters for these components will appear to have values of zero. Whenever a component needs memory, it can request that it be transferred from another component by way of the internal automatic tuning mechanism. This transfer of memory occurs transparently, without user intervention.

The performance of each of these automatically sized components is monitored by the Oracle Database instance. The instance uses internal views and statistics to determine how to distribute memory optimally among the components. As the workload changes, memory is redistributed to ensure optimal performance. To calculate the optimal distribution of memory, the database uses an algorithm that takes into consideration both long-term and short-term trends.
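The current distribution can be observed through the V$SGA_DYNAMIC_COMPONENTS view (a simple illustration):

SELECT component, current_size
FROM v$sga_dynamic_components
WHERE current_size > 0;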

Manually Managed SGA Components

There are a few SGA components whose sizes are not automatically adjusted. The administrator needs to specify the sizes of these components explicitly, if needed by the application. Such components are:

* Keep/Recycle buffer caches (controlled by DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE)
* Additional buffer caches for non-standard block sizes (controlled by DB_nK_CACHE_SIZE, n = {2, 4, 8, 16, 32})

The sizes of these components are determined by the administrator-defined values of their corresponding parameters. These values can, of course, be changed at any time, either using Enterprise Manager or from the command line with an ALTER SYSTEM statement, as in the sketch below.
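For example (a sketch; the 64M value is arbitrary):

ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 64M;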

The memory consumed by manually sized components reduces the amount of memory available for automatic adjustment. For example, in the following configuration:

SGA_TARGET = 256M
DB_8K_CACHE_SIZE = 32M


The instance has only 224 MB (256 - 32) remaining to be distributed among the automatically sized components.

Automatic Shared Memory Management

In previous database releases, a database administrator (DBA) was required to manually specify different SGA component sizes by setting a number of initialization parameters, including the SHARED_POOL_SIZE, DB_CACHE_SIZE, JAVA_POOL_SIZE, and LARGE_POOL_SIZE parameters. Oracle Database 10g includes the Automatic Shared Memory Management feature which simplifies the SGA memory management significantly. In Oracle Database 10g, a DBA can simply specify the total amount of SGA memory available to an instance using the SGA_TARGET initialization parameter and the Oracle Database will automatically distribute this memory among various subcomponents to ensure most effective memory utilization.

When automatic SGA memory management is enabled, the sizes of the different SGA components are flexible and can adapt to the needs of a workload without requiring any additional configuration. The database automatically distributes the available memory among the various components as required, allowing the system to maximize the use of all available SGA memory.

Consider a manual configuration in which 1 GB of memory is available for the SGA and distributed to the following initialization parameters:

SHARED_POOL_SIZE=128M
DB_CACHE_SIZE=896M


If an application attempts to allocate more than 128 MB of memory from the shared pool, an error is raised that indicates that the available shared pool has been exhausted. There could be free memory in the buffer cache, but this memory is not accessible to the shared pool. You would have to manually resize the buffer cache and the shared pool to work around this problem.

With automatic SGA management, you can simply set the SGA_TARGET initialization parameter to 1G. If an application needs more shared pool memory, it can obtain that memory by acquiring it from the free memory in the buffer cache.
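For example (a sketch; raising SGA_TARGET dynamically assumes SGA_MAX_SIZE is at least 1G):

ALTER SYSTEM SET SGA_TARGET = 1G;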

Setting a single parameter greatly simplifies the administration task. You specify only the amount of SGA memory that an instance has available and forget about the sizes of individual components. No out of memory errors are generated unless the system has actually run out of memory.

Automatic SGA management can enhance workload performance without requiring any additional resources or manual tuning effort. With manual configuration of the SGA, it is possible that compiled SQL statements frequently age out of the shared pool because of its inadequate size. This can increase the frequency of hard parses, leading to reduced performance. When automatic SGA management is enabled, the internal tuning algorithm monitors the performance of the workload, increasing the shared pool if it determines the increase will reduce the number of parses required.

Moving, copying or cloning a database from one server to another with different directory structures can be easily accomplished with RMAN

Moving, copying or cloning a database from one server to another with different directory structures can be easily accomplished with RMAN. Imagine that you have a database on one node and you want to copy it to another node without shutting down your database, and move your datafiles to a different directory structure. This will be demonstrated here by using RMAN.

ASSUMPTIONS

Source Database

* 10.2.0.4 database online (sid neo) at server1 (app)
* archivelog mode is enabled
* db datafiles are in the directory /opt/oracle/oradata/neo2
* database will be backed up online with RMAN to /u01/backup

Destination Database

* 10.2.0.4 Oracle Home installed without any database running at server2 (mynode2.com)
* db datafiles must be created / moved to different directory: /opt/oracle/oradata/neo
* only the manual backup created at server1 will be moved to server2

AT SERVER1

Log on as the oracle software owner at server1 and set your environment variables. Then open RMAN and back up the source database we want to copy / move / clone.

[oracle@neoface oracle]$ export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
[oracle@neoface oracle]$ export ORACLE_SID=neo
[oracle@neoface oracle]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@neoface oracle]$ rman target /
RMAN> backup database plus archivelog;

cf_NEO_c-1689570411-20090106-00 (control file backup)
back_NEO_675389594_736_1
back_NEO_675389780_737_1
back_NEO_675390018_738_1
back_NEO_675390293_739_1

Copy those 5 backup files to server2

[oracle@neoface oracle]$ scp /u01/backup/back_NEO* root@mynode2.com:/u01/backup/

Create an initialization file (pfile) from the current spfile. Then copy it to server2.

[oracle@neoface oracle]$ sqlplus "/ as sysdba"
SQL> create pfile from spfile;
SQL> exit;
[oracle@neoface oracle]$ scp /opt/oracle/product/10.2.0/db_1/dbs/initneo.ora oracle@mynode2.com:/opt/oracle/product/10.2.0/db_1/dbs/initneo.ora


AT SERVER2

Logon at server2 to do the following steps:

* create the OS directories to hold the datafiles and the admin log files and pfile:
* edit the pfile to modify the instance name in parameters like bdump, udump, etc
* change the ownership of the pfile so it belongs to the oracle user
* connect to RMAN and startup the database in nomount mode
* restore the control file from the backup
* mount the database
* validate the catalog by crosschecking and cataloging the 4 backup pieces we copied
* rename the datafiles and redolog files and restore the database

Switch to oracle user and create datafiles directories :

[root@mynode2 root] su - oracle
[oracle@mynode2 oracle]$ mkdir /opt/oracle/admin/neo -p
[oracle@mynode2 oracle]$ cd /opt/oracle/admin/neo
[oracle@mynode2 oracle]$ mkdir cdump udump bdump pfile

[oracle@mynode2 oracle]$ mkdir /opt/oracle/oradata/neo -p

Edit your pfile according to your new directory structure:

[oracle@mynode2 oracle]$ vi /opt/oracle/product/10.2.0/db_1/dbs/initneo.ora

Set environment variables and start working on RMAN:

[oracle@mynode2 oracle]$ export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
[oracle@mynode2 oracle]$ export ORACLE_SID=neo
[oracle@mynode2 oracle]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@mynode2 oracle]$ rman target /
RMAN> startup nomount
RMAN> restore controlfile from '/u01/backup/cf_NEO_c-1689570411-20090106-00';
RMAN> alter database mount;
RMAN> exit

Now that the database is mounted, we’ll check the correct database SCN from the current log that we’ll use later to recover the database. Take note of your current SCN.

[oracle@mynode2 oracle]$ sqlplus "/ as sysdba"
SQL> select group#, first_change#, status, archived from v$log;

GROUP# FIRST_CHANGE# STATUS ARC
---------- ------------- ---------------- ---
1 336565140 ACTIVE YES
2 336415067 CURRENT NO
3 336523814 INACTIVE YES

SQL> exit;

[oracle@mynode2 oracle]$ rman target /

As we only copied to this server the backup we created at the beginning, and we did not copy all the backups we had on server1, we must crosscheck the catalog against the OS files. Run the following commands at the RMAN prompt:

RMAN> CROSSCHECK backup;
RMAN> CROSSCHECK copy;
RMAN> CROSSCHECK backup of database;
RMAN> CROSSCHECK backup of controlfile;
RMAN> CROSSCHECK archivelog all;

Now let’s catalog the 4 backup pieces that we copied to this server2:

RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675389594_736_1';
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675389780_737_1';
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675390018_738_1';
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675390293_739_1';

Next, as we changed the directory of our datafiles, we must rename the redo logs:

RMAN> ALTER DATABASE rename file '/opt/oracle/oradata/neo2/redo01.log' to '/opt/oracle/oradata/neo/redo01.log';
RMAN> ALTER DATABASE rename file '/opt/oracle/oradata/neo2/redo02.log' to '/opt/oracle/oradata/neo/redo02.log';
RMAN> ALTER DATABASE rename file '/opt/oracle/oradata/neo2/redo03.log' to '/opt/oracle/oradata/neo/redo03.log';

If you use BLOCK CHANGE TRACKING to allow fast incremental backups, and you want to move the datafiles to a different directory, you must disable this feature and re-enable it specifying the new directory:

RMAN> ALTER DATABASE disable block change tracking;
RMAN> ALTER DATABASE enable block change tracking using file '/opt/oracle/oradata/neo/block_change_tracking.f';

This will avoid errors like ORA-19751 and ORA-19750.

Now let’s run the script that will restore our database, renaming the datafiles and recovering up to the archivelog with SCN 336415067, the current one.

RMAN> run {
set newname for datafile 1 to '/opt/oracle/oradata/neo/system01.dbf';
set newname for datafile 2 to '/opt/oracle/oradata/neo/undotbs01.dbf';
set newname for datafile 3 to '/opt/oracle/oradata/neo/sysaux01.dbf';
set newname for datafile 4 to '/opt/oracle/oradata/neo/data01.dbf';
set newname for datafile 5 to '/opt/oracle/oradata/neo/index01.dbf';
set newname for datafile 6 to '/opt/oracle/oradata/neo/users01.dbf';
set newname for datafile 7 to '/opt/oracle/oradata/neo/streams.dbf';
set newname for datafile 8 to '/opt/oracle/oradata/neo/data01brig.dbf';
set newname for datafile 9 to '/opt/oracle/oradata/neo/index02.dbf';
restore database;
switch datafile all;
recover database until scn 336415067;
}

RMAN> ALTER DATABASE open resetlogs;

I didn’t manage to avoid errors like ORA-01110 and ORA-01180 in RMAN without putting the UNTIL clause in the RECOVER DATABASE statement itself, instead of using it as the first instruction after the RUN command, as most people do.


----------------------------------------------------------------------------------
Renaming the database in the process

Pfile

To change the dbname and path:
sqlplus / as sysdba
create pfile from spfile;
exit

cd $ORACLE_HOME/dbs
rm -f spfiledbname3.ora
vi initdbname3.ora
then,

remove the first lines
:%s/dbname/dbname3/g

Database

Look for datafile IDs,

list backup of database;

add the RMAN statements to relocate the datafiles,
run {
set newname for datafile 1 to '/u02/oradata/dbname3/system01.dbf';
set newname for datafile 2 to '/u02/oradata/dbname3/undotbs01.dbf';
set newname for datafile 3 to '/u02/oradata/dbname3/sysaux01.dbf';
set newname for datafile 4 to '/u02/oradata/dbname3/users01.dbf';
set newname for datafile 5 to '/u02/oradata/dbname3/tsname.dbf';
restore database;
}

Brief explanation of how assorted Oracle files can be renamed or moved to a new location

To move or rename a controlfile do the following (a sketch follows the list):

* Alter the CONTROL_FILES parameter using the ALTER SYSTEM command.
* Shutdown the database.
* Rename the physical file on the OS.
* Start the database.
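For example (a sketch with hypothetical paths, run from SQL*Plus; assumes an spfile, and the ! escape runs an OS command):

ALTER SYSTEM SET CONTROL_FILES = '/u02/oradata/db/control01.ctl' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
!mv /u01/oradata/db/control01.ctl /u02/oradata/db/control01.ctl
STARTUP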

To move or rename a logfile do the following:

* Shutdown the database.
* Rename the physical file on the OS.
* Start the database in mount mode.
* Issue the ALTER DATABASE RENAME FILE command to rename the file within the Oracle dictionary.
* Open the database.

To move or rename a datafile do the following (a sketch follows the list):

* Shutdown the database.
* Rename the physical file on the OS.
* Start the database in mount mode.
* Issue the ALTER DATABASE RENAME FILE command to rename the file within the Oracle dictionary.
* Open the database.
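For example (a sketch with hypothetical paths; the same steps apply to a logfile):

SHUTDOWN IMMEDIATE
!mv /u01/oradata/db/users01.dbf /u02/oradata/db/users01.dbf
STARTUP MOUNT
ALTER DATABASE RENAME FILE '/u01/oradata/db/users01.dbf' TO '/u02/oradata/db/users01.dbf';
ALTER DATABASE OPEN;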

ORA-01555: snapshot too old during export EXP-00000: Export terminated unsuccessfully

It seems other sessions are updating the database during your export,
so use CONSISTENT=N (which is the default).
Don't specify CONSISTENT=Y.
Otherwise, increase your RBS (rollback segments, in 8i) and look into undo management in 9i.
There is a long-running transaction that needs more undo space than is available.
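For an export, the workaround looks like this (a sketch; scott/tiger and the file name are placeholders):

exp scott/tiger file=full.dmp full=y consistent=n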

SQL> show parameter undo


NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 0
undo_suppress_errors boolean FALSE
undo_tablespace string UNDOTBS1



If the UNDO_RETENTION initialization parameter is not specified, the default value is 900 seconds. (I must have tweaked mine; ignore it.)

To reset this, the command would be:
ALTER SYSTEM SET UNDO_RETENTION = 30000;

Quoting the docs:

Committed undo information normally is lost when its undo space is overwritten by a newer transaction. But for consistent read purposes, long running queries might require old undo information for undoing changes and producing older images of data blocks. The initialization parameter, UNDO_RETENTION, provides a means of explicitly specifying the amount of undo information to retain. With a proper setting, long running queries can complete without risk of receiving the "snapshot too old" error.

Cost Based Optimizer (CBO) and Database Statistics

Whenever a valid SQL statement is processed Oracle has to decide how to retrieve the necessary data. This decision can be made using one of two methods:

•Rule Based Optimizer (RBO) - This method is used if the server has no internal statistics relating to the objects referenced by the statement. This method is no longer favoured by Oracle and will be desupported in future releases.

•Cost Based Optimizer (CBO) - This method is used if internal statistics are present. The CBO checks several possible execution plans and selects the one with the lowest cost, where cost relates to system resources.

If new objects are created, or the amount of data in the database changes, the statistics will no longer represent the real state of the database, so the CBO decision process may be seriously impaired. The mechanisms and issues relating to maintenance of internal statistics are explained below:

•Analyze Statement
•DBMS_UTILITY
•DBMS_STATS
•Scheduling Stats
•Transferring Stats
•Issues
Analyze Statement
The ANALYZE statement can be used to gather statistics for a specific table, index or cluster. The statistics can be computed exactly, or estimated based on a specific number of rows, or a percentage of rows:

ANALYZE TABLE employees COMPUTE STATISTICS;
ANALYZE INDEX employees_pk COMPUTE STATISTICS;

ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 100 ROWS;
ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 15 PERCENT;

DBMS_UTILITY
The DBMS_UTILITY package can be used to gather statistics for a whole schema or database. Both methods follow the same format as the analyze statement:

EXEC DBMS_UTILITY.analyze_schema('SCOTT','COMPUTE');
EXEC DBMS_UTILITY.analyze_schema('SCOTT','ESTIMATE', estimate_rows => 100);
EXEC DBMS_UTILITY.analyze_schema('SCOTT','ESTIMATE', estimate_percent => 15);

EXEC DBMS_UTILITY.analyze_database('COMPUTE');
EXEC DBMS_UTILITY.analyze_database('ESTIMATE', estimate_rows => 100);
EXEC DBMS_UTILITY.analyze_database('ESTIMATE', estimate_percent => 15);

DBMS_STATS

The DBMS_STATS package was introduced in Oracle 8i and is Oracle's preferred method of gathering object statistics. Oracle lists a number of benefits to using it, including parallel execution, long-term storage of statistics, and transfer of statistics between servers. Once again, it follows a similar format to the other methods:

EXEC DBMS_STATS.gather_database_stats;
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);

EXEC DBMS_STATS.gather_schema_stats('SCOTT');
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15);

EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15);

EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK', estimate_percent => 15);

This package also gives you the ability to delete statistics:

EXEC DBMS_STATS.delete_database_stats;
EXEC DBMS_STATS.delete_schema_stats('SCOTT');
EXEC DBMS_STATS.delete_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.delete_index_stats('SCOTT', 'EMPLOYEES_PK');

Scheduling Stats

Scheduling the gathering of statistics using DBMS_JOB is the easiest way to make sure they are always up to date:

SET SERVEROUTPUT ON
DECLARE
l_job NUMBER;
BEGIN
DBMS_JOB.submit(l_job,
'BEGIN DBMS_STATS.gather_schema_stats(''SCOTT''); END;',
SYSDATE,
'SYSDATE + 1');
COMMIT;
DBMS_OUTPUT.put_line('Job: ' || l_job);
END;
/

The above code sets up a job to gather statistics for SCOTT for the current time every day. You can list the current jobs on the server using the DBA_JOBS and DBA_JOBS_RUNNING views.
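For example (a simple illustration):

SELECT job, what, next_date, interval FROM dba_jobs;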

Existing jobs can be removed using:

EXEC DBMS_JOB.remove(X);
COMMIT;

Where 'X' is the number of the job to be removed.

Transferring Stats

It is possible to transfer statistics between servers allowing consistent execution plans between servers with varying amounts of data. First the statistics must be collected into a statistics table. In the following examples the statistics for the APPSCHEMA user are collected into a new table, STATS_TABLE, which is owned by DBASCHEMA:

SQL> EXEC DBMS_STATS.create_stat_table('DBASCHEMA','STATS_TABLE');
SQL> EXEC DBMS_STATS.export_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');

This table can then be transferred to another server using your preferred method
(Export/Import, SQL*Plus COPY, etc.) and the stats imported into the data dictionary as follows:

SQL> EXEC DBMS_STATS.import_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');
SQL> EXEC DBMS_STATS.drop_stat_table('DBASCHEMA','STATS_TABLE');

Issues

•Exclude dataload tables from your regular stats gathering, unless you know they will be full at the time that stats are gathered.
•I've found gathering stats for the SYS schema can make the system run slower, not faster.
•Gathering statistics can be very resource intensive for the server so avoid peak workload times or gather stale stats only.
•Even if scheduled, it may be necessary to gather fresh statistics after database maintenance or large data loads.

Using EXPLAIN PLAN and TKPROF To Tune Your Applications

Consider the following query and execution plan:
SELECT a.customer_name, COUNT (DISTINCT b.invoice_id) "Open Invoices",
COUNT (c.invoice_id) "Open Invoice Items"
FROM customers a, invoices b, invoice_items c
WHERE b.invoice_status = 'OPEN'
AND a.customer_id = b.customer_id
AND c.invoice_id (+) = b.invoice_id
GROUP BY a.customer_name;

ID   PARENT OPERATION                           OBJECT_NAME
---- ------ ----------------------------------- ------------------------------
   0        SELECT STATEMENT
   1      0  SORT GROUP BY
   2      1   NESTED LOOPS OUTER
   3      2    HASH JOIN
   4      3     TABLE ACCESS BY INDEX ROWID     INVOICES
   5      4      INDEX RANGE SCAN               INVOICES_STATUS
   6      3     TABLE ACCESS FULL               CUSTOMERS
   7      2    INDEX RANGE SCAN                 INVOICE_ITEMS_PK

This execution plan is more complex than the previous two, and here you can start to get a feel for the way in which complex operations get broken down into simpler subordinate operations. To execute this query, the database server will do the following: First Oracle will perform a range scan on the invoices_status index to get the ROWIDs of all rows in the invoices table with the desired status. For each ROWID found, the record from the invoices table will be fetched.

This set of invoice records will be set aside for a moment while the focus turns to the customers table. Here, Oracle will fetch all customers records with a full table scan. To perform a hash join between the invoices and customers tables, Oracle will build a hash from the customer records and use the invoice records to probe the customer hash.
Next, a nested loops join will be performed between the results of the hash join and the invoice_items_pk index. For each row resulting from the hash join, Oracle will perform a unique scan of the invoice_items_pk index to find index entries for matching invoice items. Note that Oracle gets everything it needs from the index and doesn’t even need to access the invoice_items table at all. Also note that the nested loops operation is an outer join. A sort operation for the purposes of grouping is performed on the results of the nested loops operation in order to complete the SELECT statement.

It is interesting to note that Oracle chose to use a hash join and a full table scan on the customers table instead of the more traditional nested loops join. In this database there are many invoices and a relatively small number of customers, making a full table scan of the customers table less expensive than repeated index lookups on the customers_pk index. But suppose the customers table was enormous and the relative number of invoices was quite small. In that scenario a nested loops join might be better than a hash join. Examining the execution plan allows you to see which join method Oracle is using. You could then apply optimizer hints to coerce Oracle to use alternate methods and compare the performance.

You may wonder how I got that whole detailed explanation out of the eight-line execution plan listing shown above. Did I read anything into the execution plan? No! It’s all there! Understanding the standard inputs and outputs of each type of operation and coupling this with the indenting is key to reading an execution plan.
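If you want to reproduce listings in the ID/PARENT format used here, the plan can be generated with EXPLAIN PLAN and read back from PLAN_TABLE (a sketch; assumes a PLAN_TABLE exists in your schema and uses a simple query for illustration):

EXPLAIN PLAN FOR
SELECT customer_name FROM customers ORDER BY customer_name;

SELECT id, parent_id,
       LPAD (' ', LEVEL - 1) || operation || ' ' || options operation,
       object_name
FROM plan_table
START WITH id = 0
CONNECT BY PRIOR id = parent_id;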

A nested loops join operation always takes two inputs: For every row coming from the first input, the second input is executed once to find matching rows. A hash join operation also takes two inputs: The second input is read completely once and used to build a hash. For each row coming from the first input, one probe is performed against this hash. Sorting operations, meanwhile, take in one input. When the entire input has been read, the rows are sorted and output in the desired order.

Now let’s look at a query with a more complicated execution plan:
SELECT customer_name
FROM customers a
WHERE EXISTS
(
SELECT 1
FROM invoices_view b
WHERE b.customer_id = a.customer_id
AND number_of_lines > 100
)
ORDER BY customer_name;

ID   PARENT OPERATION                           OBJECT_NAME
---- ------ ----------------------------------- ------------------------------
   0        SELECT STATEMENT
   1      0  SORT ORDER BY
   2      1   FILTER
   3      2    TABLE ACCESS FULL                CUSTOMERS
   4      2    VIEW                             INVOICES_VIEW
   5      4     FILTER
   6      5      SORT GROUP BY
   7      6       NESTED LOOPS
   8      7        TABLE ACCESS BY INDEX ROWID  INVOICES
   9      8         INDEX RANGE SCAN            INVOICES_CUSTOMER_ID
  10      7        INDEX RANGE SCAN             INVOICE_ITEMS_PK

This execution plan is somewhat complex because the query includes a subquery that the optimizer could not rewrite as a simple join, and a view whose definition could not be merged into the query. The definition of the invoices_view view is as follows:

CREATE OR REPLACE VIEW invoices_view
AS
SELECT a.invoice_id, a.customer_id, a.invoice_date, a.invoice_status,
a.invoice_number, a.invoice_type, a.total_amount,
COUNT(*) number_of_lines
FROM invoices a, invoice_items b
WHERE b.invoice_id = a.invoice_id
GROUP BY a.invoice_id, a.customer_id, a.invoice_date, a.invoice_status,
a.invoice_number, a.invoice_type, a.total_amount;

Here is what this execution plan says: Oracle will execute this query by reading all rows from the customers table with a full table scan. For each customer record, the invoices_view view will be assembled as a filter and the relevant contents of the view will be examined to determine whether the customer should be part of the result set or not.

Oracle will assemble the view by performing an index range scan on the invoices_customer_id index and fetching the rows from the invoices table containing one specific customer_id. For each invoice record found, the invoice_items_pk index will be range scanned to get a nested loops join of invoices to their invoice_items records. The results of the join are sorted for grouping, and then groups with 100 or fewer invoice_items records are filtered out.

What is left at the step with ID 4 is a list of invoices for one specific customer that have more than 100 invoice_items records associated. If at least one such invoice exists, then the customer passes the filter at the step with ID 2. Finally, all customer records passing this filter are sorted for correct ordering and the results are complete.

Note that queries involving simple views will not result in a “view” operation in the execution plan. This is because Oracle can often merge a view definition into the query referencing the view so that the table accesses required to implement the view just become part of the regular execution plan. In this example, the GROUP BY clause embedded in the view foiled Oracle’s ability to merge the view into the query, making a separate “view” operation necessary in order to execute the query.
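
To see the contrast, here is a hypothetical simple view on the same invoices table (no GROUP BY, so Oracle can merge it; the 'OPEN' status value is made up for illustration). A query against it would typically show only the underlying table access in its plan, with no separate "view" operation:

CREATE OR REPLACE VIEW open_invoices_view
AS
SELECT invoice_id, customer_id, total_amount
FROM invoices
WHERE invoice_status = 'OPEN';

SELECT customer_id, total_amount
FROM open_invoices_view
WHERE total_amount > 1000;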

Also note that the filter operation can take on a few different forms. In general, a filter operation is where Oracle looks at a set of candidate rows and eliminates some based on certain criteria. These criteria could involve a simple test such as number_of_lines > 100, or an elaborate subquery.

In this example, the filter at step ID 5 takes only one input. Here Oracle evaluates each row from the input one at a time and either adds the row to the output or discards it as appropriate. Meanwhile, the filter at step ID 2 takes two inputs. When a filter takes two inputs, Oracle reads the rows from the first input one at a time and executes the second input once for each row. Based on the results of the second input, the row from the first input is either added to the output or discarded.

Oracle is able to perform simple filtering operations while performing a full table scan. Therefore, a separate filter operation will not appear in the execution plan when Oracle performs a full table scan and throws out rows that don’t satisfy a WHERE clause. Filter operations with one input commonly appear in queries with view operations or HAVING clauses, while filter operations with multiple inputs will appear in queries with EXISTS clauses.
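
For instance, a hypothetical aggregate query with a HAVING clause like the one below would typically produce a single-input filter sitting above a SORT GROUP BY, much like steps 5 and 6 in the plan shown earlier:

SELECT customer_id, COUNT(*)
FROM invoices
GROUP BY customer_id
HAVING COUNT(*) > 100;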


An important note about execution plans and subqueries: When a SQL statement involves subqueries, Oracle tries to merge the subquery into the main statement by using a join. If this is not feasible and the subquery does not have any dependencies or references to the main query, then Oracle will treat the subquery as a completely separate statement from the standpoint of developing an execution plan—almost as if two separate SQL statements were sent to the database server. When you generate an execution plan for a statement that includes a fully autonomous subquery, the execution plan may not include the operations for the subquery. In this situation, you need to generate an execution plan for the subquery separately.
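
As a quick illustration (a hypothetical query against the example schema), the subquery below has no reference back to the outer query, so Oracle may optimize it as a fully autonomous statement and you may need to explain it separately:

SELECT customer_name
FROM customers
WHERE customer_id =
(
SELECT MAX(customer_id)
FROM invoices
);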

Use EXPLAIN PLAN and TKPROF To Tune Your Applications

tkprof <trace file> <output file> [explain=<username/password>] [sys=n] \
[insert=<filename>] [record=<filename>] [sort=<keyword>]

If you invoke TKPROF with no arguments at all, you will get a help screen listing all of the options. This is especially helpful because TKPROF offers many sort capabilities, but you select the desired sort by specifying a cryptic keyword. The help screen identifies all of the sort keywords.

In its simplest form, you run TKPROF specifying the name of a SQL trace file and an output filename. TKPROF will read the trace file and generate a report file with the output filename you specified. TKPROF will not connect to the database, and the report will not include execution plans for the SQL statements. SQL statements that were executed by the SYS user recursively (to dynamically allocate an extent in a dictionary-managed tablespace, for example) will be included in the report, and the statements will appear in the report approximately in the order in which they were executed in the database session that was traced.

If you include the explain keyword, TKPROF will connect to the database and execute an EXPLAIN PLAN statement for each SQL statement found in the trace file. The execution plan results will be included in the report file. As we will see later, TKPROF merges valuable information from the trace file into the execution plan display, making this just about the most valuable way to display an execution plan. Note that the username you specify when running TKPROF should be the same as the username connected in the database session that was traced. You do not need to have a plan table in order to use the explain keyword—TKPROF will create and drop its own plan table if needed.

If you specify sys=n, TKPROF will exclude from the report SQL statements initiated by Oracle as the SYS user. This will make your report look tidier because it will only contain statements actually issued by your application. The theory is that Oracle internal SQL has already been fully optimized by the kernel developers at Oracle Corporation, so you should not have to deal with it. However, using sys=n will exclude potentially valuable information from the TKPROF report. Suppose the SGA is not properly sized on the instance and Oracle is spending a lot of time resolving dictionary cache misses. This would manifest itself in lots of time spent on recursive SQL statements initiated by the SYS user. Using sys=n would exclude this information from the report.
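
For example (prsela, exeela, and fchela are the TKPROF sort keywords for elapsed time spent parsing, executing, and fetching; the trace file name is the one from the session below), a report sorted by elapsed time could be generated with:

tkprof trialrep_ora_15837.trc trialrep_ora_15837.prf explain=scott/tiger sys=n sort=prsela,exeela,fchela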

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Sep 27 14:33:22 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /arch
Oldest online log sequence 160
Next log sequence to archive 162
Current log sequence 162
SQL> conn scott/tiger
Connected.
SQL> select * from tab;

TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
DEPT TABLE
EMP TABLE
BONUS TABLE
SALGRADE TABLE
IREPS_REVOKED_CERTS_MASTER TABLE

SQL> ALTER SESSION /* Module glpost.c */ SET sql_trace = TRUE;

Session altered.

SQL> select * from DEPT;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@crisreplication ~]$ cd /u01/app/oracle/
admin/ oradata/ oraInventory_24nov09.tar.gz product/
flash_recovery_area/ oraInventory/ oraInventory.tar.gz utlfile/
[oracle@crisreplication ~]$ cd /u01/app/oracle/admin/trialrep/udump/
[oracle@crisreplication udump]$ ls -ltr
[oracle@crisreplication udump]$ tkprof trialrep_ora_15837.trc trialrep_ora_15837.html

TKPROF: Release 10.2.0.4.0 - Production on Mon Sep 27 14:35:23 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


[oracle@crisreplication udump]$ tkprof trialrep_ora_15837.trc trialrep_ora_15837.html explain=scott/tiger sys=n

TKPROF: Release 10.2.0.4.0 - Production on Mon Sep 27 14:35:47 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


[oracle@crisreplication udump]$


TKPROF: Release 10.2.0.4.0 - Production on Mon Sep 27 14:35:47 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Trace file: trialrep_ora_15837.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

select *
from
DEPT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.06 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.06 6 8 0 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.01 0.13 6 8 0 4

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 57 (SCOTT)

Rows Row Source Operation
------- ---------------------------------------------------
4 TABLE ACCESS FULL DEPT (cr=8 pr=6 pw=0 time=67168 us)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
4 TABLE ACCESS MODE: ANALYZED (FULL) OF 'DEPT' (TABLE)




********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.06 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.06 6 8 0 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.01 0.13 6 8 0 4

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 15 0.01 0.13 0 0 0 0
Execute 69 0.06 0.14 0 0 0 0
Fetch 80 0.01 0.06 0 216 0 471
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 164 0.09 0.34 0 216 0 471

Misses in library cache during parse: 14
Misses in library cache during execute: 14

1 user SQL statements in session.
69 internal SQL statements in session.
70 SQL statements in session.
1 statement EXPLAINed in this session.
********************************************************************************
Trace file: trialrep_ora_15837.trc
Trace file compatibility: 10.01.00
Sort options: default

1 session in tracefile.
1 user SQL statements in trace file.
69 internal SQL statements in trace file.
70 SQL statements in trace file.
15 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
SCOTT.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
648 lines in trace file.
0 elapsed seconds in trace file.

HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
-----------------------------------------------------------

OVERVIEW
Oracle Clusterware (CRS) is a new feature of Oracle Database 10g Real
Application Clusters that further differentiates RAC for high availability and
scalability. CRS provides automated management and sophisticated monitoring of
RAC instances and is designed to enhance the overall user experience of cluster
database management.

By default, CRS is configured to auto-start database instances as a part of node
boot and provide lights-out instance failure detection followed by an
auto-restart of the failed instance. However, on some special occasions,
it might be highly desirable to limit the level of protection CRS provides
for a RAC database. Namely, this implies preventing instances from auto-starting
on boot and not auto-restarting failed instances. The latter, however, may be
relaxed to allow a single attempt to restart a failed instance. This way, CRS
will attempt to restore availability of the instance, but avoid thrashing if a
problem that caused the instance to fail also keeps preventing it from
successfully recovering on restart. Either way, the choice to customize this is
left to the DBA.

Oracle Database 10g Real Application Clusters release 10.1.0.4 and above make
it possible to accomplish the above stated change in CRS behavior. This document
lists the steps necessary to limit the level of CRS protection over a RAC
database.

HIGH LEVEL APPROACH
In a nutshell, the procedure amounts to the following two parts:
1. Identifying the set of CRS resources that affect the behavior
2. Modifying a few profile attributes to contain special values for the
resources identified in the first step

The following sections will cover both phases in detail.
IDENTIFYING RELEVANT RESOURCES

The automated management of a RAC database is accomplished by modeling various
RAC applications/features as CRS resources. A CRS resource is defined by a
profile, which contains a list of attributes that tell CRS how to manage the
resource. CRS resources created to manage a RAC database can be identified as
belonging to a well-known type. There is a finite and relatively small number
of types. The type may be easily identified from the name of a resource: each
name ends with a suffix denoting the type. For instance, ora.linux.db is of
type db, which happens to mean database. To display the names of the resources
managed by CRS, use the crs_stat command from a clustered node. The output of
the command is a set of names and states of resources registered on the
cluster to which the node belongs.

Figure 1. Example: Listing resource names using crs_stat
$ crs_stat
NAME=ora.linux.ar.us.oracle.com.cs
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.linux.ar.us.oracle.com.linux.srv
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE

Additional output is available but has been truncated for clarity's sake.


The relevant resources are the resources that belong to the following 4 types:

db - Denotes resources that represent a database.
srv - Denotes resources that represent a service member.
cs - Denotes resources that represent a service.
inst - Denotes resources that represent a database instance.

The CRS profiles for these resources must be modified for the new CRS behavior
to take effect for all RAC databases installed on the cluster. If, however,
the effect of the change is to be limited to a subset of installed databases,
the list of resources needs to be filtered further. (The rest of this section
should be skipped if the new CRS behavior is to be in effect for all databases
installed on the cluster.)
Please note that since more than one database may be installed on a cluster,
to modify the level of protection for a particular database, one must identify
the resources that represent entities of this database. This may be easily
accomplished since the names of the resources belonging to the above-stated
types always start with ora.<database name>. For instance, ora.linux.db
means that the resource belongs to the database named linux. Only resources of
the above-enumerated types that belong to the selected databases will need to
have their profiles modified.
MODIFYING RESOURCE PROFILES
Please note that Oracle strongly discourages any modifications made to CRS
profiles for any resources whose names start with "ora". Never make any
modifications to CRS profiles for any resources except the ones explicitly
described below in this document.

To modify a profile attribute for a resource, the following steps must be
followed:
1. Generate the resource profile file by issuing the following command:
crs_stat -p resource_name > $CRS_HOME/crs/public/resource_name.cap

2. Update desired attributes by editing the file created in step 1.

3. Commit the updates made as a part of the previous step by issuing the
following command:
crs_register -u resource_name

4. Verify the updates have been committed by issuing the following command:
crs_stat -p resource_name

For each of the resources identified as a part of the preceding section, the
following modifications must be made (a worked example follows this list):
1. Resources of type inst must have the following attributes modified:
AUTO_START must be set to 2
RESTART_ATTEMPTS must be set to 0 or 1. The former value will prevent
CRS from attempting to restart a failed instance at all, while the latter
will grant it a single attempt; if that single attempt is unsuccessful,
CRS will leave the instance as is.
2. Resources of type db, srv, and cs must have the following attributes modified:
AUTO_START must be set to 2
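
Putting the steps together for a single hypothetical instance resource (the resource name ora.linux.linux1.inst is made up for illustration), the session might look like this:

$ crs_stat -p ora.linux.linux1.inst > $CRS_HOME/crs/public/ora.linux.linux1.inst.cap
$ vi $CRS_HOME/crs/public/ora.linux.linux1.inst.cap    # set AUTO_START=2 and RESTART_ATTEMPTS=1
$ crs_register -u ora.linux.linux1.inst
$ crs_stat -p ora.linux.linux1.inst | grep -E 'AUTO_START|RESTART_ATTEMPTS'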

RMAN Terminology

Explanation of RMAN Terminology


RMAN TARGET DATABASE – An RMAN Target Database is the primary database that will be backed up for RAC Grid creation. In RMAN’s terminology, the term target database identifies the database that is undergoing a backup, restore, or recovery operation by Recovery Manager (RMAN).


RMAN AUXILIARY DATABASE – An Auxiliary Database is the RAC Grid that will be created as a result of duplicating the target database. In RMAN’s terminology, the auxiliary instance identifies the instance to which RMAN connects in order to execute the DUPLICATE command.


RMAN CHANNEL – An RMAN Channel is a communication pipeline between the RMAN executable and a target or auxiliary database. An RMAN channel consists of a server session on the target or auxiliary database and a data stream from the database to the backup device, or vice versa. The RMAN console sends commands to the database through this channel, and the server session running on the database executes them on behalf of Recovery Manager.


RMAN AUTOMATIC CHANNEL ALLOCATION – RMAN channels can be configured to use a set of default attributes for each operation when a channel is not allocated manually. By default, RMAN configures a channel of device type DISK to be used for automatic channel allocation.
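
For instance (standard RMAN CONFIGURE syntax; the parallelism value is only an example), persistent disk channel defaults can be set once and reused by every subsequent backup:

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;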


RMAN MANUAL CHANNEL ALLOCATION – As the name suggests, a channel can be allocated manually for special needs such as increasing the degree of parallelism. Channels are allocated manually by using the ALLOCATE CHANNEL command in the RUN block of an RMAN statement.
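
A minimal sketch of manual allocation (the channel names are arbitrary):

RUN
{
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK;
  BACKUP DATABASE;
}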

Connecting to an Auxiliary Database
To use the DUPLICATE command or to perform RMAN TSPITR, you need to connect to an auxiliary instance. In these examples, assume that these variables have the following meanings.

Variable Meaning
-------- ------------------------------------------------------------------
SYS      User with SYSDBA privileges
oracle   The password for connecting as SYSDBA specified in the target
         database's orapwd file
trgt     The net service name for the target database
rman     Owner of the recovery catalog having the RECOVERY_CATALOG_OWNER
         privilege
cat      The password for user rman specified in the recovery catalog's
         orapwd file
catdb    The net service name for the recovery catalog database
aux      The password for connecting as SYSDBA specified in the auxiliary
         database's orapwd file
auxdb    The net service name for the auxiliary database

If the auxiliary database uses password files for authentication, then you can connect using a password for either local or remote access. If you are connecting remotely through a net service name, then authentication through a password file is mandatory.

Connecting to an Auxiliary Database from the Command Line
To connect to an auxiliary instance from the operating system command line, enter the following:

% rman AUXILIARY SYS/aux@auxdb


To connect to the target, auxiliary, and recovery catalog databases, issue the following (all on one line):

% rman TARGET SYS/oracle@trgt CATALOG rman/cat@catdb AUXILIARY SYS/aux@auxdb

Connecting to an Auxiliary Database from the RMAN Prompt
Alternatively, you can start RMAN and connect to the auxiliary database from the RMAN prompt:

% rman
RMAN> CONNECT AUXILIARY SYS/aux@auxdb


To connect to the target, auxiliary, and recovery catalog databases, issue:

% rman
RMAN> CONNECT TARGET SYS/oracle@trgt
RMAN> CONNECT CATALOG rman/cat@catdb
RMAN> CONNECT AUXILIARY SYS/aux@auxdb

Thursday, September 23, 2010

Parsing SQL Statements in Oracle

One of the first steps Oracle takes in processing a SQL statement is to parse it. During the parsing phase, Oracle will break down the submitted SQL statement into its component parts, determine what type of statement it is (Query, DML, or DDL), and perform a series of checks on it. Two important concepts for the DBA to understand are (1) the steps involved in the parse phase and (2) the difference between a hard parse and a soft parse.

The Syntax Check & Semantic Analysis


The first two functions of the parse phase, the Syntax Check and Semantic Analysis, happen for each and every SQL statement within the database.
Syntax Check
Oracle checks that the SQL statement is valid. Does it make sense given the SQL grammar documented in the SQL Reference Manual? Does it follow all of the rules for SQL?


Semantic Analysis
This function of the parse phase takes the Syntax Check one step further by checking whether the statement is valid in light of the objects in the database. Do the tables and columns referenced in the SQL statement actually exist in the database? Does the user executing the statement have access to the objects, and are the proper privileges in place? Are there ambiguities in the statement? For example, consider a statement that references two tables emp1 and emp2, where both tables have a column called name. The statement "select name from emp1, emp2 where..." is ambiguous; the query doesn't know which table to get name from.

Although Oracle treats the first two functions of the parse phase as distinct steps (checking the validity of the SQL statement and then checking the semantics to ensure that the statement can be properly executed), the difference is sometimes hard to see from the user's perspective. When Oracle reports an error to the user during the parse phase, it doesn't just come out and say "Error within the Syntax Function" or "Error within the Semantics Function".

For example, the following SQL statement fails with a syntax error:

SQL> select from where 4;
select from where 4
*
ERROR at line 1:
ORA-00936: missing expression
Here is an example of a SQL statement that fails with a semantic error:

SQL> select * from table_doesnt_exist;
select * from table_doesnt_exist
*
ERROR at line 1:
ORA-00942: table or view does not exist

Hard Parse vs. Soft Parse


We now consider the next and one of the most important functions of Oracle's parse phase. The Oracle database now needs to check the Shared Pool to determine whether the current SQL statement being parsed has already been processed by any other session.
If the current statement has already been processed, the parse operation can skip the next two functions in the process: Optimization and Row Source Generation. If the parse phase does, in fact, skip these two functions, it is called a soft parse. A soft parse saves a considerable number of CPU cycles when running your query. On the other hand, if the current SQL statement has never been parsed by another session, the parse phase must execute ALL of the parsing steps. This type of parse is called a hard parse. It is especially important that developers write and design queries that take advantage of soft parses, so that the parse phase can skip the optimization and row source generation functions, which are very CPU intensive and a point of contention (serialization). If a high percentage of your queries are being hard parsed, your system will function slowly, and in some cases, not at all. You can gauge the mix with the query shown below.
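
As a rough check (these are standard statistic names in V$SYSSTAT; what counts as a healthy ratio depends on the workload), you can compare total parses against hard parses system-wide:

select name, value
from v$sysstat
where name in ('parse count (total)', 'parse count (hard)');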

Oracle uses a piece of memory called the Shared Pool to enable sharing of SQL statements. The Shared Pool is a piece of memory in the System Global Area (SGA) and is maintained by the Oracle database. After Oracle completes the first two functions of the parse phase (Syntax and Semantic Checks), it looks in the Shared Pool to see if that same exact query has already been processed by another session. Since Oracle has already performed the semantic check, it has already determined:


Exactly what table(s) are involved

That the user has access privileges to the tables.
Oracle will now look at all of the queries in the Shared Pool that have already been parsed, optimized, and generated to see if the hard-parse portion of the current SQL statement has already been done.

Why not Check the Shared Pool First?


Now that you understand the steps involved in parsing SQL statements, it's time to take it one step further. Oracle always keeps an unparsed representation of the SQL code in the Shared Pool, and the database uses a hashing algorithm to quickly locate the SQL code. OK, so why doesn't Oracle make this step (checking the Shared Pool for a matching statement) the first step in its parsing phase, before making any other checks?
Even when soft parsing, Oracle needs to parse the statement before it goes looking in the Shared Pool. One of the big reasons for this sequence is the Semantic Check. Consider the following query:

select * from emp;
Assume that this query was first submitted by user "SCOTT" and that the "emp" table in the FROM clause is a table owned by SCOTT. You then submit the same exact query (as a user other than SCOTT) to Oracle. The database has no idea what "emp" is a reference to. Is it a synonym for another table? Is it a view in your schema that references another table? For this reason, Oracle needs to perform a Semantic Check on the SQL statement to ensure that the code you are submitting is going to reference the same exact objects you are requesting in your query.

Remember, too, that Oracle needs to syntactically parse the query before it can semantically parse it. The hash is good only for finding query strings that are the same; it doesn't help the database figure out whether the referenced statements resolve to the same objects in your execution context.
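
A minimal sketch of this in action (V$SQL columns as in 10g; assumes the emp example above has been run from two different schemas): identical SQL text can show up as separate child cursors, one per execution context:

select sql_text, parsing_schema_name, child_number
from v$sql
where sql_text = 'select * from emp';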

Thursday, September 16, 2010

Master node at Oracle Clusterware level

[oracle@dbrac1 dump]$ cat $ORA_CRS_HOME/log/`hostname`/cssd/ocssd* |grep master

[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 2
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 2
[ CSSD]CLSS-3001: local node number 1, master node number 2
[ CSSD]CLSS-3001: local node number 1, master node number 2
[ CSSD]CLSS-3001: local node number 1, master node number 2
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 2
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1
[ CSSD]CLSS-3001: local node number 1, master node number 1

Monday, September 6, 2010

HOW TO APPLY ORACLE PATCH (OPATCH)
====================================================




To apply a patch with OPatch, both the database and the listener must be down, because OPatch updates the current ORACLE_HOME with the patches. In a single-instance environment this downtime cannot be avoided, but for a RAC database it can: in RAC there are two separate Oracle homes and two separate instances, one instance running on each Oracle home, so the nodes can be patched one at a time.

Use this command:

opatch napply -skip_subset -skip_duplicate -local -oh $ORACLE_HOME

The -local parameter together with -oh $ORACLE_HOME means this patch session will apply the patch only to the currently sourced ORACLE_HOME.


steps before applying patch:
----------------------------

1) check the database status.
wch_db.sql
-----------
select name,
open_mode,
database_name,
created,
log_mode,
platform_name
from v$database;



2) Check the object's invalid.

user_inv.sql
============
SELECT owner,
COUNT(*)
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner;

count_inv.sql
-------------
select count(*)
from dba_objects
WHERE status ='INVALID';

3) Take a backup of the invalid objects:

create table bk_inv_ as select * from dba_objects
where status='INVALID';

4) Check the OPatch version using:
opatch -v
If the OPatch version is not compatible, check the readme file, then
download the latest version and uncompress it in $ORACLE_HOME.

5) Check that the oraInst.loc file points to your current $ORACLE_HOME:
cat /etc/oraInst.loc

inventory_loc=/u01/app/oracle/10.2.0/GlobalOraInv
inst_group=dba

If your server has more than one $ORACLE_HOME, comment out the entry for the
other $ORACLE_HOME and uncomment the current one.

The inventory must point to the current $ORACLE_HOME, which is the one being patched.

6) check free space on $ORACLE_HOME
df -h $ORACLE_HOME

7) Check for the required utilities, as per the readme file:
which ld
which ar
which make

8) unzip the patch
unzip -d /loc_2_unzip p.zip

9) Go the patch directory
cd /loc_2_unzip/patch_number

10) Bring down the listener.
cd $ORACLE_HOME/bin
lsnrctl stop

11) Bring down the database
Shutdown immediate.

12) Add OPatch to the PATH:
export PATH=$PATH:$HOME:$ORACLE_HOME/OPatch:/bin

13) Start the patch

skip_duplicate
Skips patches to be applied that are duplicates of other patches installed in the Oracle home. Two patches are duplicates if they fix the same set of bugs.

skip_subset
Skips patches to be applied that are subsets of other patches installed in the Oracle home. One patch is a subset of another patch if the former fixes a subset of bugs fixed by the latter.

opatch napply -skip_subset -skip_duplicate

For a RAC database, the database as a whole can remain up, since it has more than one instance: bring down one instance and its listener, apply the patch, and bring the instance back up, then do the same on the other node. This way the database stays up and users face no outage.

A common question about applying OPatch to a RAC instance: can -skip_subset and -skip_duplicate be used with opatch napply only? And is it right that, when installing a patch for the first time on the Oracle software, opatch apply is generally used? For example, if the October CPU patch was applied with opatch apply, should the November patch then be applied with opatch napply -skip_subset -skip_duplicate?
The readme states whether you should use opatch apply or opatch napply. If the November CPU patch is a bundle of multiple patches, then you should use opatch napply, irrespective of whether you applied any patch on your software previously. The choice between apply and napply does not depend on the patches previously applied to your database; it depends only on the current OPatch session. If you need to install just one patch, use opatch apply. If you want to install multiple patches in a single session, use opatch napply.

And regarding -skip_subset and -skip_duplicate, you are correct: they should be used only with opatch napply.

For more information, use the commands below:
opatch apply -help
opatch napply -help

opatch napply -skip_subset -skip_duplicate -local -oh $ORACLE_HOME

Again, the -local parameter together with -oh $ORACLE_HOME means this patch session will apply the patch to the current ORACLE_HOME only.

--------------------------------------------------------
. All-Node Patch

. Shutdown all Oracle instances on all nodes
. Apply the patch to all nodes
. Bring all nodes up

. Minimum downtime

. Shutdown the Oracle instance on node 1
. Apply the patch to the Oracle instance on node 1
. Shutdown the Oracle instance on node 2
. Apply the patch to the Oracle instance on node 2
. Shutdown the Oracle instance on node 3
. At this point, instances on nodes 1 and 2 can be brought up
. Apply the patch to the Oracle instance on node 3
. Startup the Oracle instance on node 3

. Rolling patch (no downtime)

. Shutdown the Oracle instance on node 1
. Apply the patch to the Oracle instance on node 1
. Start the Oracle instance on node 1
. Shutdown the Oracle instance on node 2
. Apply the patch to the Oracle instance on node 2
. Start the Oracle instance on node 2
. Shutdown the Oracle instance on node 3
. Apply the patch to the Oracle instance on node 3
. Start the Oracle instance on node 3
-------------------------------------------------------------

14) Once the patch installation is completed, perform the post-patching steps.

a) Start up the instance:
startup
b) Load the modified SQL files into the database:
@$ORACLE_HOME/rdbms/admin/catbundle.sql cpu apply

to check the logs generated
catbundle_CPU__APPLY_.log
catbundle_CPU__GENERATE_.log

c) Recompiling Views in the Database
shutdown immediate
startup upgrade
@$ORACLE_HOME/cpu/view_recompile/view_recompile_jan2008cpu.sql
shutdown immediate
startup

If it is a RAC instance:
shutdown
startup nomount
alter system set cluster_database=false scope=spfile;
shutdown
startup upgrade
@?/cpu/view_recompile/view_recompile_jan2008cpu.sql
shutdown
startup
alter system set cluster_database=true scope=spfile;

Restart the database:
cd $CRS_HOME/bin
srvctl start database -d
15) If any invalid objects were reported, run the utlrp.sql script as follows



user_inv.sql
============
SELECT owner,
COUNT(*)
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner;

count_inv.sql
-------------
select count(*)
from dba_objects
WHERE status ='INVALID';

If any new invalid objects are seen, again take a backup of the invalid objects and compile them.

create table bk_inv_ as select * from dba_objects
where status='INVALID';

@?/rdbms/admin/utlrp.sql --- to compile the invalid objects.


16) Confirm at the database level that the patch has been applied successfully.


post_patch.sql
--------------
col action_time for a40
col action for a15
col namespace for a15
col version for a15
col comments for a40
set pages 1000
set lines 170
select * from registry$history ;