This document explains the step-by-step process of removing/deleting a RAC node from a cluster. In this process, I am going to remove a single node (node2-pub) from a 2-node RAC cluster online, without affecting the availability of the RAC database running on ASM.
Existing RAC Architecture:
RAC Nodes:
Node 1:
Public: node1-pub
Private: node1-prv
Virtual: node1-vip
Node 2:
Public: node2-pub
Private: node2-prv
Virtual: node2-vip
ORACLE_HOMES (Local on Each Node):
CRS_HOME: /u01/app/crs
DB_HOME: /u01/app/oracle/product/11g/
ASM_HOME: /u01/app/asm/product/11gr1
Database / ASM:
DB Name: test
DB Instances: test1 on node1-pub, test2 on node2-pub.
ASM Instances: +ASM1 on node1-pub and +ASM2 on node2-pub.
Node to be deleted: node2-pub
Task List (to be executed in order):
Modify the Database Service configuration
Remove Database Instance test2 on node2-pub
Remove ASM Instance on node2-pub
Remove LISTENER on node2-pub
Remove DB_HOME, ASM_HOME on node2-pub
Remove the nodeapps on node2-pub
Update Inventory on remaining nodes for DB_HOME and ASM_HOME
Remove Oracle Clusterware (crs) on node2-pub
Update the CRS Inventory with the node list on the remaining nodes (node1-pub)
Verify that the node node2-pub is removed from the Cluster.
Get the current status of CRS on all the nodes before proceeding with the node-deletion exercise. From the status output (gathered with the commands below), all of the nodeapps, DB and ASM instances, and services are up and running on both nodes. The point to note here is that the database resource ora.test.db is running on node2-pub, the node we want to delete from the cluster.
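A quick way to capture this status snapshot before starting (the resource names reported by crs_stat will of course reflect your own configuration):
crs_stat -t
srvctl status database -d test
srvctl status asm -n node1-pub
srvctl status asm -n node2-pub
srvctl status nodeapps -n node1-pub
srvctl status nodeapps -n node2-pub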
Modify Database Services:
Update the database service to run on all nodes except the node that is being deleted. This is done by modifying the service and supplying the appropriate instances as the preferred instances, i.e. the instances where the service should run at startup. In my case, the preferred instance will be "test1", where test_srv should run after test2 is deleted.
This task is also taken care of by dbca as part of deleting the instance.
srvctl status service -d test
srvctl stop service -d test -s test_srv -i test2
srvctl config service -d test
srvctl modify service -d test -s test_srv -n -i test1
srvctl config service -d test
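If test_srv does not end up running on test1 after the modification, it can be started there explicitly (this is just a precaution; skip it if the service is already ONLINE on test1):
srvctl start service -d test -s test_srv -i test1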
Remove all the database instances, using dbca, on the node that is being deleted. In my case, there is only one DB instance, test2, running on this node. The instances on node2-pub should be up and running before you start dbca to delete the instance.
Below are the screenshots of dbca deleting instance test2 on node2-pub. Execute dbca from any node in the cluster other than the one being deleted; in my case, I execute it from node1-pub.
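If a GUI session is not convenient, dbca can also drop the instance in silent mode. The command below is only a sketch, assuming SYS password authentication (substitute your own credentials):
dbca -silent -deleteInstance -nodeList node2-pub -gdbName test -instanceName test2 -sysDBAUserName sys -sysDBAPassword <sys_password>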
At the end, verify that there is no instance test2 running on the node node2-pub. Verify that the thread belonging to the deleted instance test2 no longer exists in the database 'test'. Make sure that the db resource (ora.test.db) is not running on node2-pub; if it is, relocate this resource to run on any node except the one being deleted.
If everything looks OK, proceed to deleting the ASM instance on node2-pub. Verify the ASM status as well as the CRS status in the cluster.
srvctl config database -d test
crs_relocate ora.test.db (if required)
crs_stat -t
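A quick SQL check of the redo threads from the surviving instance confirms that the thread of the dropped instance is gone; this is just a sketch, run as SYSDBA on test1:
sqlplus / as sysdba
SELECT thread#, status, instance FROM v$thread;
Only the thread belonging to test1 should be listed.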
Remove ASM Instance +ASM2 on node2-pub:
srvctl stop asm -n node2-pub
srvctl remove asm -n node2-pub
srvctl config asm -n node2-pub
srvctl config asm -n node1-pub
crs_stat -t
Remove LISTENER on node2-pub:
After deleting the ASM instance on node2-pub, remove the LISTENER running on this node using the netca utility.
Execute netca from the ASM_HOME if the LISTENER is configured in the ASM_HOME.
At the end, stop the nodeapps on node2-pub.
srvctl stop nodeapps -n node2-pub
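A rough way to confirm that nothing is left ONLINE on node2-pub at this point (the grep filter is only illustrative; crs_stat -t gives the same picture in tabular form):
crs_stat | grep -i node2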
Remove DB_HOME and ASM_HOME from node2-pub:
DB_HOME:
- Connect to node2-pub as the oracle user using an X-terminal.
- Set ORACLE_HOME to the DB HOME (in my case, /u01/app/oracle/product/11g/db_2).
- Update the Oracle Inventory with CLUSTER_NODES set to null to DETACH this ORACLE_HOME from the rest of the nodes in the cluster, so that runInstaller will only remove the ORACLE_HOME from the node2-pub local node.
- Deinstall the ORACLE_HOME.
export ORACLE_HOME=/u01/app/oracle/product/11g/db_2
echo $ORACLE_HOME
cd /u01/app/oracle/product/11g/db_2/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES="" -local
./runInstaller -ignoreSysPrereqs -silent "REMOVE_HOMES={$ORACLE_HOME}" -local
NOTE: 11g is not certified on CentOS, so I have to use the -ignoreSysPrereqs option.
Repeat the same procedure for the ASM HOME.
ASM_HOME:
- Connect to node2-pub as the oracle user using an X-terminal.
- Set ORACLE_HOME to the ASM HOME (in my case, /u01/app/asm/product/11gr1).
- Update the Oracle Inventory with CLUSTER_NODES set to null to DETACH this ORACLE_HOME from the rest of the nodes in the cluster, so that runInstaller will only remove the ORACLE_HOME from the node2-pub local node.
- Deinstall the ORACLE_HOME.
export ORACLE_HOME=/u01/app/asm/product/11gr1
echo $ORACLE_HOME
cd /u01/app/asm/product/11gr1/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES="" -local
./runInstaller -ignoreSysPrereqs -silent "REMOVE_HOMES={$ORACLE_HOME}" -local
Remove nodeapps from node2-pub:
Remove the nodeapps on node2-pub. Connect as oracle on any of the nodes and execute the command below.
Make sure that the nodeapps are not ONLINE on node2-pub; if they are, stop them before removing them.
srvctl remove nodeapps -n node2-pub
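As a quick check, the ora.node2-pub.* resources (VIP, GSD, ONS, listener) should no longer appear in the CRS resource listing:
crs_stat -t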
Update Inventory of DB_HOME and ASM_HOME on the remaining Nodes:
After removing DB_HOME and ASM_HOME from node2-pub, the inventories for these HOMEs must be updated on the remaining nodes in the cluster with the new list of remaining nodes. Execute the commands below from any of the remaining nodes. The CLUSTER_NODES option must contain the list of all the nodes except the ones that are being deleted. In my case of a 2-node RAC, only one node remains, i.e. node1-pub.
For DB_HOME:
- Connect to node1-pub (any remaining node) as the oracle user using an X-terminal.
- Set ORACLE_HOME to the DB HOME (in my case, /u01/app/oracle/product/11g/db_2).
- Update the Oracle Inventory with CLUSTER_NODES set to the list of remaining nodes (the CLUSTER_NODES variable).
export ORACLE_HOME=/u01/app/oracle/product/11g/db_2
echo $ORACLE_HOME
cd /u01/app/oracle/product/11g/db_2/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1-pub
For ASM_HOME:
- Connect to node1-pub (any remaining node) as the oracle user using an X-terminal.
- Set ORACLE_HOME to the ASM HOME (in my case, /u01/app/asm/product/11gr1).
- Update the Oracle Inventory with CLUSTER_NODES set to the list of remaining nodes (the CLUSTER_NODES variable).
export ORACLE_HOME=/u01/app/asm/product/11gr1
echo $ORACLE_HOME
cd /u01/app/asm/product/11gr1/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1-pub
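To double-check the node list now recorded for each home, the central inventory can be inspected directly; its location is recorded in /etc/oraInst.loc on Linux, and the grep pattern below is only illustrative:
cat /etc/oraInst.loc
grep -i "node name" <inventory_loc>/ContentsXML/inventory.xml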
Remove CRS (Clusterware) from node2-pub:
Prepare node2-pub for the CRS removal:
Connect to the node being deleted (node2-pub) as root and execute the rootdelete.sh script to prepare it for the CRS removal.
/u01/app/crs/install/rootdelete.sh local nosharedvar nosharedhome
Remove CRS from node2-pub (Update OCR):
From any of the remaining nodes other than the one being deleted, execute the rootdeletenode.sh script as the root user to remove node2-pub from the OCR. You need the node name as well as the node number of the node that is being deleted; you can get this information by running the olsnodes -n command-line utility, as shown below.
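For example (olsnodes lives under the CRS_HOME bin directory and prints one line per node, node name followed by node number):
/u01/app/crs/bin/olsnodes -n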
/u01/app/crs/install/rootdeletenode.sh <node_name>,<node_number>
/u01/app/crs/install/rootdeletenode.sh node2-pub,2
Connect to any of the remaining nodes and execute the command below to update the inventory with the proper list of nodes in the cluster for the CRS_HOME. The inventory has already been updated for the DB_HOME as well as the ASM_HOME. In my case, I connect to node1-pub and run the command below.
export ORACLE_HOME=/u01/app/crs
echo $ORACLE_HOME
cd /u01/app/crs/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1-pub CRS=TRUE
Verify that the node has been removed successfully by looking at the OCR through commands such as olsnodes. Also, run lsinventory to make sure that the inventory no longer knows about the deleted node (in the case of a RAC system with 2 or more nodes).
On the deleted node, remove the OS directories for the DB_HOME, ASM_HOME, and CRS_HOME.
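For example, a final check from node1-pub followed by the directory cleanup on node2-pub; run the rm commands only after confirming nothing on node2-pub still uses these homes (the OPatch path is an assumption based on this setup's home locations):
/u01/app/crs/bin/olsnodes -n
/u01/app/crs/OPatch/opatch lsinventory -oh /u01/app/crs
rm -rf /u01/app/oracle/product/11g/db_2
rm -rf /u01/app/asm/product/11gr1
rm -rf /u01/app/crs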
****** Node node2-pub has been deleted from the Cluster Successfully!!! *****