1) Assumptions
============
> Already built and configured a two-node Oracle RAC 10g Release 2 environment:
  OS         : OEL 5.6
  Database   : 10.2.0.1.0
  Storage    : ASM
  Node names : RAC1 and RAC2
> Already cloned and built a third VMware machine, rac3. The process is exactly the same as was done when creating the 2-node RAC setup.
> Each node in the existing Oracle RAC cluster has a copy of the Oracle Clusterware and Oracle Database software installed on its local disks. The current two-node Oracle RAC environment does not use shared Oracle homes for the Clusterware or Database software.
> The software owner for the Oracle Clusterware and Oracle Database installs will be "oracle". It is important that the UID and GID of the oracle user account be identical to that of the existing RAC nodes. For the purpose of this example, the oracle user account will be defined as follows:
$ id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)
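If the cloned machine did not already carry this account over, it can be created on RAC3 with matching IDs; a minimal sketch, run as root (group and user IDs taken from the output above):
groupadd -g 500 oinstall
groupadd -g 501 dba
useradd -u 500 -g oinstall -G dba oracle
passwd oracle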
> The existing Oracle RAC 10g environment makes use of a clustered file system (OCFS2) to store the two files required to be shared by Oracle Clusterware, namely the Oracle Cluster Registry (OCR) file and the Voting Disk.
> Automatic Storage Management (ASM) is being used as the file system and volume manager for all Oracle physical database files (data, online redo logs, control files, archived redo logs) and a Flash Recovery Area (used by RMAN).
> To maintain the current naming convention, the new Oracle RAC node to be added to the existing cluster will be named RAC3 (running a new instance named BRIJ3) making it a three-node cluster.
> The new Oracle RAC node should have the same operating system version and installed patches as the current two-node cluster.
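A quick way to confirm the OS level and package set on rac3 against an existing node (the /tmp file names here are arbitrary):
uname -r
cat /etc/enterprise-release
rpm -qa | sort > /tmp/rpms_$(hostname -s).txt
# copy the lists to one node, then compare, e.g.:
diff /tmp/rpms_rac1.txt /tmp/rpms_rac3.txt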
> During the creation of the existing two-node cluster, the installation of Oracle Clusterware and the Oracle Database software was performed from only one node in the RAC cluster — namely from RAC1 as the oracle user account. The Oracle Universal Installer (OUI) on that node uses the ssh and scp commands to run remote commands on, and copy files (the Oracle software) to, all other nodes within the RAC cluster. The oracle user account on the node running the OUI (runInstaller) had to be trusted by all other nodes in the RAC cluster. This meant that the oracle user account had to be able to run the secure shell commands (ssh or scp) on the Linux server executing the OUI (RAC1) against all other Linux servers in the cluster without being prompted for a password. The same security requirement holds true for this article. User equivalence will be configured so that the Oracle Clusterware and Oracle Database software can be securely copied from RAC1 to the new Oracle RAC node (RAC3) using ssh and scp without being prompted for a password.
2) NETWORK DETAILS OF NODES
===========================
First node details
Device   IP Address      Subnet Mask      Default Gateway
eth0     192.168.2.131   255.255.255.0    192.168.2.1
eth1     10.10.10.31     255.255.255.0    <leave empty>

Second node details
Device   IP Address      Subnet Mask      Default Gateway
eth0     192.168.2.132   255.255.255.0    192.168.2.1
eth1     10.10.10.32     255.255.255.0    <leave empty>

Third node details
Device   IP Address      Subnet Mask      Default Gateway
eth0     192.168.2.133   255.255.255.0    192.168.2.1
eth1     10.10.10.33     255.255.255.0    <leave empty>
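Since rac3 was cloned from an existing machine, its interfaces still carry the old addresses and must be re-pointed at the values above. A sketch of the public interface configuration under OEL 5 (if a HWADDR line is present, it must match the new VM's MAC address):
# /etc/sysconfig/network-scripts/ifcfg-eth0 on rac3
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.2.133
NETMASK=255.255.255.0
GATEWAY=192.168.2.1
ONBOOT=yes
# ifcfg-eth1 is analogous, with IPADDR=10.10.10.33 and no GATEWAY line
# apply with: service network restart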
3) Configure SSH user equivalence for node 3 (run the steps below as the oracle user on RAC3)
===================================================================
# Generate key pairs for the oracle user if the clone does not already have them
ssh-keygen -t rsa
ssh-keygen -t dsa
# Add RAC3's own public keys first, so rac1 and rac2 will also trust rac3
cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# Distribute the completed authorized_keys file back to the existing nodes
scp ~/.ssh/authorized_keys rac1:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
4) Add entries in /etc/hosts file
========================
Update the /etc/hosts file on all three nodes so that every node resolves the public, private interconnect, and virtual (VIP) hostnames of all three nodes.
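A sketch of the layout (the VIP addresses shown are placeholders; keep the VIP addresses that were registered during the original installation):
127.0.0.1        localhost.localdomain        localhost
# Public
192.168.2.131    rac1.mycorpdomain.com        rac1
192.168.2.132    rac2.mycorpdomain.com        rac2
192.168.2.133    rac3.mycorpdomain.com        rac3
# Private interconnect
10.10.10.31      rac1-priv.mycorpdomain.com   rac1-priv
10.10.10.32      rac2-priv.mycorpdomain.com   rac2-priv
10.10.10.33      rac3-priv.mycorpdomain.com   rac3-priv
# Virtual IPs (placeholder addresses)
192.168.2.31     rac1-vip.mycorpdomain.com    rac1-vip
192.168.2.32     rac2-vip.mycorpdomain.com    rac2-vip
192.168.2.33     rac3-vip.mycorpdomain.com    rac3-vip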
Above should be the format of the entries on all three nodes.
5) CHECK IF SSH IS WORKING FINE
=============================
Test the commands below from the new node RAC3, and repeat the equivalent tests from RAC1 and RAC2. None of them should prompt for a password (answer yes once to any first-time host-key authenticity prompt).
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
ssh rac1.mycorpdomain.com date
ssh rac2.mycorpdomain.com date
ssh rac1-priv.mycorpdomain.com date
ssh rac2-priv.mycorpdomain.com date
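Once the host keys have been accepted, the whole matrix can be re-checked with a small loop; BatchMode makes ssh fail instead of prompting, so any broken trust shows up immediately:
for host in rac1 rac2 rac3 rac1-priv rac2-priv rac3-priv; do
    ssh -o BatchMode=yes $host date || echo "equivalence broken for $host"
done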
6) Edit Cluster Configuration File
==========================
Edit the cluster configuration file /etc/ocfs2/cluster.conf on ALL THREE nodes to read as follows. Note that cluster.conf is layout-sensitive: each attribute line under a node: or cluster: stanza must be indented, conventionally with a single tab.
[root@rac1 ~]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.2.131
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.132
        number = 1
        name = rac2
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.133
        number = 2
        name = rac3
        cluster = ocfs2

cluster:
        node_count = 3
        name = ocfs2
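For rac3 to actually join the OCFS2 cluster and see the OCR and voting disk, the o2cb cluster stack must be brought online on the new node and the shared volume mounted. A minimal sketch, assuming the shared volume is /dev/sdb1 mounted at /u02 as on the existing nodes (adjust the device and mount point to your layout):
# as root on rac3
/etc/init.d/o2cb configure        # enable the stack on boot; cluster name ocfs2
/etc/init.d/o2cb online ocfs2
mkdir -p /u02
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /u02
# add the matching /etc/fstab entry so the volume mounts at boot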
7) Configure Oracle Clusterware
==========================
The installation of Oracle Clusterware and the Oracle Database software will be pushed from the rac1 node, so you do not need the software CD.
On the first node rac1, execute the script $CRS_HOME/oui/bin/addNode.sh as the oracle user.
On the Welcome screen, click Next.
On the Specify Cluster Nodes to Add to Installation screen, enter rac3 as the Public Node Name; the rest of the columns are filled in automatically.
Click Next.
On the Cluster Node Addition Summary screen, click Install.
Once all required Clusterware components are copied from rac1 to rac3, OUI prompts to execute three files:
/u01/app/oracle/oraInventory/orainstRoot.sh - run on node rac3
/u01/app/oracle/product/10.2.0/crs/install/rootaddnode.sh - run on node rac1
/u01/app/oracle/product/10.2.0/crs/root.sh - run on node rac3
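These must be run in the order listed, each as the root user on the indicated node; the sequence sketched out:
[root@rac3 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs/install/rootaddnode.sh
[root@rac3 ~]# /u01/app/oracle/product/10.2.0/crs/root.sh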
Manually configure the VIP address using the vipca GUI as the root user - <<< CAN BE SKIPPED >>>
cd /u01/app/oracle/product/10.2.0/crs/bin
./vipca
Upon completion of the Oracle Clusterware installation, check the following files:
[root@rac3 root]# ls -ltr /etc/init.d/init.*
-r-xr-xr-x  1 root root  3197 Aug 13 23:32 /etc/init.d/init.evmd
-r-xr-xr-x  1 root root 35401 Aug 13 23:32 /etc/init.d/init.cssd
-r-xr-xr-x  1 root root  4721 Aug 13 23:32 /etc/init.d/init.crsd
-r-xr-xr-x  1 root root  1951 Aug 13 23:32 /etc/init.d/init.crs
The operating system's /etc/inittab file is updated with the following entries:
[root@rac3 root]# tail -5 /etc/inittab
# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
Click OK after all the listed scripts have run on all nodes.
On the End of Installation screen, click Exit.
Verify that the Clusterware has all the nodes registered, using the olsnodes command.
[oracle@rac3 oracle]$ olsnodes
rac1
rac2
rac3
Verify that the cluster services are started, using the crs_stat command.
[oracle@rac3 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....B1.inst application    ONLINE    ONLINE    rac1
ora....B2.inst application    ONLINE    ONLINE    rac2
ora.BRIJ.db    application    ONLINE    ONLINE    rac2
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
ora.rac3.gsd   application    ONLINE    ONLINE    rac3
ora.rac3.ons   application    ONLINE    ONLINE    rac3
ora.rac3.vip   application    ONLINE    ONLINE    rac3
Verify that the VIP services are configured on the new node; the VIP appears as a secondary alias on the public interface (e.g. eth0:1):
ifconfig -a
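The same can be confirmed from Clusterware by querying the VIP resource directly (resource names follow the ora.<node>.vip convention); output along these lines is expected:
[oracle@rac3 ~]$ crs_stat ora.rac3.vip
NAME=ora.rac3.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3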
8) Install Oracle Database Software
============================
To install the Oracle Database software on the new node, execute the add-node script $ORACLE_HOME/oui/bin/addNode.sh from the already-configured node rac1:
run $ORACLE_HOME/oui/bin/addNode.sh
At the Welcome screen, click Next
Select the new node rac3 and click Next
Verify that the new node is listed and click Install
When prompted, run the root.sh script from another window as the root user on the new node rac3
Click OK and then Exit
Run $ORACLE_HOME/bin/netca to configure the default listener on the new node.
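To confirm that the node applications, including the new listener, are up on rac3, srvctl can be used; the expected output is along these lines:
[oracle@rac3 ~]$ srvctl status nodeapps -n rac3
VIP is running on node: rac3
GSD is running on node: rac3
Listener is running on node: rac3
ONS daemon is running on node: rac3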
9) Add New Database Instance
========================
To add the database instance on the new node, run the DBCA GUI from the configured node rac1:
run $ORACLE_HOME/bin/dbca on node: rac1
At the Welcome screen, select Oracle Real Application Clusters database and click Next
Step 1 of 6: Operations screen, select Instance Management
Step 2 of 6: Select Instance Management, select Add an Instance
Step 3 of 6: List of cluster databases - select the BRIJ database and enter the SYS username and password
Step 4 of 6: List of cluster database instances - DBCA lists all the instances currently available on the cluster; just click Next
Step 5 of 6: Instance naming and node selection - DBCA proposes the new instance name BRIJ3 and asks for the node on which to add the instance. Select rac3 and click Next
Step 6 of 6: Instance Storage - DBCA lists the instance-specific storage, such as the undo tablespace and redo log groups; click Finish and then OK on the summary
DBCA starts creating the new instance BRIJ3....
Click Yes, to extend ASM to node rac3
Click Yes, to configure LISTENER_RAC3
When asked "Do you want to perform another operation?" Click No
At this stage, the following is true:
The clusterware has been installed on node rac3 and is now part of the cluster
The Oracle software has been installed on node rac3
The ASM instance ASM3 and the new Oracle instance BRIJ3 have been created and configured on rac3
Verify that the new node is successfully installed and configured:
View V$ACTIVE_INSTANCES from any of the participating instances. For example:
SQL> select * from v$active_instances;

INST_NUMBER INST_NAME
----------- ------------------------------------------------------------
          1 rac1.mycorpdomain.com:brij1
          2 rac2.mycorpdomain.com:brij2
          3 rac3.mycorpdomain.com:brij3
Verify that OCR is aware of the new instance in the cluster:
[oracle@rac3 ~]$ srvctl status database -d BRIJ
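With the new instance registered in the OCR, the output should list one running instance per node, along these lines:
Instance BRIJ1 is running on node rac1
Instance BRIJ2 is running on node rac2
Instance BRIJ3 is running on node rac3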
Now you have successfully added the third node to the Oracle 10g RAC cluster.
10) CHECK SERVICES
==================
$ ./crs_stat -t
Name Type Target State Host
-------------------------------------------------------------------------------------------
ora....j1.inst application ONLINE ONLINE rac1
ora....j2.inst application ONLINE ONLINE rac2
ora....j3.inst application ONLINE ONLINE rac3
ora.brij.db application ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
ora....SM3.asm application ONLINE ONLINE rac3
ora....C3.lsnr application ONLINE ONLINE rac3
ora.rac3.gsd application ONLINE ONLINE rac3
ora.rac3.ons application ONLINE ONLINE rac3
ora.rac3.vip application ONLINE ONLINE rac3
All the services should be online.
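As a final sanity check, the same picture can be confirmed from SQL on any instance (instance names sketched per the naming convention used in this article):
SQL> select inst_id, instance_name, status from gv$instance order by inst_id;

   INST_ID INSTANCE_NAME    STATUS
---------- ---------------- ------------
         1 BRIJ1            OPEN
         2 BRIJ2            OPEN
         3 BRIJ3            OPEN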