Having successfully created two (2) VM images under XEN with OEL7 using the revised network design, I now proceed with the installation of Grid Infrastructure 12.1.0.2 on that cluster.
Following the procedure in 3.2 Installing the Oracle Preinstallation RPM From Unbreakable Linux Network, I ran the following command on both REDFERN1 and REDFERN2 (assuming what is valid for OEL6 is also valid for OEL7):
yum install oracle-rdbms-server-12cR1-preinstall
There was a large amount of output which I did not capture, but the installation appeared to be successful.
Changed the password for the oracle user on both REDFERN1 and REDFERN2.
Unfortunately, the hostname command does not change the hostname permanently. I had to do the following (as root) on REDFERN1, and the equivalent on REDFERN2:
cat >>/etc/sysctl.conf <<DONE
kernel.hostname = redfern1.yaocm.id.au
DONE
reboot
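On OEL7 (systemd), an alternative that persists the hostname without editing sysctl.conf is to set /etc/hostname, or to run hostnamectl set-hostname. A minimal sketch, rehearsed against a hypothetical /tmp copy so nothing on the system is actually changed:

```shell
# On a real node this would target /etc/hostname
# (or, equivalently: hostnamectl set-hostname redfern1.yaocm.id.au)
# A hypothetical scratch path is used here so the sketch is safe to run anywhere.
echo 'redfern1.yaocm.id.au' > /tmp/hostname.demo
cat /tmp/hostname.demo
```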
Create the Oracle Grid Infrastructure Software Home as follows on both REDFERN1 and REDFERN2:
sudo mkdir /opt/app/grid_infra
sudo chown oracle:oinstall /opt/app/grid_infra
From CRONULLA, the OEM agent was pushed out to both REDFERN1 and REDFERN2. I ignored the warning about RHEL 7 not being supported.
Ran the following command to change the ownership of the shared disks so that they can be discovered by ASM:
sudo chown oracle:dba /dev/xvd[d-h]
The ownership of these shared disks was verified as follows:
ls -l /dev/xvd[d-h]
And the output is:
brw-rw----. 1 oracle dba 202,  48 Jan  1 20:48 /dev/xvdd
brw-rw----. 1 oracle dba 202,  64 Jan  1 20:48 /dev/xvde
brw-rw----. 1 oracle dba 202,  80 Jan  1 20:48 /dev/xvdf
brw-rw----. 1 oracle dba 202,  96 Jan  1 20:48 /dev/xvdg
brw-rw----. 1 oracle dba 202, 112 Jan  1 20:48 /dev/xvdh
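Note that this chown does not survive a reboot, because the device nodes are recreated at boot (the ownership has to be reset later in this post after the cluster restart). A udev rule is one way to make the ownership persistent; a sketch, assuming the device names xvdd through xvdh are stable across reboots (the rules file name below is hypothetical):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (hypothetical file name)
# Re-apply oracle:dba ownership to the shared disks each time they are created
KERNEL=="xvd[d-h]", OWNER="oracle", GROUP="dba", MODE="0660"
```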
Instead of unpacking the Grid software onto the local host, I now use the NFS directory set up in Use NFS for Oracle Software.
From PENRITH, I ran the following commands in an XTerm session:
xhost +192.168.1.140
ssh -Y oracle@192.168.1.140
Once on REDFERN1, I ran the following commands:
cd /opt/share/Software/grid/linuxamd64_12102/grid
./runInstaller
Left the default option as Install and Configure Oracle Grid Infrastructure:
Clicked Next.
Left the default option as Configure a Standard Cluster:
Clicked Next.
Changed option to Advanced Installation:
Clicked Next.
Left the default language English:
Clicked Next.
Made the following changes:

Scan Name: redfern-crs.yaocm.id.au
Configure GNS: No
Clicked Next.
Got the following screen:
Clicked Add....
Added the following details for REDFERN2 as follows:
Clicked OK to get the following screen:
Clicked SSH connectivity..., and entered the password:
Clicked Setup. After a few minutes, the following message appears:
Clicked OK. Then clicked Next.
Got the following screen—no changes were made:
Clicked Next.
No changes were made on the following screen:
Clicked Next.
Got a warning message which was expanded to get the following details:
Clicked OK.
Clicked Back to go back to Step 7 and made the following change to use eth1 as private only:
Clicked Next and on the next screen as well.
On the following screen,
Clicked Change Discovery Path... and changed the path to /dev/xvd* as follows:
Clicked OK.
Created VOTE disk group on /dev/xvdh:
Clicked Next.
Got a warning message which was expanded to get the following details:
Clicked OK.
Clicked Cancel to stop the installation.
I shut down the REDFERN cluster.
On VICTORIA, I increased the size of the shared VOTE_01 disk as follows:
cd /OVS/running_pool/REDFERN/shared
dd if=/dev/zero of=VOTE_01 bs=1G count=6
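This dd command recreates the VOTE_01 image as 6 GiB of zeros; the previous contents are discarded, which is fine here since the disk group had not yet been created. The effect can be sketched safely on a small scratch file (hypothetical /tmp path, MiB instead of GiB):

```shell
# Recreate a scratch image at 6 MiB (the real command uses bs=1G for 6 GiB)
dd if=/dev/zero of=/tmp/VOTE_01.demo bs=1M count=6 2>/dev/null
stat -c %s /tmp/VOTE_01.demo   # prints 6291456 (6 * 1048576 bytes)
```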
The REDFERN cluster was then started.
Once the cluster was running, I had to change the ownership of the shared disks again:
sudo chown oracle:dba /dev/xvd[d-h]
The installation process was started again.
Now the screen is:
Clicked Next.
Filled in the passwords on the following screen:
Clicked Next.
Only one (1) option was available on the following screen:
Clicked Next.
Filled in the OEM details as follows:
Clicked Next.
Filled in the OS group details as follows:
Clicked Next.
Got the following message after expanding the details:
Clicked Yes.
Set up the software location as follows:
Clicked Next.
Since this is a home system, I can use root's password as follows:
Clicked Next.
The following progress screen appears:
After a while, the following results appeared:
There is only one (1) failed check. The details for the resolv.conf integrity check are:
This can be ignored as the domain (yaocm.id.au) is only defined on my home network.
Clicked Close, then Fix & Check Again. The following screen appears:
Clicked OK to let OUI run the fix-up script using the credentials provided in Step 15. The following screen shows that the fix-up was successful:
However, the following problems remain:
Chose to ignore all of these problems, and clicked Next. The following warning appears:
Clicked Yes.
The following summary screen appears:
Clicked Install.
The following progress screen appears:
After a while, the following message appears:
Clicked Yes.
However, this failed with the following error message:
According to the log at ~/oraInventory/logs/installActions2016-01-02_12-50-14PM.log:
2016/01/02 21:01:27 CLSRSC-12: The ASM resource ora.asm did not start
2016/01/02 21:01:27 CLSRSC-258: Failed to configure and start ASM
Died at /opt/app/grid_infra/12.1.0/grid/crs/install/crsinstall.pm line 2017.
The command '/opt/app/grid_infra/12.1.0/grid/perl/bin/perl -I/opt/app/grid_infra/12.1.0/grid/perl/lib -I/opt/app/grid_infra/12.1.0/grid/crs/install /opt/app/grid_infra/12.1.0/grid/crs/install/rootcrs.pl -auto -lang=en_AU.UTF-8' execution failed
And in /opt/app/grid_infra/12.1.0/grid/cfgtoollogs/crsconfig/rootcrs_redfern1_2016-01-02_08-53-53PM.log:
2016-01-02 21:01:21: Executing cmd: /opt/app/grid_infra/12.1.0/grid/bin/crsctl status resource ora.asm -init
2016-01-02 21:01:22: Checking the status of ora.asm
2016-01-02 21:01:27: Executing cmd: /opt/app/grid_infra/12.1.0/grid/bin/clsecho -p has -f clsrsc -m 12
2016-01-02 21:01:27: Command output:
>  CLSRSC-12: The ASM resource ora.asm did not start
>End Command output