Richmond Cluster (16)

Friday 09 May, 2008 - 20:50

Updated the kernel parameters (refer to pp. 2-37 to 2-38 of Clusterware Installation for Linux):

$ su -
# cat >>/etc/sysctl.conf
kernel.sem = 250 32000 100 128
kernel.shmmax = 526059520
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
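
The reboot further down picks these up anyway, but they can also be applied immediately by having sysctl re-read the file:

# /sbin/sysctl -p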

Updated the shell limits for the oracle user (refer to p. 2-38 of Clusterware Installation for Linux):

$ su -
# cat >>/etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
# cat >>/etc/pam.d/login
session required pam_limits.so
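
A quick sanity check, not from the guide: after logging in as the oracle user, ulimit should report the values added above:

$ ulimit -Sn ; ulimit -Hn    # open files: 1024 (soft) / 65536 (hard)
$ ulimit -Su ; ulimit -Hu    # processes: 2047 (soft) / 16384 (hard)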

Updated the security on the raw devices for the OCR (refer to pp. 3-15 to 3-16 of Clusterware Installation for Linux):

$ su -
# chown root:oinstall /dev/raw/raw1 /dev/raw/raw16
# chmod 640 /dev/raw/raw1 /dev/raw/raw16

Updated the security on the raw devices for the voting disks (refer to p. 3-35 of Clusterware Installation for Linux):

$ su -
# chown oracle:oinstall /dev/raw/raw2 /dev/raw/raw17 /dev/raw/raw32
# chmod 640 /dev/raw/raw2 /dev/raw/raw17 /dev/raw/raw32
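
One caution, based on general Linux behaviour rather than anything on those pages: on some distributions the /dev/raw/* nodes are recreated at boot, so these ownership and permission changes may not survive the reboot below. If that turns out to be the case here, the usual workaround is to repeat the commands from a boot script such as /etc/rc.local:

# cat >>/etc/rc.local
chown root:oinstall /dev/raw/raw1 /dev/raw/raw16
chmod 640 /dev/raw/raw1 /dev/raw/raw16
chown oracle:oinstall /dev/raw/raw2 /dev/raw/raw17 /dev/raw/raw32
chmod 640 /dev/raw/raw2 /dev/raw/raw17 /dev/raw/raw32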

Rebooted richmond1

Ran OUI for one (1) node only (richmond1). When I ran the root scripts, I got the following messages:

# /u00/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u00/app/oracle/oraInventory to 770.
Changing groupname of /u00/app/oracle/oraInventory to oinstall.
The execution of the script is complete

# /u00/crs/oracle/product/10/app/root.sh
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
assigning default hostname richmond1 for node 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: richmond1 richmond1-priv richmond1

Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Now formatting voting device: /dev/raw/raw17
Now formatting voting device: /dev/raw/raw32
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
richmond1
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.

Checking the status of the CRS stack:

# /u00/crs/oracle/product/10/app/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

Clicked OK in OUI to continue, and the post-installation verification failed - presumably because of the VIP.

Ran /u00/crs/oracle/product/10/app/bin/vipca as root. Got the following messages:

CRS-1006: No more members to consider
CRS-0215: Could not start resource 'ora.richmond1.vip'

The logs in /u00/crs/oracle/product/10/app/log/client are next to useless; I cannot find any message explaining why this resource will not come online.
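
One avenue left to check (my assumption, based on the "eth0 is not public" message from vipca, not anything the logs say) is how Clusterware has the network interfaces classified. oifcfg shows what is registered, and can mark eth0 as public if it is not (the subnet below is a placeholder for the actual public subnet):

$ /u00/crs/oracle/product/10/app/bin/oifcfg getif
$ /u00/crs/oracle/product/10/app/bin/oifcfg setif -global eth0/<public-subnet>:public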

Although this resource is not online, the installation is deemed to be successful when the installation verification is retried in OUI. Apparently, cluvfy just checks for the existence of the resource, not caring whether it is online or not.
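
For the record, the actual state of the resource (rather than its mere existence) can be seen with crs_stat, which should report it as OFFLINE:

# /u00/crs/oracle/product/10/app/bin/crs_stat -t ora.richmond1.vip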