Oracle EE ASM 12c LXC Ubuntu 14.04

Contents

  1. Overview
  2. Modify /etc/sysctl.conf File on Host Server
  3. Edit /etc/security/limits.conf File on Host Server
  4. Install Oracle Instant Client on Host Server
  5. Install LXC
  6. Create LXC Container for Oracle ASM
  7. Gather Information about LXC Container
  8. Connect to LXC Container via Console
  9. Disconnect from LXC Console
  10. Connect to LXC Containers via SSH
  11. Add LXC Container to /etc/hosts File
  12. Create required Users and Groups for Oracle Install
  13. Set Passwords and Test su for "grid" and "oracle" Linux Users
  14. Set "umask" for "grid" Linux User
  15. Set "umask" for "oracle" Linux User
  16. Install Required Packages for Oracle Install
  17. Chkconfig "sendmail" off
  18. Install the /etc/sysctl.conf File Required for Oracle
  19. Edit the Container Configuration File
  20. Install device-mapper-multipath on Host Server
  21. Prepare Storage for ASM Diskgroups
  22. Add Command to /etc/rc.local to Set Ownership of ASM Block Device
  23. Restart the LXC Container
  24. Run the Command to Verify Oracle Kernel Parameter Settings
  25. Stage Oracle 12c EE ASM Installation Media
  26. Set Permissions on Install Media
  27. Unzip Install Media
  28. Install the Oracle "cvuqdisk" rpm Package
  29. Setup Public Key and "no-password" SSH for "grid" Linux user
  30. Run "cluvfy", the Oracle Cluster Verification Utility
  31. Install Oracle Enterprise Edition ASM 12c
    31.1 Login as Linux User "grid" and Test "xclock"
    31.2 Execute Oracle GUI runInstaller
    31.3 Successful Output of Configuration Script 1
    31.4 Successful Output of Configuration Script 2
  32. Check that Expected Grid Infrastructure Processes Are Running
  33. Modify $PATH for root in .bashrc File
  34. Add Environment Variables to .bashrc for "grid" Linux User
  35. Check Status of Oracle Grid Infrastructure (GI)
  36. Shutdown Container and Reboot Host


Install Oracle Enterprise Edition 12c ASM on Ubuntu 14.04.1 (Server or Desktop) in 15 minutes?

Here's how, using LXC (Linux Containers).

Overview

Installation of Oracle ASM and Oracle RDBMS has always been problematic on the native Ubuntu OS. There are fundamental differences between the architecture of Ubuntu and the Red Hat family of Linuxes which require special, unsupported techniques. Linux Containers fundamentally change this and make it possible to install Oracle ASM (11g or 12c) and Oracle RDBMS (11g or 12c) on Ubuntu easily and quickly. The installation of Oracle ASM and RDBMS in Linux Containers running on Ubuntu is not, as far as I know, officially supported by Oracle. Nevertheless, this guide shows how to deploy the Oracle Enterprise Edition ASM and RDBMS products on Ubuntu as a proof-of-concept (POC) and for personal use. Until the advent of Linux Containers, the only version of Oracle with any significant use on Ubuntu was Oracle XE, and even installing Oracle XE on Ubuntu was not straightforward, both because there is no Oracle-vended *.deb package ("alien" must be used to convert the Oracle-vended Oracle XE *.rpm package to *.deb format) and because several non-standard tweaks are needed, as described here

The advent of Linux Containers (in particular, for this blog post, the LXC flavor) alters the picture significantly: it makes deployment of the mainline Oracle Enterprise Edition products on Ubuntu quick, easy and feasible, albeit still officially unsupported by Oracle Corporation as far as I know.

Modify /etc/sysctl.conf File on Host Server

Modify the /etc/sysctl.conf file on the host server as shown below. The parts to be added are the Oracle-specific settings following the "# Oracle" comment at the end of the file.

gstanden@vmem1:~$ cat /etc/sysctl.conf
# /etc/sysctl.conf - Configuration file for setting system variables
# See /etc/sysctl.d/ for additional system variables.
# See sysctl.conf (5) for information.
#

#kernel.domainname = example.com

# Uncomment the following to stop low-level messages on console
#kernel.printk = 3 4 1 3

##############################################################3
# Functions previously found in netbase
#

# Uncomment the next two lines to enable Spoof protection (reverse-path filter)
# Turn on Source Address Verification in all interfaces to
# prevent some spoofing attacks
#net.ipv4.conf.default.rp_filter=1
#net.ipv4.conf.all.rp_filter=1

# Uncomment the next line to enable TCP/IP SYN cookies
# See http://lwn.net/Articles/277146/
# Note: This may impact IPv6 TCP sessions too
#net.ipv4.tcp_syncookies=1

# Uncomment the next line to enable packet forwarding for IPv4
#net.ipv4.ip_forward=1

# Uncomment the next line to enable packet forwarding for IPv6
#  Enabling this option disables Stateless Address Autoconfiguration
#  based on Router Advertisements for this host
#net.ipv6.conf.all.forwarding=1


###################################################################
# Additional settings - these settings can improve the network
# security of the host and prevent against some network attacks
# including spoofing attacks and man in the middle attacks through
# redirection. Some network environments, however, require that these
# settings are disabled so review and enable them as needed.
#
# Do not accept ICMP redirects (prevent MITM attacks)
#net.ipv4.conf.all.accept_redirects = 0
#net.ipv6.conf.all.accept_redirects = 0
# _or_
# Accept ICMP redirects only for gateways listed in our default
# gateway list (enabled by default)
# net.ipv4.conf.all.secure_redirects = 1
#
# Do not send ICMP redirects (we are not a router)
#net.ipv4.conf.all.send_redirects = 0
#
# Do not accept IP source route packets (we are not a router)
#net.ipv4.conf.all.accept_source_route = 0
#net.ipv6.conf.all.accept_source_route = 0
#
# Log Martian Packets
#net.ipv4.conf.all.log_martians = 1
#
# Oracle

kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576

net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

vm.nr_hugepages = 2060


gstanden@vmem1:~$
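The vm.nr_hugepages figure depends on the SGA you plan to configure (2060 pages above corresponds to a little over 4 GiB). A minimal sketch of the arithmetic, assuming the default 2 MiB hugepage size (check Hugepagesize in /proc/meminfo) and a hypothetical 4 GiB SGA target:

```shell
# Round an SGA target up to whole 2 MiB hugepages (values are hypothetical).
sga_mb=4096                                   # planned SGA size in MiB
hugepage_kb=2048                              # default hugepage size, in KiB
nr_hugepages=$(( (sga_mb * 1024 + hugepage_kb - 1) / hugepage_kb ))
echo "vm.nr_hugepages = $nr_hugepages"
```

Leave a little headroom above the computed value, as the author's 2060 does.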

Edit /etc/security/limits.conf File on Host Server

Add the lines for Oracle at the end of the /etc/security/limits.conf file as shown below; they follow the "# Oracle Kernel Parameters" comment.

gstanden@vmem1:~$ cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - a user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#        - NOTE: group and wildcard limits are not applied to root.
#          To apply a limit to the root user, <domain> must be
#          the literal username root.
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#        - chroot - change root to directory (Debian-specific)
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#root            hard    core            100000
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#ftp             -       chroot          /ftp
#@student        -       maxlogins       4

# Oracle Kernel Parameters

oracle    soft    nproc       2047
oracle    hard    nproc      16384
oracle    soft    nofile      1024
oracle    hard    nofile     65536
oracle    soft    stack      10240
oracle    hard    stack      10240
*         soft    memlock  9873408
*         hard    memlock  9873408

gstanden@vmem1:~$
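The memlock value (9873408 KiB above) is sized per host, commonly at roughly 90% of physical RAM so the SGA can be locked in memory. A sketch of that calculation, assuming the 90% rule of thumb (the number it prints depends on the machine it runs on):

```shell
# Compute ~90% of physical RAM in KiB as a candidate memlock limit.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
memlock_kb=$(( mem_kb * 90 / 100 ))
printf '*         soft    memlock  %s\n' "$memlock_kb"
printf '*         hard    memlock  %s\n' "$memlock_kb"
```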

Install Oracle Instant Client on Host Server

Install Oracle Instant Client on host server as described here.

Install LXC

Install LXC as shown below. There are some good howto guides here and here covering LXC installation on Ubuntu; in its simplest form, all that is required is to install the lxc package. The lxc package is already installed on my system, so your output will differ from that shown below, but the command is the same.

gstanden@vmem1:~$ sudo apt-get install lxc
[sudo] password for gstanden:
Reading package lists... Done
Building dependency tree      
Reading state information... Done
lxc is already the newest version.
The following package was automatically installed and is no longer required:
  linux-image-extra-3.13.0-32-generic
Use 'apt-get autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 8 not upgraded.
gstanden@vmem1:~$

Create LXC Container for Oracle ASM

Create an Oracle Enterprise Linux 6.5 container as shown below. Note that at the end, information is provided about the just-installed LXC container.

gstanden@vmem1:~$ sudo lxc-create -t oracle -n lxcora10
Host is Ubuntu 14.04
No release specified with -R, defaulting to 6.5
Create configuration file /var/lib/lxc/lxcora10/config
Yum installing release 6.5 for x86_64
.
.
.
( output truncated for brevity )
.
.
.
Complete!
Rebuilding rpm database
Patching container rootfs /var/lib/lxc/lxcora10/rootfs for Oracle Linux 6.5
Configuring container for Oracle Linux 6.5
Added container user:oracle password:oracle
Added container user:root password:root
Container : /var/lib/lxc/lxcora10/rootfs
Config    : /var/lib/lxc/lxcora10/config
Network   : eth0 (veth) on virbr0

gstanden@vmem1:~$

Gather Information about LXC Container

Information about the running container can be gathered as shown below.

gstanden@vmem1:~$ sudo lxc-info -n lxcora10
Name:           lxcora10
State:          STOPPED

gstanden@vmem1:~$ sudo lxc-start -n lxcora10

gstanden@vmem1:~$ sudo lxc-info -n lxcora10
Name:           lxcora10
State:          RUNNING
PID:            12383
IP:             10.0.3.110
CPU use:        0.75 seconds
BlkIO use:      15.57 MiB
Memory use:     10.25 MiB
KMem use:       0 bytes
Link:           vethUNYH04
 TX bytes:      1.42 KiB
 RX bytes:      4.53 KiB
 Total bytes:   5.96 KiB
gstanden@vmem1:~$
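For scripting, the container's address can be pulled out of lxc-info output with awk (on LXC versions that support them, the -i and -H flags print just the address). The sketch below parses a captured sample of the output above, since lxc-info itself needs a running container:

```shell
# Extract the IP: field from lxc-info-style output.
sample='Name:           lxcora10
State:          RUNNING
PID:            12383
IP:             10.0.3.110'
ip=$(printf '%s\n' "$sample" | awk '/^IP:/ {print $2}')
echo "$ip"
```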

Connect to LXC Container via Console

The LXC container comes with a default console connection which can be used as shown below.  Notice that the LXC container runs the same kernel as the host: not just the same version, but literally the exact same kernel. All of the Linux Containers share the host's one kernel.

gstanden@vmem1:~$ sudo lxc-console -n lxcora10

Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself


Oracle Linux Server release 6.5
Kernel 3.13.11.6 on an x86_64
lxcora10 login: root
Password:
Last login: Wed Sep 17 16:10:08 on lxc/tty1
[root@lxcora10 ~]# uname -a
Linux lxcora10 3.13.11.6 #1 SMP Mon Sep 15 11:54:55 CDT 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@lxcora10 ~]# 

Disconnect from LXC Console

Disconnect from the LXC container console by typing "exit" at the command prompt and then pressing <Ctrl+a> followed by "q", as shown below.

Oracle Linux Server release 6.5
Kernel 3.13.11.6 on an x86_64

lxcora10 login:

gstanden@vmem1:~$

Connect to LXC Containers via SSH

The lxc-info command in a previous step provides the IP address.  There is no DNS resolution (unless it has been set up separately using bind9 or dnsmasq), so the IP address is used as shown below.

gstanden@vmem1:~$ sudo lxc-info -n lxcora10
Name:           lxcora10
State:          RUNNING
PID:            12383
IP:             10.0.3.110
CPU use:        0.83 seconds
Memory use:     8.76 MiB
KMem use:       0 bytes
Link:           vethUNYH04
 TX bytes:      1.42 KiB
 RX bytes:      5.16 KiB
 Total bytes:   6.58 KiB
gstanden@vmem1:~$ ssh root@10.0.3.110
The authenticity of host '10.0.3.110 (10.0.3.110)' can't be established.
RSA key fingerprint is fc:4e:2d:4a:54:f5:f2:11:7f:d8:53:6c:a5:e5:1b:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.3.110' (RSA) to the list of known hosts.
root@10.0.3.110's password:
Last login: Wed Sep 17 16:11:28 2014
[root@lxcora10 ~]# uname -a
Linux lxcora10 3.13.11.6 #1 SMP Mon Sep 15 11:54:55 CDT 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@lxcora10 ~]# hostname
lxcora10
[root@lxcora10 ~]#

Add LXC Container to /etc/hosts File

Add the LXC container to the /etc/hosts file as shown below.

gstanden@vmem1:~$ cat /etc/hosts
127.0.0.1    localhost
127.0.1.1    vmem1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

10.0.3.110    lxcora10.vmem.org    lxcora10
gstanden@vmem1:~$
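The same edit can be scripted idempotently, appending the entry only if it is not already present. A sketch, writing to a temp file instead of the real /etc/hosts so it is safe to try anywhere:

```shell
# Append a hosts entry only if absent; $hosts stands in for /etc/hosts here.
hosts=$(mktemp)
entry='10.0.3.110    lxcora10.vmem.org    lxcora10'
add_entry() { grep -q 'lxcora10' "$hosts" || printf '%s\n' "$entry" >> "$hosts"; }
add_entry
add_entry                      # running it twice does not duplicate the line
lines=$(wc -l < "$hosts")
cat "$hosts"
rm -f "$hosts"
```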

Create required Users and Groups for Oracle Install

Create the required users and groups for the Oracle ASM install as shown below.  Note that the "oracle" LXC template comes with an "oracle" user already installed, with UID 500.  I recommend not changing this UID and instead just modifying the user's groups as shown below; changing the UID can lead to odd issues with "su -" and "ssh" in some cases.

gstanden@vmem1:~$ ssh root@lxcora10
root@lxcora10's password:
Last login: Thu Sep 18 00:57:04 2014 from 10.0.3.1
[root@lxcora10 ~]# vi create_users.sh
[root@lxcora10 ~]# chmod +x create_users.sh
[root@lxcora10 ~]# cat create_users.sh
groupadd -g 1000 oinstall
groupadd -g 1100 asmadmin
groupadd -g 1200 dba
groupadd -g 1300 asmdba
groupadd -g 1201 oper
groupadd -g 1301 asmoper
useradd -u 1098 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
usermod -a -G dba,asmdba,oper,oinstall oracle
mkdir -p /u00/app/12.1.0/grid
mkdir -p /u00/app/grid
chown -R grid:oinstall /u00
mkdir -p /u00/app/oracle
chown oracle:oinstall /u00/app/oracle
chmod -R 775 /u00


[root@lxcora10 ~]# ./create_users.sh
[root@lxcora10 ~]#

Set Passwords and Test su for "grid" and "oracle" Linux Users

Set passwords for the "oracle" and "grid" users and test "su -" for each, as shown below.

[root@lxcora01 ~]# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lxcora01 ~]# passwd grid
Changing password for user grid.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lxcora01 ~]# su - oracle
[oracle@lxcora01 ~]$ exit
logout
[root@lxcora01 ~]# su - grid
[grid@lxcora01 ~]$ exit
logout
[root@lxcora01 ~]#

Set "umask" for "grid" Linux User

Set the umask for the "grid" Linux user by adding "umask 022" to its ~/.bashrc, as shown below.

[root@lxcora01 ~]# cat .bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

umask 022

[root@lxcora01 ~]#

Set "umask" for "oracle" Linux User

Set "umask" for "oracle" as shown below.

[root@lxcora01 ~]# su - oracle
[oracle@lxcora01 ~]$ vi .bashrc
[oracle@lxcora01 ~]$ cat .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# User specific aliases and functions

umask 022
[oracle@lxcora01 ~]$
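What umask 022 buys you, in one quick test: new files come out mode 644 and new directories 755, which is what the Oracle installer expects for files created by the "grid" and "oracle" users. A self-contained demonstration:

```shell
# Show the permissions that umask 022 produces for new files and dirs.
d=$(mktemp -d)
( umask 022; touch "$d/f"; mkdir "$d/sub" )
fperm=$(stat -c '%a' "$d/f")
dperm=$(stat -c '%a' "$d/sub")
echo "file: $fperm  dir: $dperm"
rm -rf "$d"
```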

Install Required Packages for Oracle Install

Install the packages required by the Oracle installer using a script such as packages.sh below. The script lives in the container's rootfs (viewed here from the host); run it as root inside the container, since yum is not available on the Ubuntu host.

root@vmem1:/var/lib/lxc/lxcora9/rootfs/root# more packages.sh
yum -y install binutils
yum -y install compat-libcap1
yum -y install compat-libstdc++-33
yum -y install compat-libstdc++-33.i686
yum -y install gcc
yum -y install gcc-c++
yum -y install glibc
yum -y install glibc.i686
yum -y install glibc-devel
yum -y install glibc-devel.i686
yum -y install ksh
yum -y install libgcc
yum -y install libgcc.i686
yum -y install libstdc++
yum -y install libstdc++.i686
yum -y install libstdc++-devel
yum -y install libstdc++-devel.i686
yum -y install libaio
yum -y install libaio.i686
yum -y install libaio-devel
yum -y install libaio-devel.i686
yum -y install libXext
yum -y install libXext.i686
yum -y install libXtst
yum -y install libXtst.i686
yum -y install libX11
yum -y install libX11.i686
yum -y install libXau
yum -y install libXau.i686
yum -y install libxcb
yum -y install libxcb.i686
yum -y install libXi
yum -y install libXi.i686
yum -y install make
yum -y install sysstat
yum -y install unixODBC
yum -y install unixODBC-devel
yum -y install xdpyinfo
yum -y install xorg-x11-apps
yum -y install pdksh
yum -y install libicu
yum -y remove elfutils-libelf-devel.i386
yum -y install ntp
yum -y install sg3-utils
yum -y install xauth
yum -y install xorg-x11-fonts*
yum -y install unzip
yum -y install nfs-utils
chkconfig sendmail off


root@vmem1:/var/lib/lxc/lxcora9/rootfs/root#
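One yum invocation per package repeats dependency resolution dozens of times; the same installs can go through a single transaction. The sketch below only echoes the command rather than running it, and the package list is abbreviated (fill in the full list from the script above):

```shell
# Build one yum transaction instead of ~40 separate ones (echo'd, not run).
pkgs="binutils compat-libcap1 gcc gcc-c++ glibc glibc.i686 ksh \
libaio libaio-devel libstdc++ libstdc++-devel make sysstat unixODBC \
unixODBC-devel xorg-x11-apps unzip nfs-utils ntp"
cmd="yum -y install $pkgs"
echo "$cmd"
```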

Chkconfig "sendmail" off

Sendmail is not needed and will cause the LXC container to boot slowly.  Disable it at boot as shown below.

root@lxcora10 # chkconfig sendmail off

Install the /etc/sysctl.conf File Required for Oracle

Install the following /etc/sysctl.conf file into the container, as required by Oracle.  Everything below the "# Oracle" comment is the part added to the end of the container's stock /etc/sysctl.conf file.

# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 5368709120

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# Oracle

kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576

net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Edit the Container Configuration File

The container configuration file needs to be edited as shown below to support the addition of block devices and other features needed for Oracle ASM.  The last five lines of the file are the ones added to the default configuration file for the LXC container.


root@vmem1:/var/lib/lxc/lxcora10# pwd
/var/lib/lxc/lxcora10
root@vmem1:/var/lib/lxc/lxcora10# vi config
root@vmem1:/var/lib/lxc/lxcora10# cat config
# Template used to create this container: /usr/share/lxc/templates/lxc-oracle
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:fd:a0:55
lxc.rootfs = /var/lib/lxc/lxcora10/rootfs
# Common configuration
lxc.include = /usr/share/lxc/config/oracle.common.conf
# Container configuration for Oracle Linux 6.5
lxc.arch = x86_64
lxc.utsname = lxcora10
lxc.cap.drop = sys_resource
lxc.cap.drop = setfcap setpcap
# Networking
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.hwaddr = fe:7e:9e:e5:d6:7c
lxc.cgroup.devices.allow = c 10:236 rwm
lxc.cgroup.devices.allow = b 252:* rwm
lxc.mount.entry = /dev/mapper /var/lib/lxc/lxcora10/rootfs/dev/mapper none defaults,bind,create=dir 0 0
lxc.mount.auto = proc sys cgroup
lxc.mount.auto = proc:rw sys:rw cgroup-full:rw


root@vmem1:/var/lib/lxc/lxcora10#
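The `lxc.cgroup.devices.allow = b 252:* rwm` line whitelists the device-mapper block major number, which is 252 on this host but can differ. It can be derived from `ls -l` on the device; the sketch below parses a captured sample line, since /dev/mapper/asm_disk1 only exists on the host described here:

```shell
# Parse the block-device major number out of an ls -l sample line.
line='brw-rw---- 1 root disk 252,  1 Sep 18 17:55 /dev/mapper/asm_disk1'
major=$(printf '%s\n' "$line" | awk '{print $5}' | tr -d ',')
allow="lxc.cgroup.devices.allow = b ${major}:* rwm"
echo "$allow"
```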

Install device-mapper-multipath on Host Server

Device-mapper-multipath is used to provide stable names in /dev/mapper for the LUN partitions, whether they come from a USB stick or an ExpressCard.  It is also possible to carve some small partitions out of the end of the Ubuntu disk, which is more practical if your Ubuntu install used UEFI/GPT, since no extended logical partition is then necessary.  Simplest is probably a USB stick or ExpressCard.
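For reference, the relevant piece of /etc/multipath.conf maps each partition's WWID (from `multipath -ll` or `scsi_id`) to a friendly alias, which is what produces the /dev/mapper/asm_diskN names used below. A sketch of one such stanza; the WWID shown is a placeholder, not a real value from this host:

```
# /etc/multipath.conf (fragment) -- aliases give stable /dev/mapper names
multipaths {
    multipath {
        wwid  3600508b400105e210000900000490000   # placeholder WWID
        alias asm_disk1
    }
}
```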

Prepare Storage for ASM Diskgroups

There are various ways to do this.  For this example a Wintec FileMate 128 GB all-flash ExpressCard is used, partitioned into 17 x 2 GiB GPT partitions as shown below.  I recommend against using loopback devices.  A USB stick also works as "physical storage": partition it into "LUNs" and then gather the information required to add them to /etc/multipath.conf.

gstanden@vmem1:~$ ls -lrt /dev/mapper/asm*
brw-rw---- 1 root disk 252,  6 Sep 18 17:55 /dev/mapper/asm_disk
brw-rw---- 1 root disk 252,  9 Sep 18 17:55 /dev/mapper/asm_disk6
brw-rw---- 1 root disk 252,  8 Sep 18 17:55 /dev/mapper/asm_disk5
brw-rw---- 1 root disk 252,  7 Sep 18 17:55 /dev/mapper/asm_disk4
brw-rw---- 1 root disk 252,  5 Sep 18 17:55 /dev/mapper/asm_disk3
brw-rw---- 1 root disk 252,  2 Sep 18 17:55 /dev/mapper/asm_disk2
brw-rw---- 1 root disk 252,  1 Sep 18 17:55 /dev/mapper/asm_disk1
brw-rw---- 1 root disk 252, 12 Sep 18 17:55 /dev/mapper/asm_disk9
brw-rw---- 1 root disk 252, 11 Sep 18 17:55 /dev/mapper/asm_disk8
brw-rw---- 1 root disk 252, 10 Sep 18 17:55 /dev/mapper/asm_disk7
brw-rw---- 1 root disk 252, 16 Sep 18 17:55 /dev/mapper/asm_disk13
brw-rw---- 1 root disk 252, 15 Sep 18 17:55 /dev/mapper/asm_disk12
brw-rw---- 1 root disk 252, 14 Sep 18 17:55 /dev/mapper/asm_disk11
brw-rw---- 1 root disk 252, 13 Sep 18 17:55 /dev/mapper/asm_disk10
brw-rw---- 1 root disk 252, 20 Sep 18 17:55 /dev/mapper/asm_disk17
brw-rw---- 1 root disk 252, 19 Sep 18 17:55 /dev/mapper/asm_disk16
brw-rw---- 1 root disk 252, 18 Sep 18 17:55 /dev/mapper/asm_disk15
brw-rw---- 1 root disk 252, 17 Sep 18 17:55 /dev/mapper/asm_disk14
gstanden@vmem1:~$ sudo gdisk -l /dev/sdb
[sudo] password for gstanden:
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 250069680 sectors, 119.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 9A4E5E70-D053-49C1-8217-4CECC833529F
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 250069646
Partitions will be aligned on 2048-sector boundaries
Total free space is 178766445 sectors (85.2 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         4196351   2.0 GiB     8300  Linux filesystem
   2         4196352         8390655   2.0 GiB     8300  Linux filesystem
   3         8390656        12584959   2.0 GiB     8300  Linux filesystem
   4        12584960        16779263   2.0 GiB     8300  Linux filesystem
   5        16779264        20973567   2.0 GiB     8300  Linux filesystem
   6        20973568        25167871   2.0 GiB     8300  Linux filesystem
   7        25167872        29362175   2.0 GiB     8300  Linux filesystem
   8        29362176        33556479   2.0 GiB     8300  Linux filesystem
   9        33556480        37750783   2.0 GiB     8300  Linux filesystem
  10        37750784        41945087   2.0 GiB     8300  Linux filesystem
  11        41945088        46139391   2.0 GiB     8300  Linux filesystem
  12        46139392        50333695   2.0 GiB     8300  Linux filesystem
  13        50333696        54527999   2.0 GiB     8300  Linux filesystem
  14        54528000        58722303   2.0 GiB     8300  Linux filesystem
  15        58722304        62916607   2.0 GiB     8300  Linux filesystem
  16        62916608        67110911   2.0 GiB     8300  Linux filesystem
  17        67110912        71305215   2.0 GiB     8300  Linux filesystem
gstanden@vmem1:~$
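The 17 equal partitions above can be scripted with sgdisk instead of being created one at a time in interactive gdisk. Since partitioning is destructive to the target device, the sketch below only echoes the commands; /dev/sdb is an assumption matching this host, so substitute your own USB stick or ExpressCard device:

```shell
# Emit (do not run) sgdisk commands for 17 x 2 GiB partitions on /dev/sdb.
cmds=$(for i in $(seq 1 17); do
  echo "sgdisk -n ${i}:0:+2G -t ${i}:8300 /dev/sdb"
done)
printf '%s\n' "$cmds"
```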

Add Command to /etc/rc.local to Set Ownership of ASM Block Device

Inside the container, add chown and chmod commands for the ASM block device to /etc/rc.local, so the device gets the required ownership and mode at every boot, as shown below.

[root@lxcora01 ~]# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

chown grid:asmadmin /dev/mapper/asm_disk1
chmod 0660 /dev/mapper/asm_disk1


[root@lxcora01 ~]#
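If all 17 partitions will eventually back ASM diskgroups, the pair of lines in rc.local generalizes to a loop. Echoed here rather than executed, since the /dev/mapper/asm_disk* devices only exist in the container described above:

```shell
# Emit chown/chmod lines for asm_disk1..asm_disk17 (echo'd, not run).
out=$(for i in $(seq 1 17); do
  echo "chown grid:asmadmin /dev/mapper/asm_disk$i"
  echo "chmod 0660 /dev/mapper/asm_disk$i"
done)
printf '%s\n' "$out"
```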

Restart the LXC Container

Restart the container as shown below so that the changes in the container configuration take effect.

root@vmem1:/var/lib/lxc/lxcora10# ssh root@lxcora10
The authenticity of host 'lxcora10 (10.0.3.110)' can't be established.
RSA key fingerprint is fc:4e:2d:4a:54:f5:f2:11:7f:d8:53:6c:a5:e5:1b:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'lxcora10,10.0.3.110' (RSA) to the list of known hosts.
root@lxcora10's password:
Last login: Thu Sep 18 01:02:27 2014 from 10.0.3.1
[root@lxcora10 ~]# reboot

Broadcast message from root@lxcora10
    (/dev/pts/1) at 1:17 ...

The system is going down for reboot NOW!
[root@lxcora10 ~]# Connection to lxcora10 closed by remote host.
Connection to lxcora10 closed.
root@vmem1:/var/lib/lxc/lxcora10#

Run the Command to Verify Oracle Kernel Parameter Settings

After the reboot, /proc/sys is mounted read-write, so the settings in /etc/sysctl.conf can be applied.  First run "sysctl -p" to load the file, then "sysctl -a" to verify the values, as shown below.  Inside a container some keys cannot be set and report errors, but note that all of the settings required for the Oracle ASM EE install are applied successfully; ensure your output is similar to the below.

[root@lxcora01 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
error: permission denied on key 'kernel.sysrq'
error: permission denied on key 'kernel.core_uses_pid'
error: "net.ipv4.tcp_syncookies" is an unknown key
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
error: permission denied on key 'fs.file-max'
error: permission denied on key 'fs.aio-max-nr'
net.ipv4.ip_local_port_range = 9000 65500
error: "net.core.rmem_default" is an unknown key
error: "net.core.rmem_max" is an unknown key
error: "net.core.wmem_default" is an unknown key
error: "net.core.wmem_max" is an unknown key
[root@lxcora01 ~]#

[root@lxcora01 ~]# sysctl -a | egrep 'shmmni|file-max|aio-max-nr|shmmax|shmall|sem|local_port_range'

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250    32000    100    128
kernel.sem_next_id = -1
kernel.shmall = 4294967296
kernel.shmmax = 68719476736
kernel.shmmni = 4096
net.ipv4.ip_local_port_range = 9000    65500

[root@lxcora01 ~]#
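The same verification can be wrapped in a small check script that compares live values against the Oracle minimums. A sketch, with the expected values hard-coded from the sysctl.conf above; keys that cannot be read (as happens for some keys inside a container) just report UNKNOWN:

```shell
# Compare selected live sysctl values against the Oracle-required minimums.
check() {  # usage: check <key> <minimum>
  actual=$(sysctl -n "$1" 2>/dev/null || true)
  if [ -z "$actual" ]; then echo "$1 UNKNOWN"
  elif [ "$actual" -ge "$2" ] 2>/dev/null; then echo "$1 OK ($actual)"
  else echo "$1 LOW ($actual < $2)"
  fi
}
check fs.file-max   6815744
check fs.aio-max-nr 1048576
check kernel.shmmni 4096
```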

Stage Oracle 12c EE ASM Installation Media

Stage the installation media for the Oracle Grid Infrastructure install in /home/grid as shown below.

gstanden@vmem1:~$ sudo su -
root@vmem1:~# cd /var/lib/lxc/lxcora9
root@vmem1:/var/lib/lxc/lxcora9# cd rootfs/home/grid
root@vmem1:/var/lib/lxc/lxcora9/rootfs/home/grid# ls -lrt
total 2337900
drwxr-xr-x 7 1099 gstanden       4096 Jul 11 05:36 grid
-rwxrwxrwx 1 1099 gstanden 1747021093 Aug 10 17:23 p17694377_121020_Linux-x86-64_3of8.zip
-rwxrwxrwx 1 1099 gstanden  646969279 Aug 10 17:35 p17694377_121020_Linux-x86-64_4of8.zip
drwxr-xr-x 2 1099 gstanden       4096 Sep 16 22:57 oraInventory
root@vmem1:/var/lib/lxc/lxcora9/rootfs/home/grid# scp p*.zip ../../../../lxcora10/rootfs/home/grid/.
root@vmem1:/var/lib/lxc/lxcora9/rootfs/home/grid#

Set Permissions on Install Media

As the root user, log in to the LXC container and set the ownership of the install zip archives to "grid:oinstall" as shown below.

gstanden@vmem1:~$ ssh root@lxcora10
root@lxcora10's password:
Last login: Thu Sep 18 01:29:19 2014 from 10.0.3.1
[root@lxcora10 ~]# cd /home/grid
[root@lxcora10 grid]# ls -lrt
total 2337892
-rwxr-xr-x 1 root root 1747021093 Sep 18 01:34 p17694377_121020_Linux-x86-64_3of8.zip
-rwxr-xr-x 1 root root  646969279 Sep 18 01:34 p17694377_121020_Linux-x86-64_4of8.zip
[root@lxcora10 grid]# chown grid:oinstall p*.zip
[root@lxcora10 grid]# ls -lrt
total 2337892
-rwxr-xr-x 1 grid oinstall 1747021093 Sep 18 01:34 p17694377_121020_Linux-x86-64_3of8.zip
-rwxr-xr-x 1 grid oinstall  646969279 Sep 18 01:34 p17694377_121020_Linux-x86-64_4of8.zip
[root@lxcora10 grid]#

Unzip Install Media

Log in as the "grid" user, unzip the install media, and begin the install as shown below.

gstanden@vmem1:~$ ssh -Y -C grid@lxcora10
grid@lxcora10's password:
/usr/bin/xauth:  creating new authority file /home/grid/.Xauthority
[grid@lxcora10 ~]$ xclock
[grid@lxcora10 ~]$ cd grid
-bash: cd: grid: No such file or directory
[grid@lxcora10 ~]$ ls -lrt
total 2337892
-rwxr-xr-x 1 grid oinstall 1747021093 Sep 18 01:34 p17694377_121020_Linux-x86-64_3of8.zip
-rwxr-xr-x 1 grid oinstall  646969279 Sep 18 01:34 p17694377_121020_Linux-x86-64_4of8.zip
[grid@lxcora10 ~]$ id
uid=1099(grid) gid=1000(oinstall) groups=1000(oinstall),1100(asmadmin),1200(dba),1300(asmdba),1301(asmoper)
[grid@lxcora10 ~]$ unzip p17694377_121020_Linux-x86-64_3of8.zip

[grid@lxcora10 ~]$ unzip p17694377_121020_Linux-x86-64_4of8.zip

Install the Oracle "cvuqdisk" rpm Package

Install the package as shown below while connected as the "root" user.

[root@lxcora01 ~]# cd /home/grid/grid/rpm
[root@lxcora01 rpm]# rpm -Uvh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
[root@lxcora01 rpm]#

Setup Public Key and "no-password" SSH for "grid" Linux user

Set this up as shown below.

[grid@lxcora01 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Created directory '/home/grid/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
3e:88:42:64:fa:12:51:5f:58:89:7e:42:a3:16:64:7c grid@lxcora01
The key's randomart image is:
+--[ RSA 2048]----+
| o+  +o.         |
| oo.E..          |
|. o*..           |
| =o o .          |
|o..  o  S        |
| +   . o         |
|. o . . o        |
| . .     .       |
|                 |
+-----------------+
[grid@lxcora01 ~]$ cd .ssh
[grid@lxcora01 .ssh]$ cp -p id_rsa.pub authorized_keys
[grid@lxcora01 .ssh]$ cd
[grid@lxcora01 ~]$ ssh lxcora01
The authenticity of host 'lxcora01 (127.0.0.1)' can't be established.
RSA key fingerprint is 55:bd:d9:85:c1:31:0d:8a:2b:76:e8:dd:f9:0a:55:02.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'lxcora01' (RSA) to the list of known hosts.
[grid@lxcora01 ~]$ exit
logout
Connection to lxcora01 closed.
[grid@lxcora01 ~]$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is 55:bd:d9:85:c1:31:0d:8a:2b:76:e8:dd:f9:0a:55:02.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Thu Sep 18 20:09:47 2014 from localhost
[grid@lxcora01 ~]$ exit
logout
Connection to localhost closed.
[grid@lxcora01 ~]$ ssh 127.0.0.1
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
RSA key fingerprint is 55:bd:d9:85:c1:31:0d:8a:2b:76:e8:dd:f9:0a:55:02.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '127.0.0.1' (RSA) to the list of known hosts.
Last login: Thu Sep 18 20:09:54 2014 from localhost
[grid@lxcora01 ~]$ exit
logout
Connection to 127.0.0.1 closed.
[grid@lxcora01 ~]$ ssh 10.0.3.144
The authenticity of host '10.0.3.144 (10.0.3.144)' can't be established.
RSA key fingerprint is 55:bd:d9:85:c1:31:0d:8a:2b:76:e8:dd:f9:0a:55:02.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.3.144' (RSA) to the list of known hosts.
Last login: Thu Sep 18 20:10:02 2014 from localhost
[grid@lxcora01 ~]$
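
The four manual test logins above can be collapsed into one hedged loop (hostnames and the 10.0.3.144 address come from this walkthrough; `BatchMode=yes` makes ssh fail instead of prompting, so any remaining password prompt shows up as FAILED):

```shell
# Hedged sketch: verify passwordless SSH works for every name the installer
# may use. BatchMode=yes forbids password prompts; ConnectTimeout keeps
# unreachable names from hanging the loop.
for h in lxcora01 localhost 127.0.0.1 10.0.3.144; do
    if ssh -o BatchMode=yes -o ConnectTimeout=3 -o StrictHostKeyChecking=no "$h" true 2>/dev/null; then
        echo "$h: passwordless SSH OK"
    else
        echo "$h: FAILED"
    fi
done
```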

Run "cluvfy" the Oracle Cluster Verification Utility

Run the "runcluvfy.sh" script as shown below; the output should be similar. Several checks fail here, but most are artifacts of the LXC environment (for example, /boot and some kernel parameters are not visible inside the container) and are ignored later during the install.

[grid@lxcora01 grid]$ ./runcluvfy.sh stage -pre crsinst -n lxcora01 -r 12.1 -osdba oinstall -asm -asmgrp asmadmin -asmdev /dev/mapper/asm_disk1

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "lxcora01"


Checking user equivalence...
User equivalence check passed for user "grid"
Package existence check passed for "cvuqdisk"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "10.0.3.0" with node(s) lxcora01
TCP connectivity check passed for subnet "10.0.3.0"


Interfaces found on subnet "10.0.3.0" that are likely candidates for VIP are:
lxcora01 eth0:10.0.3.96

WARNING:
Could not find a suitable set of interfaces for the private interconnect

Node connectivity check passed

Checking multicast communication...

Checking subnet "10.0.3.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.0.3.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "lxcora01:/usr,lxcora01:/var,lxcora01:/etc,lxcora01:/sbin,lxcora01:/tmp"
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "asmadmin"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "asmadmin" passed
Run level check passed
Hard limits check failed for "maximum open file descriptors"
Check failed on nodes:
    lxcora01
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check failed for "maximum user processes"
Check failed on nodes:
    lxcora01
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"

PRVF-7544 : Check cannot be performed for kernel parameter "rmem_default" on node "lxcora01"
PRVG-2044 : Command "/sbin/sysctl -a | grep rmem_default[[:space:]]*=" failed on node "lxcora01" and produced no output.

Kernel parameter check failed for "rmem_default"
Check failed on nodes:
    lxcora01

PRVF-7544 : Check cannot be performed for kernel parameter "rmem_max" on node "lxcora01"
PRVG-2044 : Command "/sbin/sysctl -a | grep rmem_max[[:space:]]*=" failed on node "lxcora01" and produced no output.

Kernel parameter check failed for "rmem_max"
Check failed on nodes:
    lxcora01

PRVF-7544 : Check cannot be performed for kernel parameter "wmem_default" on node "lxcora01"
PRVG-2044 : Command "/sbin/sysctl -a | grep wmem_default[[:space:]]*=" failed on node "lxcora01" and produced no output.

Kernel parameter check failed for "wmem_default"
Check failed on nodes:
    lxcora01

PRVF-7544 : Check cannot be performed for kernel parameter "wmem_max" on node "lxcora01"
PRVG-2044 : Command "/sbin/sysctl -a | grep wmem_max[[:space:]]*=" failed on node "lxcora01" and produced no output.

Kernel parameter check failed for "wmem_max"
Check failed on nodes:
    lxcora01

Kernel parameter check passed for "aio-max-nr"

PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "lxcora01"

Kernel parameter check failed for "panic_on_oops"
Check failed on nodes:
    lxcora01

Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"

Checking availability of ports "6200,6100" required for component "Oracle Notification Service (ONS)"
Port availability check passed for ports "6200,6100"

Checking availability of ports "42424" required for component "Oracle Cluster Synchronization Services (CSSD)"
Port availability check passed for ports "42424"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed


Checking Devices for ASM...
Package existence check passed for "cvuqdisk"

Checking for shared devices...

PRVG-5150 : could not determine if path /dev/mapper/asm_disk3 is a valid path on all nodes
PRVG-5150 : could not determine if path /dev/mapper/asm_disk2 is a valid path on all nodes
PRVG-5150 : could not determine if path /dev/mapper/asm_disk1 is a valid path on all nodes


Checking consistency of device owner across all nodes...
Consistency check of device owner for "/dev/mapper/asm_disk3" PASSED
Consistency check of device owner for "/dev/mapper/asm_disk2" PASSED
Consistency check of device owner for "/dev/mapper/asm_disk1" PASSED


Checking consistency of device group across all nodes...
Consistency check of device group for "/dev/mapper/asm_disk3" PASSED
Consistency check of device group for "/dev/mapper/asm_disk2" PASSED
Consistency check of device group for "/dev/mapper/asm_disk1" PASSED


Checking consistency of device permissions across all nodes...
Consistency check of device permissions for "/dev/mapper/asm_disk3" PASSED
Consistency check of device permissions for "/dev/mapper/asm_disk2" PASSED
Consistency check of device permissions for "/dev/mapper/asm_disk1" PASSED


Checking consistency of device size across all nodes...
Consistency check of device size for "/dev/mapper/asm_disk3" PASSED
Consistency check of device size for "/dev/mapper/asm_disk2" PASSED
Consistency check of device size for "/dev/mapper/asm_disk1" PASSED

Checking that ASM devices are ASM Filter Driver capable.
ASM Filter Driver compatibility check passed.
Devices check for ASM failed


Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP configuration file "/etc/ntp.conf" existence check passed
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
PRVF-5410 : Check of common NTP Time Server failed
PRVF-5416 : Query of NTP daemon failed on all nodes
Clock synchronization check using Network Time Protocol(NTP) failed


Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes

"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
The DNS response time for an unreachable node is within acceptable limit on all nodes

Check for integrity of file "/etc/resolv.conf" failed

Time zone consistency check passed

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"

Starting check for /dev/shm mounted as temporary file system ...

ERROR:

PRVE-0420 : /dev/shm is not found mounted on node ""
PRVE-0420 : /dev/shm is not found mounted on node ""

Check for /dev/shm mounted as temporary file system failed

Starting check for /boot mount ...

ERROR:

PRVE-10073 : Required /boot data is not available on node "lxcora01"
PRVE-10073 : Required /boot data is not available on node "lxcora01"

Check for /boot mount failed

Starting check for zeroconf check ...

ERROR:

PRVE-10077 : NOZEROCONF parameter was not  specified or was not set to 'yes' in file "/etc/sysconfig/network" on node "lxcora01"

Check for zeroconf check failed

Pre-check for cluster services setup was unsuccessful on all the nodes.
[grid@lxcora01 grid]$
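
The rmem/wmem failures above come from cluvfy grepping `/sbin/sysctl -a`, which does not list the net.core keys inside this container. A hedged way to see which of the flagged parameters are actually visible (where a value is shown, it is governed by the host's /etc/sysctl.conf, not the container):

```shell
# Hedged sketch: probe the kernel parameters cluvfy could not read.
# Inside an LXC container these keys may simply not be exposed.
for p in net.core.rmem_default net.core.rmem_max \
         net.core.wmem_default net.core.wmem_max kernel.panic_on_oops; do
    if v=$(sysctl -n "$p" 2>/dev/null); then
        echo "$p = $v"
    else
        echo "$p: not visible in this environment"
    fi
done
```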

Install Oracle Enterprise Edition ASM 12c

Install Oracle EE ASM 12c as shown below using the GUI installer.

gstanden@vmem1:~$ ssh -Y -C grid@lxcora10
grid@lxcora10's password:
[grid@lxcora10 ~]$ cd grid
[grid@lxcora10 grid]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 415 MB.   Actual 432514 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 32839 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-09-18_02-17-10AM. Please wait ...[grid@lxcora10 grid]$

Login as Linux User "grid" and Test "xclock"

gstanden@vmem1:~$ ssh -Y -C grid@lxcora10
grid@lxcora10's password:
Last login: Thu Sep 18 02:14:52 2014 from 10.0.3.1
[grid@lxcora10 ~]$ xclock
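
If xclock fails, a quick hedged pre-flight check is to confirm the X display was actually forwarded before launching the GUI installer:

```shell
# Hedged check: runInstaller needs a working X display, which "ssh -Y"
# should have forwarded into the session as $DISPLAY.
if [ -n "$DISPLAY" ]; then
    echo "DISPLAY is set to $DISPLAY - GUI tools should work"
else
    echo "DISPLAY is not set - reconnect with: ssh -Y -C grid@lxcora10"
fi
```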

Execute Oracle GUI runInstaller

[grid@lxcora10 ~]$ ls -lrt
total 2337900
drwxr-xr-x 7 grid oinstall       4096 Jul 11 06:36 grid
-rwxr-xr-x 1 grid oinstall 1747021093 Sep 18 01:34 p17694377_121020_Linux-x86-64_3of8.zip
-rwxr-xr-x 1 grid oinstall  646969279 Sep 18 01:34 p17694377_121020_Linux-x86-64_4of8.zip
drwxr-xr-x 2 grid oinstall       4096 Sep 18 02:17 oraInventory
[grid@lxcora10 ~]$ cd grid
[grid@lxcora10 grid]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 415 MB.   Actual 428735 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 32843 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-09-18_09-24-52AM. Please wait ...[grid@lxcora10 grid]$

Use the "Browse..." button to change the location of "Oracle base" as shown below.

Do NOT check the box "Automatically run configuration scripts". The scripts will be run manually in a terminal window as root during the install.

Ignore the warnings shown above. The list of warnings should contain only those shown; if there are any additional warnings, correct them and use "Check Again" to recheck. As long as only the above warnings are shown, check "Ignore All" and then click "Next".

Answer "Yes" to the warning dialog box about ignoring some of the prerequisites as shown above.

Optionally save the "response file", which can be used later for non-GUI (e.g. "silent") installations. Review the install summary and click "Install" to begin the installation. Monitor the progress of the installation windows as shown below.

At the "Setup Oracle Base" step the "Execute Configuration Scripts" window opens.


Click on the "Script Location" of Script Number 1 to highlight it and use <ctrl>+c to copy it to the clipboard (or simply read the script path from the window). Open a new terminal window into the lxcora10 LXC container and run the scripts in order: first Script Number 1, then Script Number 2. They MUST be run sequentially, not in parallel: Script Number 1 must complete successfully before Script Number 2 is run, and both must complete successfully BEFORE proceeding with the GUI install.

Successful Output of Configuration Script 1

Be sure to run as the "root" user while logged on to the LXC container as shown below.

[root@lxcora10 ~]# /u00/app/oraInventory/orainstRoot.sh
Changing permissions of /u00/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u00/app/oraInventory to oinstall.
The execution of the script is complete.
[root@lxcora10 ~]#

Successful Output of Configuration Script 2

[root@lxcora10 ~]# /u00/app/grid/product/12.1.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u00/app/grid/product/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/product/12.1.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node lxcora10 successfully pinned.
2014/09/18 10:08:23 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'


lxcora10     2014/09/18 10:08:56     /u00/app/grid/product/12.1.0/grid/cdata/lxcora10/backup_20140918_100856.olr     0    
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lxcora10'
CRS-2673: Attempting to stop 'ora.evmd' on 'lxcora10'
CRS-2677: Stop of 'ora.evmd' on 'lxcora10' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lxcora10' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/09/18 10:09:13 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

[root@lxcora10 ~]#

Click "OK" on the "Execute Configuration Scripts" window once both scripts have completed successfully to continue the install. The Oracle Configuration Assistant programs then run as shown below. All Configuration Assistants should complete with a status of "Succeeded" and a green checkmark next to each; if any fail, attempt to diagnose and fix the issue, then click the "Retry" button.


If all the Configuration Assistants are successful, the "Oracle Cluster Verification Utility" step will flash by almost instantaneously and the "Successful Install" screen will be displayed as shown below.

Check that Expected Grid Infrastructure Processes Are Running

Check the running "grid" Linux user processes as shown below. The output should be similar to that shown.

[root@lxcora10 ~]# ps -ef | grep grid
root       440   295  0 09:21 ?        00:00:00 sshd: grid [priv]
grid       442   440  0 09:21 ?        00:00:24 sshd: grid@pts/0
grid       443   442  0 09:21 pts/0    00:00:00 -bash
grid      4895     1  1 10:08 ?        00:00:05 /u00/app/grid/product/12.1.0/grid/bin/ohasd.bin reboot
grid      4965     1  0 10:09 ?        00:00:04 /u00/app/grid/product/12.1.0/grid/bin/oraagent.bin
grid      4977     1  0 10:09 ?        00:00:02 /u00/app/grid/product/12.1.0/grid/bin/evmd.bin
grid      4994  4977  0 10:09 ?        00:00:03 /u00/app/grid/product/12.1.0/grid/bin/evmlogger.bin -o /u00/app/grid/product/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l / u00/app/grid/product/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.log
grid      5202     1  0 10:12 ?        00:00:00 /u00/app/grid/product/12.1.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid      5353     1  0 10:12 ?        00:00:01 /u00/app/grid/product/12.1.0/grid/bin/cssdagent
grid      5376     1  0 10:12 ?        00:00:02 /u00/app/grid/product/12.1.0/grid/bin/ocssd.bin
grid      5486     1  0 10:13 ?        00:00:00 asm_pmon_+ASM
grid      5488     1  0 10:13 ?        00:00:00 asm_psp0_+ASM
grid      5490     1  0 10:13 ?        00:00:00 asm_vktm_+ASM
grid      5494     1  0 10:13 ?        00:00:00 asm_gen0_+ASM
grid      5496     1  0 10:13 ?        00:00:00 asm_mman_+ASM
grid      5500     1  0 10:13 ?        00:00:00 asm_diag_+ASM
grid      5502     1  0 10:13 ?        00:00:00 asm_dia0_+ASM
grid      5504     1  0 10:13 ?        00:00:00 asm_dbw0_+ASM
grid      5506     1  0 10:13 ?        00:00:00 asm_lgwr_+ASM
grid      5508     1  0 10:13 ?        00:00:00 asm_ckpt_+ASM
grid      5510     1  0 10:13 ?        00:00:00 asm_smon_+ASM
grid      5512     1  0 10:13 ?        00:00:00 asm_lreg_+ASM
grid      5514     1  0 10:13 ?        00:00:00 asm_pxmn_+ASM
grid      5516     1  0 10:13 ?        00:00:00 asm_rbal_+ASM
grid      5518     1  0 10:13 ?        00:00:00 asm_gmon_+ASM
grid      5520     1  0 10:13 ?        00:00:00 asm_mmon_+ASM
grid      5522     1  0 10:13 ?        00:00:00 asm_mmnl_+ASM
grid      5535     1  0 10:14 ?        00:00:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid      5700     1  0 10:14 ?        00:00:00 asm_ars0_+ASM
root      5865   667  0 10:18 pts/1    00:00:00 grep grid
[root@lxcora10 ~]#
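
Rather than eyeballing the ps output, the ASM background processes can be counted programmatically. This is a hedged sketch (the `asm_proc_count` helper name is made up for illustration); on the freshly installed container above, roughly 17-18 such processes are expected, and 0 means ASM is down.

```shell
# Hedged sketch: count the ASM background processes (asm_*_+ASM) owned by
# the "grid" user instead of reading the full ps listing.
asm_proc_count() {
    ps -eo user,comm | awk '$1 == "grid" && $2 ~ /^asm_/ { n++ } END { print n + 0 }'
}
echo "ASM background processes: $(asm_proc_count)"
```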

Modify $PATH for root in .bashrc File

Modify the $PATH variable for the "root" user so that root can run the "crsctl" executable to check Oracle GI status, as shown below.

[root@lxcora10 ~]# pwd
/root
[root@lxcora10 ~]# vi .bashrc
[root@lxcora10 ~]# cat .bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

export PATH=$PATH:/u00/app/grid/product/12.1.0/grid/bin
[root@lxcora10 ~]# . .bashrc
[root@lxcora10 ~]#
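
One caveat with the plain `export PATH=$PATH:...` line above: it appends the grid bin directory again every time .bashrc is sourced. A hedged, idempotent variant:

```shell
# Hedged sketch: append the grid bin directory to PATH only if it is not
# already present, so repeated sourcing of .bashrc does not grow PATH.
GRID_BIN=/u00/app/grid/product/12.1.0/grid/bin
case ":$PATH:" in
    *":$GRID_BIN:"*) ;;                   # already on PATH, nothing to do
    *) PATH="$PATH:$GRID_BIN" ;;
esac
export PATH
```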

Add Environment Variables to .bashrc for "grid" Linux User

Add environment variables for the "grid" Linux user as shown below.

[grid@lxcora01 ~]$ vi .bashrc
[grid@lxcora01 ~]$ . .bashrc
[grid@lxcora01 ~]$ cat .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# User specific aliases and functions

umask 022
export ORACLE_HOME=/u00/app/grid/product/12.1.0/grid
export GRID_HOME=/u00/app/grid/product/12.1.0/grid
export ORACLE_SID=+ASM
export PATH=$PATH:$ORACLE_HOME/bin
[grid@lxcora01 ~]$
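
The effect of the `umask 022` line can be demonstrated in isolation (a sketch; `stat -c` is the GNU coreutils form):

```shell
# umask 022 clears the group/world write bits on newly created files and
# directories: files come out 644 and directories 755.
umask 022
tmp=$(mktemp -d)
touch "$tmp/file"
mkdir "$tmp/dir"
stat -c '%a %n' "$tmp/file" "$tmp/dir"
rm -rf "$tmp"
```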

Check Status of Oracle Grid Infrastructure (GI)

Check the status of Oracle GI as shown below. The "ora.ons" and "ora.diskmon" resources will both have status "OFFLINE OFFLINE" by default. All others should show status "ONLINE ONLINE" as shown below.

[root@lxcora10 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       lxcora10                 STABLE
ora.SYSTEMDG.dg
               ONLINE  ONLINE       lxcora10                 STABLE
ora.asm
               ONLINE  ONLINE       lxcora10                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      lxcora10                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       lxcora10                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       lxcora10                 STABLE
--------------------------------------------------------------------------------
[root@lxcora10 ~]#
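
The "only ora.ons and ora.diskmon may be OFFLINE" rule can be turned into a small automated check. This is a hedged sketch (the `flag_unexpected_offline` helper name is made up for illustration) that parses `crsctl stat res -t` output and prints nothing when the stack is healthy; the sample here is a fragment of the output above.

```shell
# Hedged sketch: scan `crsctl stat res -t` output and flag any resource that
# is OFFLINE other than ora.ons and ora.diskmon, which are OFFLINE by
# default on Oracle Restart.
flag_unexpected_offline() {
    awk '
        /^ora\./             { res = $1; next }
        /OFFLINE[ ]+OFFLINE/ {
            if (res != "ora.ons" && res != "ora.diskmon")
                print "unexpected OFFLINE resource: " res
        }
    '
}
# Sample fragment from the healthy output above (prints nothing):
flag_unexpected_offline <<'EOF'
ora.asm
               ONLINE  ONLINE       lxcora10                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      lxcora10                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
EOF
```

Live output can be fed through the same function: `crsctl stat res -t | flag_unexpected_offline`.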

Shutdown Container and Reboot Host

Shut down the container and reboot the host to verify that the system restarts successfully after a reboot of the host server OS.

gstanden@vmem1:~$ sudo lxc-stop -n lxcora10
[sudo] password for gstanden:
gstanden@vmem1:~$ sudo lxc-info -n lxcora10
Name:           lxcora10
State:          STOPPED
gstanden@vmem1:~$ 
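
`lxc-stop` can return before the container has fully stopped, so a generic polling helper is handy before rebooting the host. This is a hedged sketch; the `wait_for` name is made up for illustration, and the lxc-info line in the comment reflects this walkthrough's container name.

```shell
# Hypothetical helper: poll a command until it succeeds or a timeout expires.
# Returns 0 on success, 1 on timeout.
wait_for() {
    # usage: wait_for <timeout_seconds> <command...>
    _t=$1; shift
    until "$@"; do
        _t=$((_t - 1))
        [ "$_t" -le 0 ] && return 1
        sleep 1
    done
}
# Illustrative use for this setup (requires the lxc tools):
#   sudo lxc-stop -n lxcora10
#   wait_for 60 sh -c 'sudo lxc-info -n lxcora10 | grep -q STOPPED'
wait_for 3 true && echo "condition met"
```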

After the reboot, verify that the Oracle GI software stack, including the listener and the SYSTEMDG ASM diskgroup, comes up normally, indicating the device is correctly configured, as shown below. The "ora.ons" and "ora.diskmon" resources will have a default status of "OFFLINE OFFLINE". All others should show "ONLINE ONLINE".

[root@lxcora01 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       lxcora01                 STABLE
ora.SYSTEMDG.dg
               ONLINE  ONLINE       lxcora01                 STABLE
ora.asm
               ONLINE  ONLINE       lxcora01                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      lxcora01                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       lxcora01                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       lxcora01                 STABLE
--------------------------------------------------------------------------------
[root@lxcora01 ~]#