Configuring HugePages for Oracle on CentOS 7 Robin Hosts
This page describes configuring Oracle to use HugePages for a Robin Systems Oracle 12c database instance, where the Robin host or agent server runs CentOS Linux release 7.2.1511 (Core) and the Oracle 12c database itself runs in a CentOS release 6.7 (Final) Robin container instance. Good background references on this topic are available here, and a good discussion in the Oracle Database 12c documentation can be reviewed here.
Set Robin Host to Not Use THP
While it is desirable to use HugePages for Oracle, Transparent Huge Pages (THP) are not recommended, so this feature must be set to never on any Robin host server that will host Oracle databases.
Check Current THP Setting of Robin Host
The current THP setting is checked as shown below. The desired value is [never]. If the value shows [always], it must be changed to [never]. In the example below, THP has already been disabled, so the value shows [never].
[root@centos-72a ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@centos-72a ~]#
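The bracketed value can also be pulled out in a script. Below is a small sketch: the `thp_mode` helper name is my own, and it is run here against a sample string rather than the live /sys file so the behavior is easy to follow.

```shell
# Hypothetical helper (thp_mode is not a standard command): extract the
# active THP mode, i.e. the bracketed word, from the contents of
# /sys/kernel/mm/transparent_hugepage/enabled.
thp_mode() {
    echo "$1" | grep -o '\[[a-z]*\]' | tr -d '[]'
}

# On a live host: thp_mode "$(cat /sys/kernel/mm/transparent_hugepage/enabled)"
thp_mode "always madvise [never]"   # prints: never
```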
Change THP Setting of Robin Host
This section assumes that the value found in the previous check was [always], which is the default setting. The best way to change the THP setting of a Robin host on a CentOS 7 system is to add the required parameter to the /etc/default/grub file as shown below. The general approach for this type of change on CentOS 7 is described here.
The required addition to the GRUB_CMDLINE_LINUX setting is the transparent_hugepage=never parameter at the end of the line below. Note that this parameter must be placed within the double quotes (" ") of the existing line.
[root@centos-72a ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
Once this change is made in the /etc/default/grub file, run the following command to regenerate the GRUB configuration and implement the change.
[root@centos-72a ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.28.3.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.28.3.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-ecd485da18684d12a2404fd6ff88a4ce
Found initrd image: /boot/initramfs-0-rescue-ecd485da18684d12a2404fd6ff88a4ce.img
done
[root@centos-72a ~]#
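After the reboot, the flag can also be confirmed directly on the running kernel's command line. A minimal sketch, shown against a sample string; on a live host, pass in `$(cat /proc/cmdline)` instead.

```shell
# Returns success when transparent_hugepage=never is present on the
# kernel command line passed as $1.
has_thp_never() {
    case " $1 " in
        *" transparent_hugepage=never "*) return 0 ;;
        *) return 1 ;;
    esac
}

has_thp_never "ro crashkernel=auto transparent_hugepage=never" && echo "THP disabled at boot"
```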
Reboot Robin Host and Check New THP Setting
After the reboot, verify the new THP setting not only with the command previously shown, which reports whether the configuration is [never] or [always], but also by confirming that no THP are actually in use, using the command shown below. The "AnonHugePages" line displays the amount of THP memory in use; if disabling THP was successful, this value should be zero after the reboot.
[root@centos-72a ~]# cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 1811
HugePages_Free: 1802
HugePages_Rsvd: 62
HugePages_Surp: 0
Hugepagesize: 2048 kB
[root@centos-72a ~]#
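This check can be scripted as well. The awk pattern below assumes /proc/meminfo's usual "AnonHugePages: N kB" layout; a sample snippet is piped in so the example is self-contained.

```shell
# Print the AnonHugePages figure (in kB) from meminfo-style input;
# any non-zero value means transparent huge pages are still in use.
anon_huge_kb() {
    awk '/^AnonHugePages:/ { print $2 }'
}

printf 'AnonHugePages:         0 kB\nHugePages_Total:    1811\n' | anon_huge_kb   # prints: 0
```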
Set Robin Host to Use HugePages
HugePages must be configured on the Robin host itself: the huge page pool is a kernel-level setting, and a container can only use HugePages if its host kernel provides them. References on how to do this are here and here.
Determine Required HugePages
A widely used script for this is described here, in section G.1.2 Configuring HugePages on Linux (item 6) of the Oracle 12c documentation. The script often needs to be modified to recognize the kernel version on which the Oracle database is running. The version used for this example is shown below; note the annotated case line, which must be edited to include the kernel version of the Robin host server.
[root@centos-72a oracle]# pwd
/root/robin/oracle
[root@centos-72a oracle]# cat hugepages_setting.sh
#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script calculates values for all shared memory
# segments available when the script is run, whether or not
# they are Oracle RDBMS shared memory segments.
# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
# Start from 1 page to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*"`
do
    MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
    if [ $MIN_PG -gt 0 ]; then
        NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
    fi
done
# Finish with results
case $KERN in
    '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
        echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
    '2.6' | '3.19' | '4.2') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;; # <-- Must add your kernel version (e.g. 3.10)
    *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
# End
[root@centos-72a oracle]#
In this example the kernel version is a 3.10.x kernel, which is missing from the hugepages_setting.sh file, so on the first try hugepages_setting.sh fails as shown below. The script is therefore edited to add '3.10' to the annotated case line shown above, and then rerun successfully as shown below.
[root@centos-72a oracle]# uname -a
Linux centos-72a 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@centos-72a oracle]# ./hugepages_setting.sh
Unrecognized kernel version 3.10. Exiting.
[root@centos-72a oracle]# vi hugepages_setting.sh
[root@centos-72a oracle]# ./hugepages_setting.sh
Recommended setting: vm.nr_hugepages = 1811
[root@centos-72a oracle]#
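The recommendation can be sanity-checked by hand. Each huge page on this system is 2 MB, and the alert log later on this page reports an SGA of roughly 3618 MB; the small margin added below mirrors the script's own padding and is illustrative only.

```shell
SGA_MB=3618          # SGA size taken from this example's alert log
HUGEPAGE_MB=2        # Hugepagesize: 2048 kB
PAGES=$(( SGA_MB / HUGEPAGE_MB ))
echo "pages for SGA: $PAGES"             # prints: pages for SGA: 1809
echo "with margin:   $(( PAGES + 2 ))"   # 1811, matching the script's recommendation
```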
Set Required HugePages
The procedure for setting the required HugePages is described in references at OracleBase, in the Oracle 12c documentation section G.1.2 Configuring HugePages on Linux (items 3, 8, and 9), and at DPDK; it is shown for this example system below. In this case the commands were run previously, so history output is shown.
[root@centos-72b ~]# history | grep vm
sysctl -w vm.nr_hugepages=1811
echo 'vm.nr_hugepages=1811' > /etc/sysctl.d/hugepages.conf
[root@centos-72b ~]# cat /etc/sysctl.d/hugepages.conf
vm.nr_hugepages=1811
[root@centos-72b ~]#
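Note that the kernel allocates the huge page pool best-effort: if memory is fragmented, fewer pages than requested may be allocated, so it is worth reading the count back. A sketch against sample values; on a live host, take `allocated` from the HugePages_Total line of /proc/meminfo.

```shell
requested=1811
# Sample meminfo line stands in for: grep HugePages_Total /proc/meminfo
allocated=$(printf 'HugePages_Total:    1811\n' | awk '/^HugePages_Total:/ { print $2 }')
if [ "$allocated" -lt "$requested" ]; then
    echo "only $allocated of $requested huge pages allocated"
else
    echo "huge page pool fully allocated: $allocated"
fi
```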
Below, both the Robin host (centos-72a) and a Robin agent (centos-72b) are checked to verify that the boot-time settings configuring huge pages have taken effect.
[root@centos-72a oracle]# sysctl -a | grep hugepages
vm.hugepages_treat_as_movable = 0
vm.nr_hugepages = 1811
vm.nr_hugepages_mempolicy = 1811
vm.nr_overcommit_hugepages = 0
[root@centos-72a oracle]# ssh centos-72b
Last login: Sat Aug 27 12:38:32 2016 from centos-72a.robinsystems.com
[root@centos-72b ~]# sysctl -a | grep hugepages
vm.hugepages_treat_as_movable = 0
vm.nr_hugepages = 1811
vm.nr_hugepages_mempolicy = 1811
vm.nr_overcommit_hugepages = 0
[root@centos-72b ~]#
Determine the memlock Setting
There is a formula, described here, for determining the memlock value that corresponds to a given number of HugePages; that formula is used in this example. The Oracle 12c database documentation also recommends a formula here, in section G.1.2 Configuring HugePages on Linux (item 3).
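For this example the arithmetic works out as follows: the memlock limit, expressed in KB as limits.conf expects, must at least cover the full huge page pool, i.e. nr_hugepages multiplied by Hugepagesize.

```shell
NR_HUGEPAGES=1811       # vm.nr_hugepages set earlier
HUGEPAGESIZE_KB=2048    # Hugepagesize from /proc/meminfo
MEMLOCK_KB=$(( NR_HUGEPAGES * HUGEPAGESIZE_KB ))
echo "memlock = $MEMLOCK_KB kB"   # prints: memlock = 3708928 kB
```

This is the 3708928 value configured in /etc/security/limits.conf below.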
Configure File /etc/security/limits.conf for HugePages
Configure the memlock setting in the /etc/security/limits.conf file on the Robin host and on the Robin agent servers that will host Oracle databases. The required memlock settings appear at the end of the file as shown below.
[root@centos-72b ~]# cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#This file sets the resource limits for the users logged in via PAM.
#It does not affect resource limits of the system services.
#
#Also note that configuration files in /etc/security/limits.d directory,
#which are read in alphabetical order, override the settings in this
#file in case the domain is the same or more specific.
#That means for example that setting a limit for wildcard domain here
#can be overriden with a wildcard setting in a config file in the
#subdirectory, but a user specific setting here can be overriden only
#with a user specific setting in the subdirectory.
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open file descriptors
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
#
#<domain> <type> <item> <value>
#
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
# End of file
* soft core unlimited
* hard core unlimited
root soft core unlimited
root hard core unlimited
* soft memlock 3708928
* hard memlock 3708928
[root@centos-72b ~]#
This file should also be configured with the same settings in each Robin Oracle container as shown below.
[root@vnode-39-109 ~]# cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open file descriptors
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
#
#<domain> <type> <item> <value>
#
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
* soft memlock 3708928
* hard memlock 3708928
# End of file
[root@vnode-39-109 ~]#
Verify Oracle Instance Using HugePages
Start the Oracle database instance and check /proc/meminfo again as shown below. HugePages_Total shows the expected total, and HugePages_Free is now low, as expected, because at this point Oracle is using the configured HugePages.
[root@vnode-39-109 ~]# cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 1811
HugePages_Free: 2
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
[root@vnode-39-109 ~]#
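The pool numbers can be cross-checked by hand: pages in use are HugePages_Total minus HugePages_Free, and at 2 MB per page the result should line up with the SGA size reported in the alert log shown later on this page.

```shell
TOTAL=1811   # HugePages_Total from /proc/meminfo above
FREE=2       # HugePages_Free after instance startup
IN_USE=$(( TOTAL - FREE ))
echo "huge pages in use: $IN_USE ($(( IN_USE * 2 )) MB)"   # prints: huge pages in use: 1809 (3618 MB)
```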
Another check can be done to verify that Oracle processes are actually using HugePages, as shown below. Any of the PIDs from this listing can then be checked with 'ps -ef' to confirm that it belongs to an Oracle process.
[root@vnode-39-109 ~]# grep '^VmFlags:.* ht' /proc/[0-9]*/smaps
/proc/2104/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2104/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2104/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2106/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2106/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2106/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2108/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2108/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2108/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2112/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2112/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2112/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2116/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2116/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2116/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2118/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2118/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2118/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2120/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2120/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2120/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2122/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2122/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2122/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2124/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2124/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2124/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2126/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2126/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2126/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2128/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2128/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2128/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2130/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2130/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2130/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2132/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2132/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2132/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2134/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2134/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2134/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2136/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2136/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2136/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2138/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2138/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2138/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2140/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2140/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2140/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2142/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2142/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2142/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2144/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2144/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2144/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2146/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2146/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2146/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2148/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2148/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2148/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2150/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2150/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2150/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2158/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2158/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2158/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2160/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2160/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2160/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2162/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2162/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2162/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2166/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2166/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2166/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2170/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2170/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2170/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2174/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2174/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2174/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2176/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2176/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2176/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2178/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2178/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2178/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2180/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2180/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2180/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2182/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2182/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2182/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2184/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2184/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2184/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2186/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2186/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2186/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2188/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2188/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2188/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2190/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2190/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2190/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2192/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2192/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2192/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2194/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2194/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2194/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2196/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2196/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2196/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2198/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2198/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2198/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2200/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2200/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2200/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2202/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2202/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2202/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2204/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2204/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2204/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2206/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2206/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2206/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2369/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2369/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2369/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2373/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2373/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2373/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2375/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2375/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2375/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2584/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2584/smaps:VmFlags: rd wr sh mr mw me ms de ht
/proc/2584/smaps:VmFlags: rd wr sh mr mw me ms de ht
[root@vnode-39-109 ~]# ps -ef | grep 2584
oracle 2584 1 0 12:04 ? 00:00:00 ora_w002_cdb1
root 3009 2024 0 13:01 pts/5 00:00:00 grep 2584
[root@vnode-39-109 ~]#
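The long smaps listing above can be condensed to one line per PID. A sketch run against two sample PIDs; on a live host, pipe the grep output in instead of the printf.

```shell
# cut -d/ -f3 isolates the PID from paths like /proc/2104/smaps:...,
# and sort -u collapses the repeated mappings per process.
printf '/proc/2104/smaps:VmFlags: rd wr sh ht\n/proc/2104/smaps:VmFlags: rd wr sh ht\n/proc/2106/smaps:VmFlags: rd wr sh ht\n' \
    | cut -d/ -f3 | sort -u
```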
Finally, a review of the Oracle alert_SID.log file at startup indicates whether 2048K pages (HugePages) were used by the Oracle instance at startup, as shown below.
Starting ORACLE instance (normal) (OS id: 2101)
Sat Aug 27 11:33:58 2016
CLI notifier numLatches:7 maxDescs:519
Sat Aug 27 11:33:58 2016
**********************************************************************
Sat Aug 27 11:33:58 2016
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)
Sat Aug 27 11:33:58 2016
Domain name: lxc/vnode-39-109.robinsystems.com
Sat Aug 27 11:33:58 2016
Per process system memlock (soft) limit = 3622M
Sat Aug 27 11:33:58 2016
Expected per process system memlock (soft) limit to lock
SHARED GLOBAL AREA (SGA) into memory: 3618M
Sat Aug 27 11:33:58 2016
Available system pagesizes:
4K, 2048K
Sat Aug 27 11:33:58 2016
Supported system pagesize(s):
Sat Aug 27 11:33:58 2016
PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
Sat Aug 27 11:33:58 2016
4K Configured 5 5 NONE
Sat Aug 27 11:33:58 2016
2048K 1811 1809 1809 NONE
Sat Aug 27 11:33:58 2016
**********************************************************************