1. System Requirements:
1. Hardware Requirements:
- Physical memory (at least 1.5 gigabytes (GB) of RAM)
- An amount of swap space equal to the amount of RAM
- Temporary space (at least 1 GB) available in /tmp
- A processor type (CPU) that is certified with the version of the Oracle software being installed
- A minimum display resolution of 1024 x 768, so that Oracle Universal Installer (OUI) displays correctly
- All servers that will be used in the cluster must have the same chip architecture, for example, all 32-bit processors or all 64-bit processors.
- Disk space for software installation locations: You will need at least 4.5 GB of available disk space for the Grid Infrastructure home directory, which includes both the binary files for Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of available disk space for the Oracle Database home directory.
- Shared disk space: An Oracle RAC database is a shared-everything database. All data files, control files, redo log files, and the server parameter file (SPFILE) used by the Oracle RAC database must reside on shared storage that is accessible by all Oracle RAC database instances. The Oracle RAC installation that is described in this guide uses Oracle ASM for the shared storage for Oracle Clusterware and Oracle Database files. The amount of shared disk space is determined by the size of your database.
2. Network Hardware Requirements:
- Each node must have at least two network interface cards (NICs), or network adapters.
- Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes.
Example:
RAC-NODE1: eth0
RAC-NODE2: eth0
- Private interface names should be the same for all nodes as well. If eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node.
Example:
RAC-NODE1: eth1
RAC-NODE2: eth1
- One virtual IP (VIP) is required for each node (two VIPs in total for this two-node cluster), and it must be plumbed on the public interface.
- The VIP must be created on the same subnet where the public IP is configured.
Subnet example:
Public IP: 160.110.67.176
Virtual IP: 160.110.67.253
Example:
RAC-NODE1: eth0:1
RAC-NODE2: eth0:1
Note: When installing Grid Infrastructure, the VIP should be down; after the OUI installation the VIP should be brought up automatically.
How to bring a VIP down on Linux:
# ifdown eth0:1
Note: Host names must consist of alphanumeric characters and hyphens; host names using underscores ("_") are not allowed.
3. IP Address Requirements:
4. Installation method:
This document details the steps for installing a 2-node Oracle 11gR2 RAC cluster on Linux:
1. Prepare the cluster nodes for Oracle RAC:
1. User Accounts:
Creating users and groups for Grid Infrastructure.
In our case we have created two users:
1) grid (GRID_HOME)
2) oracle (ORACLE_HOME)
NOTE: We recommend different users for the installation of the Grid Infrastructure (GI) and the Oracle RDBMS home. The GI will be installed in a separate Oracle base, owned by user 'grid'. After the grid install, the GI home will be owned by root, and inaccessible to unauthorized users.
1. Create OS groups using the commands below. Enter these commands as the 'root' user:
# /usr/sbin/groupadd -g 501 oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/groupadd -g 504 asmadmin
# /usr/sbin/groupadd -g 506 asmdba
# /usr/sbin/groupadd -g 507 asmoper
2. Create the users that will own the Oracle software using the following commands:
# /usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
# /usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle
3. Set the passwords for the oracle and grid accounts using the following commands. Replace password with your own password.
passwd oracle
Changing password for user oracle.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.
passwd grid
Changing password for user grid.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.
Repeat Step 1 through Step 3 on each node in your cluster.
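As a quick optional check (a sketch, not one of the documented steps), confirm that the users and groups were created with the same UIDs and GIDs on every node, since mismatched IDs cause ownership problems on shared files:
# /usr/bin/id grid
# /usr/bin/id oracle
The UID, GID and group memberships reported must be identical on all cluster nodes (here: grid = 501, oracle = 502, oinstall = 501).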
2. Networking:
1. Determine your cluster name. The cluster name should satisfy the following conditions:
- The cluster name is globally unique throughout your host domain.
- The cluster name is at least 1 character long and less than 15 characters long.
- The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).
2. Determine the public host name for each node in the cluster. For the public host name, use the primary host name of each node. In other words, use the name displayed by the hostname command, for example: racnode1.
It is recommended that redundant NICs are configured with the Linux bonding driver. Active/passive is the preferred bonding method due to its simple configuration.
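The following is a minimal sketch of an active/passive (mode=1) bond on Oracle Enterprise Linux/Red Hat Enterprise Linux 5; the slave interface names (eth0 and eth2), IP address, and netmask are placeholders that must be adapted to your environment:
# Add to /etc/modprobe.conf so the bonding driver loads in active-backup mode:
alias bond0 bonding
options bond0 mode=1 miimon=100
# /etc/sysconfig/network-scripts/ifcfg-bond0 (the bonded public interface):
DEVICE=bond0
IPADDR=160.110.67.176
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for the second slave, e.g. eth2):
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none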
3. Determine the public virtual hostname for each node in the cluster. The virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip. The virtual hostname must meet the following requirements:
- The virtual IP address must be on the same subnet as the public IP address.
- The virtual IP address and host name must not be in use before the installation (the VIP is brought online by Oracle Clusterware after installation).
- The virtual host name must be resolvable, for example through DNS or the /etc/hosts file.
4. Determine the private hostname for each node in the cluster. This private hostname does not need to be resolvable through DNS and should be entered in the /etc/hosts file. A common naming convention for the private hostname is <public hostname>-pvt.
5. Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). The SCAN IPs must NOT be in the /etc/hosts file; the SCAN name must be resolved by DNS.
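As an illustration only (the SCAN name and the three addresses below are placeholders, not values from this environment), the DNS definition and a quick check might look like this:
; BIND zone file: three A records for one SCAN name, served round-robin
ogis-scan    IN A    160.110.67.180
ogis-scan    IN A    160.110.67.181
ogis-scan    IN A    160.110.67.182
# nslookup ogis-scan.psas.blr.in
The nslookup output should list all three addresses, returned in rotating order.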
6. Even if you are using DNS, Oracle recommends that you add lines to the /etc/hosts file on each node, specifying the public IP, VIP and private addresses. Configure the /etc/hosts file so that it is similar to the following example:
NOTE: The SCAN IPs MUST NOT be in the /etc/hosts file. Putting them there would result in only 1 SCAN IP for the entire cluster.
#eth0 - PUBLIC
160.110.67.176   ogis1.psas.blr.in       ogis1
160.110.67.152   ogis2.psas.blr.in       ogis2
#VIP
160.110.67.253   ogis1-vip.psas.blr.in   ogis1-vip
160.110.67.254   ogis2-vip.psas.blr.in   ogis2-vip
#eth1 - PRIVATE
192.168.10.11    ogis1-pvt
192.168.10.12    ogis2-pvt
7. If you configured the IP addresses in a DNS server, then, as the root user, change the hosts search order in /etc/nsswitch.conf on all nodes as shown here:
Old:
hosts: files nis dns
New:
hosts: dns files nis
8. After modifying the nsswitch.conf file, restart the nscd daemon on each node using the following command:
# /sbin/service nscd restart
3. Synchronizing the Time on ALL Nodes:
Ensure that the date and time settings on all nodes are set as closely as possible to the same date and time. Time may be kept in sync with NTP using the -x option, or by using the Oracle Cluster Time Synchronization Service (ctssd).
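On Oracle Enterprise Linux/Red Hat Enterprise Linux 5, a minimal sketch of enabling the -x (slewing) option for ntpd looks like this (the NTP servers themselves are assumed to be configured already in /etc/ntp.conf):
Edit /etc/sysconfig/ntpd so that the options line reads:
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
Then restart the daemon on each node:
# /sbin/service ntpd restart
If no NTP configuration is found, OUI configures ctssd in active mode instead.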
4. Configuring Kernel Parameters:
1. As the root user, add the following kernel parameter settings to /etc/sysctl.conf. If any of the parameters are already in the /etc/sysctl.conf file, the higher of the two values should be used.
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
2. Run the following as the root user to allow the new kernel parameters to be put in place:
# /sbin/sysctl -p
3. Repeat steps 1 and 2 on all cluster nodes.
NOTE: OUI checks the current settings for various kernel parameters to ensure they meet the minimum requirements for deploying Oracle RAC.
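An optional spot check (not an Oracle-prescribed step) is to query a few of the live values before starting OUI and compare them against the list above:
# /sbin/sysctl kernel.sem fs.file-max net.ipv4.ip_local_port_range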
5. Set shell limits for the oracle and grid users:
To improve the performance of the software on Linux systems, you must increase the shell limits for the oracle and grid users.
1. Add the following lines to the /etc/security/limits.conf file:
grid    soft    nproc    2047
grid    hard    nproc    16384
grid    soft    nofile   1024
grid    hard    nofile   65536
oracle  soft    nproc    2047
oracle  hard    nproc    16384
oracle  soft    nofile   1024
oracle  hard    nofile   65536
2. Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:
session required pam_limits.so
3. Make the following changes to the default shell startup file by adding the following lines to the /etc/profile file:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
if ( $USER == "oracle" || $USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
4. Repeat this procedure on all other nodes in the cluster.
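As an optional verification (a sketch; it assumes logins go through a PAM-aware service so that pam_limits.so is applied), open a fresh session for each user and inspect the limits:
# su - oracle -c 'ulimit -u; ulimit -n'
# su - grid -c 'ulimit -u; ulimit -n'
The reported process and open-file limits should reflect the values configured above (16384 and 65536 after the /etc/profile snippet runs).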
6. Create the Oracle Inventory Directory:
To create the Oracle Inventory directory, enter the following commands as the root user:
# mkdir -p /u01/app/oraInventory
# chown -R grid:oinstall /u01/app/oraInventory
# chmod -R 775 /u01/app/oraInventory
7. Creating the Oracle Grid Infrastructure Home Directory:
To create the Grid Infrastructure home directory, enter the following commands as the root user:
# mkdir -p /u01/11.2.0/grid
# chown -R grid:oinstall /u01/11.2.0/grid
# chmod -R 775 /u01/11.2.0/grid
8. Creating the Oracle Base Directory:
To create the Oracle Base directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle
# mkdir /u01/app/oracle/cfgtoollogs
(The cfgtoollogs directory is needed to ensure that dbca is able to run after the RDBMS installation.)
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
9. Creating the Oracle RDBMS Home Directory:
To create the Oracle RDBMS home directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle/product/11.2.0/db_1
# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
# chmod -R 775 /u01/app/oracle/product/11.2.0/db_1
10. Stage the Oracle Software:
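As a rough sketch only (the /stage/11.2.0 directory and the zip file names are placeholders for the media you actually downloaded), staging usually amounts to copying the Grid Infrastructure and Database distribution zips to local disk and extracting them as the software owner:
# mkdir -p /stage/11.2.0
# chown -R grid:oinstall /stage/11.2.0
$ cd /stage/11.2.0
$ unzip linux.x64_11gR2_grid.zip
$ unzip linux.x64_11gR2_database_1of2.zip
$ unzip linux.x64_11gR2_database_2of2.zip
Unzipping creates the grid/ and database/ directories used later to launch the installers.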
11. Check OS Software Requirements:
The OUI will check for missing packages during the install and you will have the opportunity to install them at that point during the prechecks.
NOTE: These requirements are for 64-bit versions of Oracle Enterprise Linux 5 and Red Hat Enterprise Linux 5.
binutils-2.15.92.0.2
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.97
elfutils-libelf-devel-0.97
expat-1.95.7
gcc-3.4.6
gcc-c++-3.4.6
glibc-2.3.4-2.41
glibc-2.3.4-2.41 (32 bit)
glibc-common-2.3.4
glibc-devel-2.3.4
glibc-headers-2.3.4
libaio-0.3.105
libaio-0.3.105 (32 bit)
libaio-devel-0.3.105
libaio-devel-0.3.105 (32 bit)
libgcc-3.4.6
libgcc-3.4.6 (32 bit)
libstdc++-3.4.6
libstdc++-3.4.6 (32 bit)
libstdc++-devel-3.4.6
make-3.80
pdksh-5.2.14
sysstat-5.0.5
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
The following command can be run on the system to list the currently installed packages:
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel
Any RPM missing from the list above should be installed using the "--aid" option of "/bin/rpm" to ensure all dependent packages are resolved and installed as well.
NOTE: Be sure to check on all nodes that the Linux firewall and SELinux are disabled.
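A minimal sketch of checking and disabling both on Oracle Enterprise Linux/Red Hat Enterprise Linux 5, run as root on every node:
# /sbin/service iptables stop
# /sbin/chkconfig iptables off
# /usr/sbin/getenforce
# /usr/sbin/setenforce 0
Also set SELINUX=disabled in /etc/selinux/config so the setting persists across reboots.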
2. Prepare the shared storage for Oracle RAC
This section describes how to prepare the shared storage for Oracle RAC.
Use the following guidelines when identifying appropriate disk devices:
- The user account with which you perform the installation must have write permissions to create the files in the path that you specify.
1. Shared Storage:
For this example installation we will be using ASM for Clusterware and Database storage on top of SAN technology. The following table shows the storage layout for this implementation:
Block Device   ASMlib Name   Size   Comments
/dev/sda       OCR_VOTE01    1 GB   ASM Diskgroup for OCR and Voting Disks
/dev/sdb       OCR_VOTE02    1 GB   ASM Diskgroup for OCR and Voting Disks
/dev/sdc       OCR_VOTE03    1 GB   ASM Diskgroup for OCR and Voting Disks
/dev/sdd       ASM_DATA01    2 GB   ASM Data Diskgroup
/dev/sde       ASM_DATA02    2 GB   ASM Data Diskgroup
/dev/sdf       ASM_DATA03    2 GB   ASM Data Diskgroup
/dev/sdg       ASM_DATA04    2 GB   ASM Data Diskgroup
/dev/sdh       ASM_DATA05    2 GB   ASM Flash Recovery Area Diskgroup
/dev/sdi       ASM_DATA06    2 GB   ASM Flash Recovery Area Diskgroup
/dev/sdj       ASM_DATA07    2 GB   ASM Flash Recovery Area Diskgroup
/dev/sdk       ASM_DATA08    2 GB   ASM Flash Recovery Area Diskgroup
NOTE: The raw devices must be shared between (visible to) both nodes.
1.1. Partition the Shared Disks:
Create a single partition on each shared disk, for example:
# fdisk /dev/sdb
Command (m for help): u
Changing display/entry units to sectors
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (61-1048575, default 61): 2048
Last sector or +size or +sizeM or +sizeK (2048-1048575, default 1048575):
Using default value 1048575
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
2. Load the updated block device partition tables by running the following on ALL servers participating in the cluster:
# /sbin/partprobe
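To confirm that each node now sees the new partitions (an optional check; sda1, sdb1, and so on are the devices from the storage table above), compare the kernel partition list on every node:
# grep sd /proc/partitions
The same partition entries should appear on all cluster nodes.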
2. Installing and Configuring ASMLib:
NOTE: ASMLib automatically provides LUN persistence, so when using ASMLib there is no need to manually configure LUN persistence for the ASM devices on the system.
NOTE: The ASMLib kernel driver MUST match the kernel revision number; the kernel revision number of your system can be identified by running the "uname -r" command. Also, be sure to download the set of RPMs which pertain to your platform architecture, in our case x86_64.
The following three RPMs are mandatory for creating ASM disks:
- oracleasm-support
- oracleasmlib
- oracleasm (the kernel driver package matching your kernel version)
2. Install the RPMs by running the following as the root user:
# rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
      oracleasmlib-2.0.4-1.el5.x86_64.rpm \
      oracleasm-2.6.18-92.1.17.0.2.el5-2.0.5-1.el5.x86_64.rpm
3. Configure ASMLib by running the following as the root user:
NOTE: If using user and group separation for the installation (as documented here), the ASMLib driver interface owner is 'grid' and the group to own the driver interface is 'asmadmin'.
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration:
done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
4. Repeat steps 2 - 4 on ALL cluster nodes.
3. Using ASMLib to Mark the Shared Disks as Candidate Disks:
To create ASM disks using ASMLib:
1. As the root user, use oracleasm to create ASM disks using the following syntax:
# /usr/sbin/oracleasm createdisk disk_name device_partition_name
In this command, disk_name is the name you choose for the ASM disk. The name you choose must contain only ASCII capital letters, numbers, or underscores, and the disk name must start with a letter, for example, DISK1, VOL1, or RAC_FILE1. The name of the disk partition to mark as an ASM disk is the device_partition_name. For example:
# /usr/sbin/oracleasm createdisk ASM1 /dev/sdb1
If you need to unmark a disk that was used in a createdisk command, you can use the following syntax as the root user:
# /usr/sbin/oracleasm deletedisk disk_name
2. Repeat step 1 for each disk that will be used by Oracle ASM.
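Applied to the storage layout in the table above, and assuming each device received a single partition (sda1, sdb1, and so on) as in the earlier fdisk example, the full set of commands would look like this sketch:
# /usr/sbin/oracleasm createdisk OCR_VOTE01 /dev/sda1
# /usr/sbin/oracleasm createdisk OCR_VOTE02 /dev/sdb1
# /usr/sbin/oracleasm createdisk OCR_VOTE03 /dev/sdc1
# /usr/sbin/oracleasm createdisk ASM_DATA01 /dev/sdd1
# /usr/sbin/oracleasm createdisk ASM_DATA02 /dev/sde1
# /usr/sbin/oracleasm createdisk ASM_DATA03 /dev/sdf1
# /usr/sbin/oracleasm createdisk ASM_DATA04 /dev/sdg1
# /usr/sbin/oracleasm createdisk ASM_DATA05 /dev/sdh1
# /usr/sbin/oracleasm createdisk ASM_DATA06 /dev/sdi1
# /usr/sbin/oracleasm createdisk ASM_DATA07 /dev/sdj1
# /usr/sbin/oracleasm createdisk ASM_DATA08 /dev/sdk1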
3. After you have created all the ASM disks for your cluster, use the listdisks command to verify their availability:
# /usr/sbin/oracleasm listdisks
ASM1
4. On all the other nodes in the cluster, use the scandisks command as the root user to pick up the newly created ASM disks. You do not need to create the ASM disks on each node, only on one node in the cluster.
# /usr/sbin/oracleasm scandisks
Scanning system for ASM disks [ OK ]
5. After scanning for ASM disks, display the available ASM disks on each node to verify their availability:
# /usr/sbin/oracleasm listdisks
ASM1
4. Oracle Grid Infrastructure Install
1. Basic Grid Infrastructure Install (without GNS and IPMI)
As the grid user (the Grid Infrastructure software owner), start the installer by running "runInstaller" from the staged installation media.
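For example (a sketch only; /stage/11.2.0/grid is a placeholder for wherever the Grid Infrastructure media was unzipped, and the DISPLAY setting assumes a local X server or X forwarding):
# xhost +
# su - grid
$ export DISPLAY=:0
$ cd /stage/11.2.0/grid
$ ./runInstaller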
NOTE: Be sure the installer is run as the intended software owner; the only supported method to change the software owner is to reinstall.