Sunday, December 13, 2015

RAC 2 Node Installation Step by Step

Installation

Openfiler

go to vmware
new machine
custom
choose iso file
guest os = other
version = other 64bit
name = SAN
choose location
ram = 512 mb
network type = use host only networking
i/o type = default
disk type = default
create a new disk
maximum disk size = 10gb
store virtual disk as a single file
finish and power on
accept all defaults
Disk setup: create partitions like below from the 10 GB
/ = 2g
/usr = 2g
/var = 2g
/tmp = 2g
swap = Fill to maximum allowable size
network devices
eth0 - edit - manual ip address = 147.43.0.5 / 255.255.0.0
host name = san.dba.com
root password = redhat
install
at the end we get a Reboot button. before clicking it to restart the machine, go to the VM settings and add another disk of 50 GB for LUN creation.
reboot the machine



RAC1

New Machine
custom
choose iso file for oel 5 64 bit
name = RAC1 and choose the location
3 or 4 gb ram
use host-only networking
default i/o and disk type
new virtual disk
max disk size 60 gb
store virtual disk as a single file
click customize hardware
remove usb controller
remove sound card
remove printer
add another network adapter with host only networking
finish and power on
Create custom layout
create partitions like below (sizes in MB) from the 60 GB allocated
/ 15000
/tmp 5000
/usr 5000
/var 5000
/u01 25000
swap fill to maximum allowable size
The GRUB boot loader: click next
network devices
eth0 147.43.0.1 / 255.255.0.0
eth1 192.168.0.1 / 255.255.255.0
hostname = rac1.tzone.com
root password = redhat
Choosing Packages
Select Customize Now
Desktop Environment
GNOME Desktop Environment
Applications
Editors all, Remove Games and Entertainment,
Graphical Internet All, remove Graphics,
remove Office/Productivity, remove Sound and Video,
Text-based Internet all
Development
Development Libraries all, Java Development all
Servers
remove Printing Support, Server Configuration Tools all
Base System
Administration Tools, Base, remove Dialup Networking,
Java, Legacy Software Support,
System Tools (here we get most of the Oracle packages),
X Windows System
Cluster Storage
nothing
Clustering
nothing
Virtualization
nothing

after reboot, disable the firewall and selinux
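a minimal sketch of the equivalent commands on OEL 5 (the same settings can also be changed from the GUI security level tool):
service iptables stop        (stop the firewall now)
chkconfig iptables off       (keep it off across reboots)
setenforce 0                 (put selinux in permissive mode for this session)
vi /etc/selinux/config       (set SELINUX=disabled so it stays off after reboot)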
restart the machine
shut down the machine
create a full independent clone of rac1 as rac2.
this can be done only when the rac1 machine is shut down.
once the cloning process is over, start the rac2 machine
open a terminal as root
run the neat command to open the network settings window
in rac1 we had 2 network cards; here we get 4.
two of them have a .bak suffix and are backups of the originals.
delete these 2 .bak network cards.
go to the properties of the other two, eth0 and eth1,
and change their ip addresses to the following
eth0 147.43.0.2 / 255.255.0.0
eth1 192.168.0.2 / 255.255.255.0
activate both and shut down the machine

now start all 3 machines one by one in the following sequence
1. SAN
2. RAC1
3. RAC2

now check ifconfig on both machines
check hostname on both
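for example, a quick check on rac1 (a small sketch, assuming the addresses configured above):
ifconfig eth0 | grep "inet addr"     (expect 147.43.0.1)
ifconfig eth1 | grep "inet addr"     (expect 192.168.0.1)
hostname                             (expect rac1.tzone.com)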
vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
### PUBLIC-IP
147.43.0.1 rac1.tzone.com rac1
147.43.0.2 rac2.tzone.com rac2
###PRIVATE-IP
192.168.0.1 rac1-pri.tzone.com rac1-pri
192.168.0.2 rac2-pri.tzone.com rac2-pri
###VIP
147.43.0.11 rac1-vip.tzone.com rac1-vip
147.43.0.12 rac2-vip.tzone.com rac2-vip
###SCAN
147.43.0.51 rac-scan.tzone.com rac-scan
wq!
copy the same file to rac2
scp -v /etc/hosts 147.43.0.2:/etc/hosts
to make sure the copy went through properly, go to rac2 and run
cat /etc/hosts

check fdisk -l on both to see the partitions and the 50 GB from openfiler
it's not visible yet
to make both rac1 and rac2 aware of the openfiler space, execute the following command on both:
iscsiadm -m discovery -t st -p 147.43.0.5
ping 147.43.0.5 (to SAN)
ping 147.43.0.2 (to RAC2)
ping 147.43.0.1 (to self)


fdisk -l (to make sure rac1 detected 50 gb from openfiler)
to open openfiler admin in browser from RAC1
firefox https://147.43.0.5:446/
username = openfiler
password = password
Once you login click on System
go below to Network Access Configuration
here add new entries for both nodes:
Name                Network/Host     Netmask
rac1.tzone.com      147.43.0.1       255.255.255.255   (click Update)
rac2.tzone.com      147.43.0.2       255.255.255.255   (click Update)

Now click on Volumes
click on the link "create new physical volumes"
we get a table here with the disk info
under Edit Disk, click on the link for the 50 GB disk
go down and click on the Create button
then, in the right-hand menu, click on Manage Volumes
enter volume group name = volgrp
check the box to select the physical volume to add
click on the Add volume group button

Now click on services
Enable iSCSI target service

Go back to Volumes
in the right hand side menu click on Add Volume
go down to Create a volume in "volgrp"
Volume Name = vol1
Volume Description = for_rac
Required Space = Maximum (drag the bar to right end)
Filesystem/Volume type = iSCSI
Click on Create button

In right side menu click on iSCSI Targets
click on Add button
click on LUN mapping
click on map button
then click on Network ACL link next to LUN Mapping
change the Access property for both IP addresses to Allow and click Update
That's it, done.
now click logout to exit this web administration GUI
go to rac1
on terminal
fdisk -l
it will not show the new LUN disk yet
we need to make rac1 detect it
#iscsiadm -m discovery -t st -p 147.43.0.5
#service iscsi restart
now
#fdisk -l
it will show the 50 GB added as a LUN on the SAN device

now go to rac2
#fdisk -l
here it's not showing yet
#iscsiadm -m discovery -t st -p 147.43.0.5
#service iscsi restart
#fdisk -l
it will show the LUN space


Create 2 partitions in the 50 GB space we added on the SAN (a sample fdisk session is sketched after this block).
partition 1 will be 10 GB and the 2nd one 40 GB.
fdisk /dev/sdb
primary partition
partition number 1
accept the default starting cylinder and give +10g as the size
create another primary partition the same way, partition number 2, with +40g as the size.
w to save the partitions.
To update the kernel with new partition information
partprobe /dev/sdb (do this on both the nodes)
fdisk -l
we can see 2 new partitions added.
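for reference, a rough sketch of the interactive fdisk dialogue for the first partition (prompts abbreviated and may vary by fdisk version; the second partition is created the same way with +40g, and sizes can also be given in MB, e.g. +10240M):
# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (...): <press Enter for the default>
Last cylinder or +size (...): +10g
Command (m for help): w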

User Creation for grid and rdbms installations
userdel -r oracle
groupdel dba
groupdel oinstall
groupadd -g 5001 oinstall
groupadd -g 5002 dba
groupadd -g 5003 asmadmin
useradd -u 5004 -g oinstall -G dba,asmadmin -d /u01/home -m oracle
chown -R oracle:oinstall /u01
chmod -R 775 /u01
passwd oracle
enter password and confirm
do the same steps on the rac2 node
Share the software setup folder with the rac1 VM through VMware shared folders.
cd /mnt/hgfs/11gr2p3/
along with the setup files there are 3 RPMs related to ASM.
copy them to rac2 as well
scp -v oracleasm* rac2:/
these files go to the / directory of rac2
rac1
rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm --force --nodeps
rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm --force --nodeps
rpm -ivh oracleasm-support-2.1.7-1.el5.i386.rpm --force --nodeps
rac2
rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm --force --nodeps
rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm --force --nodeps
rpm -ivh oracleasm-support-2.1.7-1.el5.i386.rpm --force --nodeps

RAC1
#oracleasm configure -i
default user to own driver interface = oracle
default group to own the driver interface = oinstall
start oracle ASM library driver on boot = y
scan for oracle ASM disks on boot = y
#oracleasm exit
#oracleasm init
Rac2
#oracleasm configure -i
default user to own driver interface = oracle
default group to own the driver interface = oinstall
start oracle ASM library driver on boot = y
scan for oracle ASM disks on boot = y
#oracleasm exit
#oracleasm init

rac1
#oracleasm createdisk OCR_VD /dev/sdb1
#oracleasm createdisk DATA /dev/sdb2
#oracleasm listdisks
rac2
#oracleasm scandisks
#oracleasm listdisks


on both nodes
#date
keep another terminal open in rac1 to access rac2
#ssh rac2 (will connect to rac2 from inside rac1)
the time must be the same on both nodes
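a quick way to compare the clocks from rac1 (a small sketch):
#date; ssh rac2 date     (the two timestamps should match, or be within a second or two)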

rac1
#mv /etc/ntp.conf /etc/ntp.conf_bkp (with NTP disabled, Oracle's Cluster Time Synchronization Service takes over time sync between the nodes)
#service ntpd restart

rac2
#mv /etc/ntp.conf /etc/ntp.conf_bkp
#service ntpd restart

rac1
su - oracle
cd /mnt/hgfs/11gr2p3_64Bit/
ls
cd grid
./runInstaller
skip software updates
install and configure oracle grid infrastructure for a cluster
advanced installation
cluster name = rac-cluster
SCAN name = rac-scan
scan port = 1521
uncheck Configure GNS
Cluster node information
Add
Public hostname = rac2.tzone.com
Virtual hostname = rac2-vip.tzone.com
click on SSH Connectivity
OS username = oracle, enter its password and click Setup
there will be a confirmation message.
click OK and then Next
network interface usage
here eth0 is public and eth1 is private, click next

oracle automatic storage management (oracle ASM)
Create ASM Disk Group
Disk Group Name = OCR
Redundancy = External
AU Size = 1mb
Candidate Disks
select ORCL:OCR_VD
click next
ASM Password
use manager as the password for both the SYS and ASMSNMP users

Do not use Intelligent Platform Management Interface (IPMI)
operating system groups
oinstall
oinstall
asmadmin
say Yes and continue with Next
Installation Location
Oracle Base: /u01/app/oracle
Software Location: /u01/app/11.2.0/grid
it gives a message saying the location is invalid; say Yes and click Next
Create Inventory
Inventory Directory: /u01/app/orainventory
Prerequisite Checks
click on the Fix & Check Again button
it gives you a fixup script
copy that script and execute it as root on rac1 and rac2
click OK to recheck
this time select Ignore All and click Next
on summary window click on Install
at 76% it will give 2 scripts
execute them as the root user on rac1 and then on rac2 (a sketch follows below)
at the end there is an error message that one verification utility failed; this message appears in the video as well.
so up to this point the grid infrastructure home installation is successful.
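on 11gR2 these two scripts are normally orainstRoot.sh (under the inventory directory) and root.sh (under the grid home); assuming the locations entered above, running them as root would look roughly like this, on rac1 first and then on rac2:
#/u01/app/orainventory/orainstRoot.sh
#/u01/app/11.2.0/grid/root.sh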
on rac1, as the oracle user
cd /u01/app/11.2.0/grid
ls
pwd
cd
now make a grid.env file to use with the grid home
in the oracle user's home directory
su - oracle
$vi grid.env
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH:.
wq!
$. grid.env (to set the grid parameters)
$crsctl check crs
$crsctl check cluster -all

scp -v grid.env rac2:/u01/home
rac2
. grid.env
crsctl check crs
crsctl check cluster -all

rac1
$crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

$cd /mnt/hgfs/11gr2p3_64Bit
ls
cd database
ls
./runInstaller
uncheck Email
yes
skip software updates
install database software only
create real application clusters database installation
leave rac1 and rac2 checked and click next
English, click next
enterprise edition
software location = /u01/home/app/11.2.0/db_1
oracle base = /u01/home/app
Database Administrator group = dba
database operator (osoper) group = oinstall
prerequisite checks
ignore all
next
install
at 94% you will get a script. run it in a root terminal on both nodes.
ssh rac2
enter the password
execute the same script there
then click OK
wait until installation is over

rac1
as the oracle user
cd
ls
cd /u01/home/app/11.2.0/db_1
pwd
ls
cd
$vi rdbms.env
export ORACLE_HOME=/u01/home/app/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH:.
wq!
. rdbms.env
scp -rv rdbms.env rac2:/u01/home

export ORACLE_SID=+ASM1
asmca
create
Disk Group Name = DATA
Redundancy = External
select member disks = show eligible
ORCL:DATA
OK
EXIT THE WINDOW
$asmcmd
ASMCMD>ls
will show
DATA/
OCR/
exit

$dbca
Oracle Real Application Clusters (RAC) database
create database
general purpose
configuration type = admin managed
global database name = prod
sid prefix = prod
click on the Select All button to select both nodes (rac1, rac2)
next
uncheck enterprise manager
enter a common password for sys and system users
Use common location for all database files
database files location = +DATA
Uncheck FRA
Memory = Custom, SGA Size = 350 MB, PGA Size = 150 MB

