This chapter describes the operating system tasks you must complete on your servers before you install Oracle Grid Infrastructure for a cluster and Oracle Real Application Clusters (Oracle RAC). The values provided in this chapter are installation minimums only. Oracle recommends that you configure production systems in accordance with planned system loads.
Overview
In this practice you will perform the required pre-installation tasks for an Oracle 12c Grid Infrastructure Standard Cluster.
Tasks
You will perform various tasks that are required before
installing Oracle Grid Infrastructure.
1. Verify Network Configuration
Connect to node1:
[racuser@rachost102]$ ssh root@node1
Check the three network cards available on each node (node1 & node2):
[root@node1]# ifconfig
eth0  Link encap:Ethernet  HWaddr 08:00:27:CA:3C:20
      inet addr:192.168.1.21  Bcast:192.168.1.255  Mask:255.255.255.0
…
eth1  Link encap:Ethernet  HWaddr 08:00:27:1F:30:9D
      inet addr:192.168.2.21  Bcast:192.168.2.255  Mask:255.255.255.0
…
eth2  Link encap:Ethernet  HWaddr 08:00:27:ED:5F:BF
      inet addr:192.168.3.21  Bcast:192.168.3.255  Mask:255.255.255.0
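If you prefer a scripted check, the sketch below (an assumption: it requires root SSH access to both nodes from the host you run it on) prints the inet addr line for each of the three NICs on each node:

# Sketch: print the IPv4 address line of each NIC on both nodes
for host in node1 node2; do
  for nic in eth0 eth1 eth2; do
    echo "== $host $nic =="
    ssh root@$host "ifconfig $nic | grep 'inet addr'"
  done
done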
Creating Users, Groups and Directories
2. Create the required groups, users, and directories on both nodes (node1 & node2).
Create the OS groups as the root user:
[root@node1]# groupadd -g 501 oinstall
[root@node1]# groupadd -g 502 dba
[root@node1]# groupadd -g 503 oper
[root@node1]# groupadd -g 504 backupdba
[root@node1]# groupadd -g 505 dgdba
[root@node1]# groupadd -g 506 kmdba
[root@node1]# groupadd -g 507 asmdba
[root@node1]# groupadd -g 508 asmoper
[root@node1]# groupadd -g 509 asmadmin
Check /etc/group to verify that the groups were created:
[root@node1]# cat /etc/group
Create the oracle and grid users and set their passwords:
[root@node1]# useradd -u 501 -g oinstall -G dba,oper,asmdba,backupdba,kmdba,dgdba oracle
[root@node1]# useradd -u 502 -g oinstall -G dba,asmoper,asmdba,asmadmin grid
[root@node1]# passwd oracle
(password:oracle)
[root@node1]# passwd grid
(password:grid)
Create the required directories as the root user:
[root@node1]# mkdir -p /u01/app/12.1.0.2/
[root@node1]# mkdir -p /u01/app/oracle/product/12.1.0.2/
[root@node1]# chown -R grid:oinstall /u01
[root@node1]# chown -R oracle:oinstall /u01/app/oracle
[root@node1]# chmod -R 775 /u01/
NOTE: Perform all of the above steps on both node1 and node2.
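Because every command in this step must be repeated verbatim on the second node, you may find it easier to collect them into a small script and run it once per node. This is only a sketch using the same IDs, names, and paths shown above; the chpasswd lines are an assumed non-interactive alternative to the interactive passwd prompts.

#!/bin/bash
# Sketch: run as root on each node to create the groups, users, and directories
groupadd -g 501 oinstall;  groupadd -g 502 dba;     groupadd -g 503 oper
groupadd -g 504 backupdba; groupadd -g 505 dgdba;   groupadd -g 506 kmdba
groupadd -g 507 asmdba;    groupadd -g 508 asmoper; groupadd -g 509 asmadmin
useradd -u 501 -g oinstall -G dba,oper,asmdba,backupdba,kmdba,dgdba oracle
useradd -u 502 -g oinstall -G dba,asmoper,asmdba,asmadmin grid
echo "oracle:oracle" | chpasswd   # same lab passwords as above (assumed acceptable here)
echo "grid:grid" | chpasswd
mkdir -p /u01/app/12.1.0.2/ /u01/app/oracle/product/12.1.0.2/
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/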
Partitioning the Storage Device
3. Create partitions for ASM on the /dev/sdb device.
Perform the commands below on one node only (node1).
Create an extended partition, then create logical partitions inside it: the first partition 6 GB in size and the remaining 11 partitions 2 GB each.
[root@node1]# fdisk -l
Disk /dev/sda: 42.9 GB, 42949672960 bytes
…
/dev/sda1   *         1        64    512000   83  Linux
/dev/sda2            64      5222  41430016   8e  Linux LVM
…
Disk /dev/sdb: 30.7 GB, 30702305280 bytes
64 heads, 32 sectors/track, 29280 cylinders
Create an extended partition on the shared storage disk /dev/sdb:
[root@node1]# fdisk /dev/sdb
Command (m for help): m
Command (m for help): p    (check whether a partition table already exists)
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-29280, default 1): (Press Enter)
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-29280, default 29280): (Press Enter)
Using default value 29280
--------------------------------------------------------------------------
Command (m for help): p
…
/dev/sdb1               1     29280  29982704    5  Extended
Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (1-29280, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-29280, default 29280): +6G
Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (4098-29280, default 4098):
Using default value 4098
Last cylinder, +cylinders or +size{K,M,G} (4098-29280, default 29280): +2G
Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (8195-29280, default 8195):
Using default value 8195
Last cylinder, +cylinders or +size{K,M,G} (8195-29280, default 29280): +2G
Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (10244-29280, default 10244):
Using default value 10244
Last cylinder, +cylinders or +size{K,M,G} (10244-29280, default 29280): +2G
…
Repeat the above steps until all 11 of the 2 GB logical partitions have been created (up to /dev/sdb16), then save the partition table with the w command.
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Run the partx command, which updates the kernel with the presence and numbering of on-disk partitions. Run partx -a on both node1 and node2, repeating it two or three times even if you get "resource busy" messages:
[root@node1]# partx -a /dev/sdb    (execute the same command 2 or 3 times)
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 5
BLKPG: Device or resource busy
-------
Note: the messages shown above are expected and are not errors, so you can safely ignore them.
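If you would rather script this step, a sketch such as the following (an assumption: root SSH access from node1 to both nodes) runs partx the recommended number of times on each node, discarding the harmless BLKPG messages:

# Sketch: re-read the partition table three times on each node
for host in node1 node2; do
  for try in 1 2 3; do
    ssh root@$host "partx -a /dev/sdb" 2>/dev/null
  done
done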
4. Verify that the partitions were created and are available on both node1 and node2.
# fdisk -l
Disk /dev/sdb: 30.7 GB, 30702305280 bytes
64 heads, 32 sectors/track, 29280 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1d16e727
   Device Boot      Start       End    Blocks   Id  System
/dev/sdb1               1     29280  29982704    5  Extended
/dev/sdb5               1      6145   6292448   83  Linux
/dev/sdb6            6146      8195   2099184   83  Linux
/dev/sdb7            8196     10245   2099184   83  Linux
/dev/sdb8           10246     12295   2099184   83  Linux
/dev/sdb9           12296     14345   2099184   83  Linux
/dev/sdb10          14346     16395   2099184   83  Linux
/dev/sdb11          16396     18445   2099184   83  Linux
/dev/sdb12          18446     20495   2099184   83  Linux
/dev/sdb13          20496     22545   2099184   83  Linux
/dev/sdb14          22546     24595   2099184   83  Linux
/dev/sdb15          24596     26645   2099184   83  Linux
/dev/sdb16          26646     28695   2099184   83  Linux
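As a quick scripted sanity check (a sketch, assuming both nodes see the same shared disk and allow root SSH), you can count the /dev/sdb partition entries on each node; 13 is the expected number here (the extended partition plus 12 logical partitions):

# Sketch: each node should report 13 sdb partition device files
for host in node1 node2; do
  echo -n "$host: "
  ssh root@$host "ls /dev/sdb[0-9]* | wc -l"
done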
Configuring ASM Disks, OS Kernel & OS Services
5. Configure oracleasm and create the ASM disks for your disk groups.
Perform the steps below on both nodes (node1 & node2):
[root@node1]# oracleasm configure -i
Configuring the Oracle ASM library driver.
Default user to own the driver interface [grid]: grid
Default group to own the driver interface [asmadmin]: asmadmin
Start Oracle ASM library driver on boot (y/n) [y]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Perform these steps only on the first node (node1):
[root@node1]# oracleasm init
Check the oracleasm module status and create the ASM disks:
[root@node1]# lsmod | grep oracle
# oracleasm createdisk MGMTDISK1 /dev/sdb5
# oracleasm createdisk OCRVDDISK1 /dev/sdb6
# oracleasm createdisk OCRVDDISK2 /dev/sdb7
# oracleasm createdisk OCRVDDISK3 /dev/sdb8
# oracleasm createdisk DATADISK1 /dev/sdb9
# oracleasm createdisk DATADISK2 /dev/sdb10
# oracleasm createdisk DATADISK3 /dev/sdb11
# oracleasm createdisk DATADISK4 /dev/sdb12
# oracleasm createdisk DATADISK5 /dev/sdb13
# oracleasm createdisk DATADISK6 /dev/sdb14
# oracleasm createdisk FRADISK1 /dev/sdb15
# oracleasm createdisk FRADISK2 /dev/sdb16
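Because the partition numbers increase in step with the disk names, the same twelve createdisk calls can also be expressed as a loop. This is only a sketch restating the commands above, run as root on node1:

# Sketch: create the twelve ASM disks in a loop (sdb5 through sdb16)
i=5
for disk in MGMTDISK1 OCRVDDISK1 OCRVDDISK2 OCRVDDISK3 \
            DATADISK1 DATADISK2 DATADISK3 DATADISK4 DATADISK5 DATADISK6 \
            FRADISK1 FRADISK2; do
  oracleasm createdisk $disk /dev/sdb$i   # MGMTDISK1 -> sdb5 ... FRADISK2 -> sdb16
  i=$((i+1))
done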
Now perform the steps below on both nodes (node1 & node2):
[root@node1]# oracleasm init
[root@node1]# oracleasm scandisks
[root@node1]# oracleasm listdisks
[root@node2]# oracleasm init
[root@node2]# oracleasm scandisks
[root@node2]# oracleasm listdisks
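The same three commands can also be pushed to both nodes from a single terminal (a sketch, assuming root SSH access to node1 and node2):

for host in node1 node2; do
  echo "== $host =="
  ssh root@$host "oracleasm init; oracleasm scandisks; oracleasm listdisks"
done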
6. Edit the kernel parameters and system configuration.
Add the security limits on both nodes (node1 & node2):
[root@node1]# vi /etc/security/limits.conf
oracle soft nofile  131072
oracle hard nofile  131072
oracle soft nproc   131072
oracle hard nproc   131072
oracle soft core    unlimited
oracle hard core    unlimited
oracle soft memlock 3500000
oracle hard memlock 3500000
grid   soft nofile  131072
grid   hard nofile  131072
grid   soft nproc   131072
grid   hard nproc   131072
grid   soft core    unlimited
grid   hard core    unlimited
grid   soft memlock 3500000
grid   hard memlock 3500000
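Because these limits only apply to new login sessions, you can confirm they took effect by opening a fresh session for each user (a sketch, run as root; nofile, nproc, core, and memlock correspond to the -n, -u, -c, and -l flags):

# Sketch: verify the per-user limits on each node
for user in oracle grid; do
  echo "== $user =="
  su - $user -c 'ulimit -n -u -c -l'
done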
Add the kernel parameters in /etc/sysctl.conf on Node1 & Node2
[root@node1]# vi /etc/sysctl.conf
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Run the following command on both nodes (node1 & node2) to apply the new kernel parameters:
[root@node1]# /sbin/sysctl -p
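You can spot-check that the running kernel picked up the new values by querying a few of them directly (a sketch; any of the parameters set above can be listed):

[root@node1]# sysctl fs.file-max kernel.shmmax kernel.sem net.ipv4.ip_local_port_range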
7. Check and disable NTP, because we want to use the Oracle Cluster Time Synchronization Service for this cluster.
Back up the ntpd configuration file, then stop and disable the NTP service if it is running. Perform the steps below on both nodes (node1 & node2):
# mv /etc/sysconfig/ntpd /etc/sysconfig/ntpd_bkup
# service ntpd stop
# chkconfig --list | grep ntpd
# chkconfig ntpd off
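A scripted equivalent for both nodes (a sketch, assuming root SSH access and that the ntpd file exists on each node) could be:

for host in node1 node2; do
  ssh root@$host "mv /etc/sysconfig/ntpd /etc/sysconfig/ntpd_bkup; service ntpd stop; chkconfig ntpd off"
done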
8. Configure the NSCD service.
As the root user, start the local naming cache daemon on both cluster nodes with the service nscd start command. To make sure nscd starts at reboot, execute the chkconfig nscd on command. Perform these steps on node1 & node2:
[root@node1]# service nscd start
Starting nscd:                                             [  OK  ]
[root@node1]# chkconfig nscd on
[root@node1]# chkconfig --list | grep nscd
nscd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
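As with NTP, the nscd commands can be pushed to both nodes in one pass (a sketch, assuming root SSH access):

for host in node1 node2; do
  ssh root@$host "service nscd start; chkconfig nscd on"
done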
9. Edit the profile file.
Check the entries in /etc/profile on both nodes (node1 & node2):
[root@node1]# cat /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    umask 022
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
10. Verify that the resolv.conf file points to the DNS server 192.168.1.10.
resolv.conf is the file used by various operating systems to configure the system's Domain Name System (DNS) resolver. Check the entries in /etc/resolv.conf on node1 & node2:
[root@node1]# cat /etc/resolv.conf
# Generated by NetworkManager
options attempts:2
options timeout:1
search aks.com
nameserver 192.168.1.10
11. Run the racconfig script and select the DNS option, which will automatically edit the DNS files on the RACHOST server.
[racuser@rachost102]$ racconfig
1) Clean the CLUSTER Configuration
2) Configure DNS or GNS Setup
Select option : 2
Choose your DNS configuration :
1) DNS
2) GNS
3) exit
Select option : 1
Select the exit option in each menu to quit the script, then reboot your RACHOST server:
[racuser@rachost102]$ sudo reboot
12. Verify the network & storage configuration before starting the Grid Infrastructure installation.
Check DNS and the ASM disks from both nodes (node1 & node2):
[root@node1]# nslookup scan-clust1.aks.com
Server:    192.168.1.10
Address:   192.168.1.10#53

Name:      scan-clust1.aks.com
Address:   192.168.1.26
Name:      scan-clust1.aks.com
Address:   192.168.1.27
Name:      scan-clust1.aks.com
Address:   192.168.1.25

[root@node1]# nslookup node1
Server:    192.168.1.10
Address:   192.168.1.10#53

Name:      node1.aks.com
Address:   192.168.1.21

[root@node1]# nslookup node2
Server:    192.168.1.10
Address:   192.168.1.10#53

Name:      node2.aks.com
Address:   192.168.1.22
[root@node1]# oracleasm listdisks
(This should list all of the Oracle ASM disks you created above.)
Overview
In this practice you will install and configure a new Standard Cluster. You will install to two nodes: node1 and node2.
1. On node1, go to the software stage location:
[grid@rac1]$ cd /stage/grid/
[grid@rac1 grid]$ ls
install   rpm           runInstaller  stage
response  runcluvfy.sh  sshsetup      welcome.html
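Before launching the installer, you can optionally run the bundled Cluster Verification Utility from the same stage directory to pre-validate both nodes; this sketch assumes the node names used in this practice:

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose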
2. Execute runInstaller:
[grid@rac1 grid]$ ./runInstaller
3. On the Select Installation Option screen, click Next to accept the default selection (Install and Configure Oracle Grid Infrastructure for a Cluster).
4. On the Select Cluster Type screen, select Configure a Standard Cluster and click Next.
5. Select Advanced Installation and click Next.
6. Verify that “English” already appears in the Selected Languages list and click Next.
7. On the Grid Plug and Play page, enter:
a) Cluster Name: clust1
b) SCAN Name: scan-clust1.aks.com
c) SCAN Port: 1521
d) Uncheck the Configure GNS option, then click Next.
8. On the Cluster Node Information page, add the second node's details and configure SSH connectivity between the nodes:
a) Click the Add button and enter the node2 details: node2.aks.com, node2-vip.aks.com.
b) Click the SSH Connectivity button. Enter “grid” into the OS Password field and click Setup to configure the required SSH connectivity across the cluster. Click OK, then click Next.
9. On the Specify Network Interface Usage page, ensure that interface eth0 is designated as the Public network and eth1 as the Private network. The eth2 interface should be set to “Do Not Use”. Click Next to continue.
10. On the Storage Option page, select “Use Standard ASM for storage” and click Next.
11. On the Create ASM Disk Group screen, click Change Discovery Path, set the ASM Disk Discovery Path to /dev/oracleasm/disks, and click OK.
Enter MGMT as the disk group name, set Redundancy to External, select MGMTDISK1 under Add Disks, and click Next.
12. On the Specify ASM Password page, select “Use same passwords for these accounts”, enter the password “oracle_4U” twice, and click Next.
13. Select “Do not use Intelligent Platform Management Interface (IPMI)” and click Next.
14. On the Enterprise Manager registration page, leave the option at its default (unchecked) and click Next.
15. On the OS Groups page, leave the defaults and click Next.
16. On the Specify Installation Location screen, make sure the Oracle base is /u01/app/grid, change the Software location to /u01/app/12.1.0.2/grid, and click Next.
17. On the Create Inventory screen, accept the default /u01/app/oraInventory location and click Next.
18. On the Root script execution configuration screen, check the “Automatically run configuration scripts” box, select “Use root user credential”, enter “oracle” as the password, and click Next.
19. Wait while a series of prerequisite checks is performed.
20. On the Prerequisite Checks screen, select the “Ignore All” checkbox and click Next.