
Oracle 12c Standard Cluster Installation


This chapter describes the operating system tasks you must complete on your servers before you install Oracle Grid Infrastructure for a cluster and Oracle Real Application Clusters (Oracle RAC). The values provided in this chapter are installation minimum only. Oracle recommends that you configure production systems in accordance with planned system loads.


In this practice you will perform the required pre-installation tasks for Oracle 12c Grid Infrastructure Standard Cluster.


You will perform various tasks that are required before installing Oracle Grid Infrastructure.

1.    Verify Network Configuration

Connect to node1

[oracle@rachost ~]$ ssh root@node1

Check the three network cards available on each node (node1 & node2)


[root@node1 ~]# ifconfig

eth0      Link encap:Ethernet  HWaddr 08:00:27:CA:3C:20

          inet addr:  Bcast:  Mask:

eth1      Link encap:Ethernet  HWaddr 08:00:27:1F:30:9D

          inet addr:  Bcast:  Mask:

eth2      Link encap:Ethernet  HWaddr 08:00:27:ED:5F:BF

          inet addr:  Bcast:  Mask:

Creating Users, Groups and Directories

2.       Create required Groups, Users and Directories on both nodes.  (node1 & node2)

Create OS groups as root user

[root@node1 ~]# groupadd -g 501 oinstall

[root@node1 ~]# groupadd -g 502 dba

[root@node1 ~]# groupadd -g 503 oper

[root@node1 ~]# groupadd -g 504 backupdba

[root@node1 ~]# groupadd -g 505 dgdba

[root@node1 ~]# groupadd -g 506 kmdba

[root@node1 ~]# groupadd -g 507 asmdba

[root@node1 ~]# groupadd -g 508 asmoper

[root@node1 ~]# groupadd -g 509 asmadmin

Check /etc/group to verify that the groups were created.

[root@node1 ~]# cat /etc/group


Create the oracle and grid users & set their passwords

[root@node1 ~]# useradd -u 501 -g oinstall -G dba,oper,asmdba,backupdba,kmdba,dgdba oracle

[root@node1 ~]# useradd -u 502 -g oinstall -G dba,asmoper,asmdba,asmadmin grid

[root@node1 ~]# passwd oracle  (password: oracle)

[root@node1 ~]# passwd grid    (password: grid)


Create required directories as root user

[root@node1 ~]# mkdir -p /u01/app/

[root@node1 ~]# mkdir -p /u01/app/oracle/product/

[root@node1 ~]# chown -R grid:oinstall /u01

[root@node1 ~]# chown -R oracle:oinstall /u01/app/oracle

[root@node1 ~]# chmod -R 775 /u01/

NOTE:  Repeat the above steps on NODE1 & NODE2
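Since the groups, users, and directories must match exactly on node1 and node2, it can help to generate the commands from a single table instead of retyping them. The following is only a sketch (the `emit_groupadds` helper is hypothetical, with the GIDs and names taken from the steps above); review its output before running it as root:

```shell
# Emit the groupadd commands used above, so the same GIDs are applied
# identically on node1 and node2 (pipe the output to sh as root, after review).
emit_groupadds() {
    for pair in 501:oinstall 502:dba 503:oper 504:backupdba 505:dgdba \
                506:kmdba 507:asmdba 508:asmoper 509:asmadmin
    do
        echo "groupadd -g ${pair%%:*} ${pair#*:}"
    done
}
emit_groupadds
```

Running the generator on both nodes guarantees the GID-to-name mapping never drifts between them, which the installer's cluster verification would otherwise flag.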

Partitioning Storage device

3.       Create Partitions for ASM on /dev/sdb device

Perform the below commands on one node only (node1).

Create an extended partition, then create logical partitions inside it: the first partition with a size of 6GB and the remaining eleven partitions with 2GB each.


[root@node1 ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes

/dev/sda1   *           1          64      512000   83  Linux

/dev/sda2              64        5222    41430016   8e  Linux LVM

Disk /dev/sdb: 30.7 GB, 30702305280 bytes

64 heads, 32 sectors/track, 29280 cylinders


Create extended partitions on shared storage disk : /dev/sdb

[root@node1 ~]# fdisk /dev/sdb

Command (m for help): m

Command (m for help): p   (check whether a partition table already exists)

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

e

Partition number (1-4): 1

First cylinder (1-29280, default 1):  (Press Enter)

Using default value 1

Last cylinder, +cylinders or +size{K,M,G} (1-29280, default 29280):  (Press Enter)

Using default value 29280


Command (m for help):p

/dev/sdb1               1        29280    29982704    5  Extended

Command (m for help): n

Command action

   l   logical (5 or over)

   p   primary partition (1-4)

l

First cylinder (1-29280, default 1):

Using default value 1

Last cylinder, +cylinders or +size{K,M,G} (1-29280, default 29280): +6G


Command (m for help): n

Command action

   l   logical (5 or over)

   p   primary partition (1-4)

l

First cylinder (4098-29280, default 4098):

Using default value 4098

Last cylinder, +cylinders or +size{K,M,G} (4098-29280, default 29280): +2G


Command (m for help): n

Command action

   l   logical (5 or over)

   p   primary partition (1-4)

l

First cylinder (8195-29280, default 8195):

Using default value 8195

Last cylinder, +cylinders or +size{K,M,G} (8195-29280, default 29280): +2G


Command (m for help): n

Command action

   l   logical (5 or over)

   p   primary partition (1-4)

l

First cylinder (10244-29280, default 10244):

Using default value 10244

Last cylinder, +cylinders or +size{K,M,G} (10244-29280, default 29280): +2G




  ******  Repeat the above steps until you have created all twelve logical partitions (/dev/sdb5 through /dev/sdb16), the remaining ones each with size +2G, and finally save the partition table with the w command


Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.
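A quick sanity check on the layout: in this fdisk geometry one cylinder is 2048 sectors × 512 bytes = 1 MiB, and one 6GB partition plus eleven 2GB partitions consumes 28GB of the roughly 30GB device, leaving a small tail free. The arithmetic can be verified in the shell:

```shell
# One fdisk cylinder here = 2048 sectors * 512 bytes = 1 MiB
cyl_bytes=$((2048 * 512))
echo "cylinder size: $cyl_bytes bytes"

# A +6G partition therefore spans about 6 * 1024 = 6144 cylinders,
# matching the first logical partition's boundaries in the fdisk session.
echo "6G in cylinders: $((6 * 1024))"

# Total planned usage: 1 x 6 GB + 11 x 2 GB = 28 GB on the ~30 GB disk
echo "planned usage: $((6 + 11 * 2)) GB"
```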

Run the partx command, which updates the kernel with the presence and numbering of on-disk partitions.

Run partx -a on node1 & node2 (run it two or three times, even if you get resource-busy messages).


[root@node1 ~]# partx -a /dev/sdb

BLKPG: Device or resource busy

error adding partition 1

BLKPG: Device or resource busy

error adding partition 5

BLKPG: Device or resource busy


Note: the "resource busy" output above is not a real error, so ignore it.

4.       Verify your partitions created and available on Node1 & Node2

# fdisk -l

Disk /dev/sdb: 30.7 GB, 30702305280 bytes

64 heads, 32 sectors/track, 29280 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x1d16e727


Device Boot         Start         End      Blocks   Id  System

/dev/sdb1               1       29280    29982704    5  Extended

/dev/sdb5               1        6145     6292448   83  Linux

/dev/sdb6            6146        8195     2099184   83  Linux

/dev/sdb7            8196       10245     2099184   83  Linux

/dev/sdb8           10246       12295     2099184   83  Linux

/dev/sdb9           12296       14345     2099184   83  Linux

/dev/sdb10          14346       16395     2099184   83  Linux

/dev/sdb11          16396       18445     2099184   83  Linux

/dev/sdb12          18446       20495     2099184   83  Linux

/dev/sdb13          20496       22545     2099184   83  Linux

/dev/sdb14          22546       24595     2099184   83  Linux

/dev/sdb15          24596       26645     2099184   83  Linux

/dev/sdb16          26646       28695     2099184   83  Linux
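Rather than counting the table rows by eye, the logical partitions can be counted programmatically. The `count_logical` helper below is a hypothetical sketch; in practice you would pipe `fdisk -l /dev/sdb` into it instead of the abbreviated captured sample:

```shell
# Count /dev/sdb logical partitions (sdb5 and above) in fdisk -l output.
count_logical() {
    awk '$1 ~ /^\/dev\/sdb([5-9]|[1-9][0-9])$/ { n++ } END { print n + 0 }'
}

# Abbreviated sample of the table above; the real check is:
#   fdisk -l /dev/sdb | count_logical     (expected result: 12)
count_logical <<'EOF'
/dev/sdb1               1       29280    29982704    5  Extended
/dev/sdb5               1        6145     6292448   83  Linux
/dev/sdb6            6146        8195     2099184   83  Linux
EOF
```

The regular expression deliberately excludes /dev/sdb1 (the extended container), so only the usable logical partitions are counted.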



Configuring ASM disks, OS Kernel & OS Services

5.       Configure Oracleasm and Create ASM Disks for your Diskgroups

Perform the below steps on both nodes (node1 & node2)

[root@node1 ~]# oracleasm configure -i

Configuring the Oracle ASM library driver.

Default user to own the driver interface [grid]: grid

Default group to own the driver interface [asmadmin]: asmadmin

Start Oracle ASM library driver on boot (y/n) [y]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done


Perform these steps only on the first node (node1)

[root@node1 ~]# oracleasm init

Check the oracleasm module status & create the ASM disks.

[root@node1 ~]# lsmod | grep oracle

# oracleasm createdisk MGMTDISK1  /dev/sdb5

# oracleasm createdisk OCRVDDISK1 /dev/sdb6

# oracleasm createdisk OCRVDDISK2 /dev/sdb7

# oracleasm createdisk OCRVDDISK3 /dev/sdb8

# oracleasm createdisk DATADISK1  /dev/sdb9

# oracleasm createdisk DATADISK2  /dev/sdb10

# oracleasm createdisk DATADISK3  /dev/sdb11

# oracleasm createdisk DATADISK4  /dev/sdb12

# oracleasm createdisk DATADISK5  /dev/sdb13

# oracleasm createdisk DATADISK6  /dev/sdb14

# oracleasm createdisk FRADISK1   /dev/sdb15

# oracleasm createdisk FRADISK2   /dev/sdb16
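The disk-name-to-partition mapping above is easy to mistype, so the createdisk commands can also be generated from a single table. The `emit_createdisks` helper is a hypothetical sketch; review the emitted commands before executing them as root on node1:

```shell
# Emit the oracleasm createdisk commands from a single mapping table.
emit_createdisks() {
    for map in MGMTDISK1:sdb5 OCRVDDISK1:sdb6 OCRVDDISK2:sdb7 \
               OCRVDDISK3:sdb8 DATADISK1:sdb9 DATADISK2:sdb10 \
               DATADISK3:sdb11 DATADISK4:sdb12 DATADISK5:sdb13 \
               DATADISK6:sdb14 FRADISK1:sdb15 FRADISK2:sdb16
    do
        echo "oracleasm createdisk ${map%%:*} /dev/${map#*:}"
    done
}
emit_createdisks
```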

Now perform below steps on both the nodes (node1 & node2)

[root@node1 ~]# oracleasm init

[root@node1 ~]# oracleasm scandisks

[root@node1 ~]# oracleasm listdisks

[root@node2 ~]# oracleasm init

[root@node2 ~]# oracleasm scandisks

[root@node2 ~]# oracleasm listdisks

6.       Editing Kernel parameters and System Configuration

Add the security limits on all nodes node1 & node2

[root@node1 ~]# vi /etc/security/limits.conf

oracle  soft    nofile  131072

oracle  hard    nofile  131072

oracle  soft    nproc   131072

oracle  hard    nproc   131072

oracle  soft    core    unlimited

oracle  hard    core    unlimited

oracle  soft    memlock 3500000

oracle  hard    memlock 3500000


grid    soft    nofile  131072

grid    hard    nofile  131072

grid    soft    nproc   131072

grid    hard    nproc   131072

grid    soft    core         unlimited

grid    hard    core         unlimited

grid    soft    memlock      3500000

grid    hard    memlock      3500000
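A missing soft or hard entry in limits.conf is a common cause of prerequisite-check failures later, so the fragment can be checked programmatically. The `has_nofile_limits` helper is a hypothetical sketch, run here against an inline sample of the entries above rather than the live file:

```shell
# Check that a limits.conf-style fragment (stdin) defines both the soft
# and hard nofile limits for the given user; exit status 0 means both exist.
has_nofile_limits() {
    awk -v u="$1" '$1 == u && $3 == "nofile" { seen[$2] = 1 }
        END { exit !(seen["soft"] && seen["hard"]) }'
}

# Illustration against an inline sample; in practice:
#   has_nofile_limits oracle < /etc/security/limits.conf
printf 'oracle soft nofile 131072\noracle hard nofile 131072\n' \
    | has_nofile_limits oracle && echo "oracle nofile limits present"
```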


Add the kernel parameters in /etc/sysctl.conf on Node1 & Node2

[root@node1 ~]# vi /etc/sysctl.conf

fs.file-max = 6815744

kernel.sem = 250 32000 100 128

kernel.shmmni = 4096

kernel.shmall = 1073741824

kernel.shmmax = 4398046511104

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

net.ipv4.ip_local_port_range = 9000 65500


Run the following command on both nodes to apply the new kernel parameters (node1 & node2).

[root@node1 ~]# /sbin/sysctl -p
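The shmall and shmmax values above are related: shmall is expressed in pages (4 KiB on x86_64) while shmmax is in bytes, and here shmall × page size equals shmmax, i.e. both permit a shared memory segment of up to 4 TiB. The relationship can be confirmed with shell arithmetic:

```shell
# Values taken from the sysctl settings above.
page_size=4096            # bytes per page on x86_64 (getconf PAGE_SIZE)
shmall=1073741824         # kernel.shmall, in pages
shmmax=4398046511104      # kernel.shmmax, in bytes

# shmall (pages) times the page size should equal shmmax (bytes): 4 TiB
[ $((shmall * page_size)) -eq "$shmmax" ] && echo "shmall/shmmax consistent"
```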

7.       Check and disable NTP, because we will use the Oracle Cluster Time Synchronization Service for our cluster.

Back up the ntpd configuration file, then stop and disable the NTP service if it is running. Perform the below steps on both nodes (node1 & node2)

# mv /etc/sysconfig/ntpd /etc/sysconfig/ntpd_bkup

# service ntpd stop

# chkconfig --list | grep ntpd

# chkconfig ntpd off

8.       Configure NSCD service

As the root user, start the local naming cache daemon on both cluster nodes with the service nscd start command. To make sure nscd starts at reboot, execute chkconfig nscd on. Perform these steps on node1 & node2

[root@node1 ~]# service nscd start

Starting nscd:                                             [  OK  ]

[root@node1 ~]# chkconfig nscd on

[root@node1 ~]# chkconfig --list | grep nscd

nscd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
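The chkconfig output packs all seven runlevels into one line; what matters here is that nscd is on for the multi-user runlevels 3 and 5. That can be checked with awk — the `nscd_on_for_multiuser` helper is a hypothetical sketch, shown against the captured sample line:

```shell
# Verify that runlevels 3 and 5 are "on" in a chkconfig --list line.
nscd_on_for_multiuser() {
    awk '{ if ($5 == "3:on" && $7 == "5:on") print "yes"; else print "no" }'
}

# The sample line from above; in practice:
#   chkconfig --list | grep nscd | nscd_on_for_multiuser
echo 'nscd  0:off 1:off 2:on 3:on 4:on 5:on 6:off' | nscd_on_for_multiuser
```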

9.       Edit profile file

Check all entries in /etc/profile on all nodes node1 & node2

[root@node1 ~]# cat /etc/profile

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

    if [ $SHELL = "/bin/ksh" ]; then

        ulimit -p 16384

        ulimit -n 65536

    else

        ulimit -u 16384 -n 65536

    fi

    umask 022

fi




10.   Verify resolv.conf file pointing to DNS server

resolv.conf is the file used in various operating systems to configure the system's Domain Name System (DNS) resolver. Check the entries in /etc/resolv.conf on node1 & node2


[root@node1 ~]# cat /etc/resolv.conf

# Generated by NetworkManager

options attempts:2

options timeout:1
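For the cluster to resolve the SCAN and node names, /etc/resolv.conf must also carry a nameserver entry pointing at the DNS server (the address is site-specific and not shown above). A quick grep-based check, sketched here against hypothetical file content:

```shell
# Return success when at least one nameserver entry is present on stdin.
has_nameserver() { grep -q '^nameserver'; }

# Hypothetical resolv.conf content for illustration; in practice:
#   has_nameserver < /etc/resolv.conf
printf 'options attempts:2\noptions timeout:1\nnameserver 192.168.56.1\n' \
    | has_nameserver && echo "nameserver configured"
```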





11.   Run the racconfig script and select the DNS option, which will automatically edit the DNS files on the RACHOST server.

[oracle@rachost ~]$ racconfig

    1) Clean the CLUSTER Configuration

    2) Configure DNS or GNS Setup


       Select option : 2


  Choose your DNS configuration :

         1) DNS

         2) GNS

         3) exit


        Select option : 1


Select the exit option, then reboot your RACHOST server.


[oracle@rachost ~]$ sudo reboot

12.   Verify Network & Storage configuration before starting Grid Infrastructure Installation

Check DNS and ASM disks from both nodes (node1 & node2)

[root@node1 ~]# nslookup node1

[root@node1 ~]# nslookup node2

[root@node1 ~]# oracleasm listdisks

          ** (it should list all of the Oracle ASM disks you created above)
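Finally, the listdisks output on each node should contain exactly the twelve disks created earlier. A sketch that compares the actual listing against the expected set — both helper functions are hypothetical, and in practice you would pipe `oracleasm listdisks` into `check_disks`:

```shell
# Expected ASM disk names, as created in the earlier step.
expected_disks() {
    printf '%s\n' DATADISK1 DATADISK2 DATADISK3 DATADISK4 DATADISK5 \
                  DATADISK6 FRADISK1 FRADISK2 MGMTDISK1 \
                  OCRVDDISK1 OCRVDDISK2 OCRVDDISK3
}

# Compare a listdisks-style listing on stdin against the expected set.
check_disks() {
    actual=$(sort)
    [ "$actual" = "$(expected_disks | sort)" ] && echo OK || echo MISMATCH
}

# In practice: oracleasm listdisks | check_disks
expected_disks | check_disks    # self-check prints OK
```

Running this on both nodes before starting the installer catches a forgotten scandisks early, instead of partway through the Grid Infrastructure install.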



Practice 1-2: Configuring Standard Cluster in 12c


In this practice you will install and configure a new Standard Cluster. You will install to two nodes: node1 and node2.

1.       On node1, go to software stage location

[grid@node1 ~]$ cd /stage/grid/

[grid@node1 grid]$ ls

install   rpm           runInstaller  stage

response  sshsetup      welcome.html

2.       Execute runInstaller

[grid@node1 grid]$ ./runInstaller

3.       On the Select Installation Option screen, click Next to accept the default selection

(Install and Configure Oracle Grid Infrastructure for a Cluster).

4.       On the Select Cluster Type screen, select Configure a Standard Cluster and click Next

5.       Select Advanced installation, click Next

6.       Check that the selected language “English” is already listed, click Next

7.       On Grid Plug and Play Page, enter

a)       Cluster Name : clust1

b)       Scan Name :

c)       Scan Port : 1521

d)       Uncheck Configure GNS option, then click Next


8.       On Cluster Node Information Page : add 2nd node details and configure SSH connectivity between nodes

a)    Click Add button,  enter node2 details :

b)    Click the SSH Connectivity button. Enter “grid” into the OS Password field and click Setup to configure required SSH connectivity across the cluster.

Click OK, and Next button.

9.       Specify Network Interface page

Ensure that network interface eth0 is designated as the Public network and eth1 as the Private network. The eth2 interface should be set to “Do not use”. Click Next to continue.

10.   On Storage Option page,

Select “Use Standard ASM for storage” , click next

11.   On the Create ASM Disk Group screen, click Change Discovery Path to set the ASM disk discovery path to /dev/oracleasm/disks, and click OK.

Enter the disk group name MGMT, set Redundancy to External, under Add Disks select MGMTDISK1, and click Next

12.   On Specify ASM password page, select “Use same passwords for these accounts” and enter “oracle_4U” password twice and click next

13.   Select “Do not use Intelligent Platform Management Interface (IPMI)” and click next

14.   On Enterprise manager register page, Leave it to default “uncheck” and click next

15.   OS Groups page : Leave it to the default and click next

16.   On the Specify Installation Location screen,

      make sure the Oracle base is /u01/app/grid

      change the Software location to /u01/app/

      click next

17.   On the Create Inventory screen:  Accept the default  /u01/app/oraInventory location, click next

18.   On the Root script execution configuration screen:  Check “Automatically run configuration scripts” box, select “Use root user credential” and enter “oracle” as the password, click next.

19.   Wait while a series of prerequisite checks are performed.

20.   On the Prerequisite Checks screen, enable the “Ignore All” checkbox and click Next

Akal Singh
Oracle Certified Master, 20+ yrs exp
Akal Singh stands at the forefront of the fastest-moving technologies in the IT industry. He has spent the past 20 years as an Oracle DBA, with skills in DBA support, high-availability design and implementation, technical solutions, automation using scripting, and database design, as well as working as a corporate trainer. With deep technical industry knowledge, Akal Singh has implemented many real-time projects in advanced database areas.

His certification list includes many OCP, Oracle Certified Expert/Specialist (OCE) and Oracle Certified Master (OCM) credentials. He is an expert in OS administration, virtualization/VMware, and Oracle Database 8i/9i/10g/11g & 12c, RAC, Data Guard, ASM, Oracle Exadata, Oracle performance tuning, GoldenGate, Streams, Oracle security and many more advanced technologies.

Akal Singh is also a recognized senior corporate instructor and has worked with the Oracle University training division, delivering more than 300 corporate trainings on advanced database concepts.

Certifications include :
  • Oracle Certified Professional (OCP) 9i
  • Oracle Certified Professional (OCP) 10g
  • Oracle Certified Professional (OCP) 11g
  • Oracle 10g Certified RAC Expert (OCE)
  • Oracle 11g Certified Expert (RAC) and Grid Infrastructure (OCE)
  • Oracle 10g Certified Master (OCM)
  • Oracle 11g Exadata Certified Implementation Specialist
  • Oracle Database 12c: RAC and Grid Infrastructure Administrator
  • Oracle Exadata X5 Administration
  • Oracle RAC 11g Release 2 and Grid Infrastructure Administration
  • Foundation Certificate in IT Service Management  ( ITIL Certificate ) 

