The Cluster Verification Utility (CVU) performs system checks in preparation for installation, patch updates, or other system changes. Using CVU ensures that you have completed the required system configuration and preinstallation steps so that your Oracle Grid Infrastructure or Oracle Real Application Clusters (Oracle RAC) installation, update, or patch operation completes successfully.
Overview
In this practice, you will work with CLUVFY to verify the state of various cluster components.
1. Determine the location of the cluvfy utility and its configuration file.
[root@node1]# su - grid
[grid@node1]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@node1]$ which cluvfy     (displays the location of the utility)
cluvfy
[grid@node1]$ cd $ORACLE_HOME/cv/admin
[grid@node1 admin]$ cat cvu_config
# Configuration file for Cluster Verification Utility(CVU)
# Version: 011405
#
# NOTE:
# 1. Any line without a '=' will be ignored
# 2. Since the fallback option will look into the environment variables,
#    please have a component prefix(CV_) for each property to define a
#    namespace.
#
#Nodes for the cluster. If CRS home is not installed, this list will be
#picked up when -n all is mentioned in the commandline argument.
#CV_NODE_ALL=
#if enabled, cvuqdisk rpm is required on all nodes
CV_RAW_CHECK_ENABLED=TRUE
# Fallback to this distribution id
#CV_ASSUME_DISTID=OEL5
#Complete file system path of sudo binary file, default is /usr/local/bin/sudo
CV_SUDO_BINARY_LOCATION=/usr/local/bin/sudo
#Complete file system path of pbrun binary file, default is /usr/local/bin/pbrun
CV_PBRUN_BINARY_LOCATION=/usr/local/bin/pbrun
# Whether X-Windows check should be performed for user equivalence with SSH
#CV_XCHK_FOR_SSH_ENABLED=TRUE
# To override SSH location
#ORACLE_SRVM_REMOTESHELL=/usr/bin/ssh
# To override SCP location
#ORACLE_SRVM_REMOTECOPY=/usr/bin/scp
# To override version used by command line parser
CV_ASSUME_CL_VERSION=12.1
# Location of the browser to be used to display HTML report
#CV_DEFAULT_BROWSER_LOCATION=/usr/bin/Mozilla
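As the NOTE in the file explains, CVU falls back to environment variables for any CV_-prefixed property, so a property can be overridden for a single session without editing cvu_config. A minimal sketch (the distribution ID value here is an assumption for illustration, not taken from this practice):
[grid@node1 admin]$ export CV_ASSUME_DISTID=OEL7     # hypothetical value; match your distribution
[grid@node1 admin]$ cluvfy comp sys -n node1 -p crs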
2. Display the stage options and stage names that can be used with the cluvfy utility.
[grid@node1 admin]$ cluvfy stage -list
USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid Stages are:
      -pre cfs      : pre-check for CFS setup
      -pre crsinst  : pre-check for CRS installation
      -pre acfscfg  : pre-check for ACFS Configuration.
      -pre dbinst   : pre-check for database installation
      -pre dbcfg    : pre-check for database configuration
      -pre hacfg    : pre-check for HA configuration
      -pre nodeadd  : pre-check for node addition.
      -post hwos    : post-check for hardware and operating system
      -post cfs     : post-check for CFS setup
      -post crsinst : post-check for CRS installation
      -post acfscfg : post-check for ACFS Configuration.
      -post hacfg   : post-check for HA configuration
      -post nodeadd : post-check for node addition.
      -post nodedel : post-check for node deletion.
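As a sketch of how a stage name is used (not part of this practice), a pre-check for a CRS installation across both nodes might look like the following; the node names assume the same two-node cluster used throughout this practice:
[grid@node1 admin]$ cluvfy stage -pre crsinst -n node1,node2 -verbose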
3. Perform a post-check for the ACFS configuration on all nodes.
[grid@node1 admin]$ cluvfy stage -post acfscfg -n all

Performing post-checks for ACFS Configuration

Checking node reachability...
Node reachability check passed from node "node1"

Checking user equivalence...
User equivalence check passed for user "grid"

Task ASM Integrity check started...
Starting check to see if ASM is running on all cluster nodes...
ASM Running check passed. ASM is running on all specified nodes
Disk Group Check passed. At least one Disk Group configured
Task ASM Integrity check passed...

Task ACFS Integrity check started...
Task ACFS Integrity check passed

UDev attributes check for ACFS started...
UDev attributes check passed for ACFS

Post-check for ACFS Configuration was successful.
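The -verbose option listed in the stage usage above prints per-node detail for each subtask, which is helpful when isolating a failing node. A sketch of the same post-check in verbose mode:
[grid@node1 admin]$ cluvfy stage -post acfscfg -n all -verbose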
4. Display a list of the component names that can be checked with the cluvfy utility.
[grid@node1 admin]$ cluvfy comp -list
USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

Valid Components are:
      nodereach   : checks reachability between nodes
      nodecon     : checks node connectivity
      cfs         : checks CFS integrity
      ssa         : checks shared storage accessibility
      space       : checks space availability
      sys         : checks minimum system requirements
      clu         : checks cluster integrity
      clumgr      : checks cluster manager integrity
      ocr         : checks OCR integrity
      olr         : checks OLR integrity
      ha          : checks HA integrity
      freespace   : checks free space in CRS Home
      crs         : checks CRS integrity
      nodeapp     : checks node applications existence
      admprv      : checks administrative privileges
      peer        : compares properties with peers
      software    : checks software distribution
      acfs        : checks ACFS integrity
      asm         : checks ASM integrity
      gpnp        : checks GPnP integrity
      gns         : checks GNS integrity
      scan        : checks SCAN configuration
      ohasd       : checks OHASD integrity
      clocksync   : checks Clock Synchronization
      vdisk       : checks Voting Disk configuration and UDEV settings
      healthcheck : checks mandatory requirements and/or best practice recommendations
      dhcp        : checks DHCP configuration
      dns         : checks DNS configuration
      baseline    : collect and compare baselines
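Any of these components is checked with the same comp syntax. As a sketch (not part of this practice), a clock synchronization check across all nodes would be:
[grid@node1 admin]$ cluvfy comp clocksync -n all -verbose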
5. Display the syntax usage help for the space component check of the cluvfy utility.
[grid@node1 admin]$ cluvfy comp space -help
USAGE:
cluvfy comp space [-n <node_list>] -l <storage_location> -z <disk_space>{B|K|M|G} [-verbose]

<node_list> is the comma-separated list of non-domain qualified node names on which the test should be conducted. If "all" is specified, then all the nodes in the cluster will be used for verification.
<storage_location> is the storage path.
<disk_space> is the required disk space, in units of bytes(B), kilobytes(K), megabytes(M) or gigabytes(G).

DESCRIPTION:
Checks for free disk space at the location provided by '-l' option on all the nodes in the nodelist. If no '-n' option is given, local node is used for this check.
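Per the DESCRIPTION above, omitting the -n option limits the check to the local node, so a minimal local sketch would be:
[grid@node1 admin]$ cluvfy comp space -l /tmp -z 200M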
6. Using the cluvfy utility, verify that the /tmp directory on each node of the cluster has at least 200 MB of free space. Use verbose output.
[grid@node1 admin]$ cluvfy comp space -n node1,node2 -l /tmp -z 200M -verbose

Verifying space availability

Checking space availability...

Check: Space available on "/tmp"
  Node Name    Available                  Required            Status
  -----------  -------------------------  ------------------  ----------
  node2        15.8294GB (1.6598324E7KB)  200MB (204800.0KB)  passed
  node1        15.6033GB (1.6361288E7KB)  200MB (204800.0KB)  passed
Result: Space availability check passed for "/tmp"

Verification of space availability was successful.
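To extend the idea, the same check can be wrapped in a small shell loop over several storage locations. This sketch assumes the same two-node cluster; /u01 is used here only as an illustrative second location, and the loop stops at the first location that fails the check:
[grid@node1 admin]$ for loc in /tmp /u01; do
>   cluvfy comp space -n node1,node2 -l "$loc" -z 200M || break
> done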