Grid Infrastructure Installation and Upgrade Guide Oracle Solaris
E84127-04
Copyright © 2014, 2019, Oracle and/or its affiliates. All rights reserved.
Contributors: Mark Bauer, Jonathan Creighton, Mark Fuller, Rajesh Dasari, Pallavi Kamath, Donald Graves,
Dharma Sirnapalli, Allan Graves, Barbara Glover, Aneesh Khandelwal, Saar Maoz, Markus Michalewicz, Ian
Cookson, Robert Bart, Lisa Shepherd, James Spiller, Binoy Sukumaran, Preethi Vallam, Neha Avasthy, Peter
Wahl
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify,
license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means.
Reverse engineering, disassembly, or decompilation of this software, unless required by law for
interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software,
any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are
"commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-
specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the
programs, including any operating system, integrated software, any programs installed on the hardware,
and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.
No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron,
the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
Audience xvi
Documentation Accessibility xvi
Set Up Java Access Bridge to Implement Java Accessibility xvii
Related Documentation xvii
Conventions xvii
Installing the Oracle Database Prerequisites Packages for Oracle Solaris 3-3
5 Configuring Networks for Oracle Grid Infrastructure and Oracle
RAC
About Oracle Grid Infrastructure Network Configuration Options 5-2
Understanding Network Addresses 5-2
About the Public IP Address 5-3
About the Private IP Address 5-3
About the Virtual IP Address 5-4
About the Grid Naming Service (GNS) Virtual IP Address 5-4
About the SCAN 5-5
About Shared SCAN 5-5
Network Interface Hardware Minimum Requirements 5-6
Private IP Interface Configuration Requirements 5-7
IPv4 and IPv6 Protocol Requirements 5-8
Oracle Grid Infrastructure IP Name and Address Requirements 5-9
About Oracle Grid Infrastructure Name Resolution Options 5-10
Cluster Name and SCAN Requirements 5-11
IP Name and Address Requirements For Grid Naming Service (GNS) 5-11
IP Name and Address Requirements For Multi-Cluster GNS 5-11
About Multi-Cluster GNS Networks 5-12
Configuring GNS Server Clusters 5-12
Configuring GNS Client Clusters 5-12
Creating and Using a GNS Client Data File 5-13
IP Name and Address Requirements for Manual Configuration of Cluster 5-13
Confirming the DNS Configuration for SCAN 5-15
Broadcast Requirements for Networks Used by Oracle Grid Infrastructure 5-15
Multicast Requirements for Networks Used by Oracle Grid Infrastructure 5-16
Domain Delegation to Grid Naming Service 5-16
Choosing a Subdomain Name for Use with Grid Naming Service 5-16
Configuring DNS for Cluster Domain Delegation to Grid Naming Service 5-17
Configuration Requirements for Oracle Flex Clusters 5-18
Understanding Oracle Flex Clusters 5-18
About Oracle Flex ASM Clusters Networks 5-19
General Requirements for Oracle Flex Cluster Configuration 5-21
Oracle Flex Cluster DHCP-Assigned Virtual IP (VIP) Addresses 5-21
Oracle Flex Cluster Manually-Assigned Addresses 5-22
Grid Naming Service Cluster Configuration Example 5-22
Manual IP Address Configuration Example 5-24
Network Interface Configuration Options 5-25
6 Configuring Users, Groups and Environments for Oracle Grid
Infrastructure and Oracle Database
Creating Groups, Users and Paths for Oracle Grid Infrastructure 6-1
Determining If an Oracle Inventory and Oracle Inventory Group Exist 6-2
Creating the Oracle Inventory Group If an Oracle Inventory Does Not Exist 6-3
About Oracle Installation Owner Accounts 6-3
Restrictions for Oracle Software Installation Owners 6-4
Identifying an Oracle Software Owner User Account 6-5
About the Oracle Base Directory for the grid User 6-5
About the Oracle Home Directory for Oracle Grid Infrastructure Software 6-6
Oracle Installations with Standard and Job Role Separation Groups and Users 6-7
About Oracle Installations with Job Role Separation 6-8
Standard Oracle Database Groups for Database Administrators 6-9
Extended Oracle Database Groups for Job Role Separation 6-9
Creating an ASMSNMP User 6-10
Oracle Automatic Storage Management Groups for Job Role Separation 6-10
Creating Operating System Privileges Groups 6-11
Creating the OSASM Group 6-12
Creating the OSDBA for ASM Group 6-12
Creating the OSOPER for ASM Group 6-12
Creating the OSDBA Group for Database Installations 6-13
Creating an OSOPER Group for Database Installations 6-13
Creating the OSBACKUPDBA Group for Database Installations 6-13
Creating the OSDGDBA Group for Database Installations 6-14
Creating the OSKMDBA Group for Database Installations 6-14
Creating the OSRACDBA Group for Database Installations 6-14
Creating Operating System Oracle Installation User Accounts 6-14
Creating an Oracle Software Owner User 6-15
Modifying Oracle Owner User Groups 6-15
Identifying Existing User and Group IDs 6-16
Creating Identical Database Users and Groups on Other Cluster Nodes 6-16
Example of Creating Role-allocated Groups, Users, and Paths 6-17
Example of Creating Minimal Groups, Users, and Paths 6-21
Configuring Grid Infrastructure Software Owner User Environments 6-22
Environment Requirements for Oracle Software Owners 6-23
Procedure for Configuring Oracle Software Owner Environments 6-23
Checking Resource Limits for Oracle Software Installation Users 6-26
Setting Remote Display and X11 Forwarding Configuration 6-27
Preventing Installation Errors Caused by Terminal Output Commands 6-28
About Using Oracle Solaris Projects 6-29
Enabling Intelligent Platform Management Interface (IPMI) 6-29
Requirements for Enabling IPMI 6-29
Configuring the IPMI Management Network 6-30
Configuring the BMC 6-30
Enabling and Disabling Direct NFS Client Control of NFS 8-22
Enabling Hybrid Columnar Compression on Direct NFS Client 8-23
Creating Member Cluster Manifest File for Oracle Member Clusters 8-23
Configuring Oracle Automatic Storage Management Cluster File System 8-24
Setting Resource Limits for Oracle Clusterware and Associated Databases and
Applications 10-7
About Changes in Default SGA Permissions for Oracle Database 10-8
Using Earlier Oracle Database Releases with Oracle Grid Infrastructure 10-8
General Restrictions for Using Earlier Oracle Database Releases 10-9
Configuring Earlier Release Oracle Database on Oracle ACFS 10-9
Managing Server Pools with Earlier Database Versions 10-10
Making Oracle ASM Available to Earlier Oracle Database Releases 10-11
Using ASMCA to Administer Disk Groups for Earlier Database Releases 10-11
Using the Correct LSNRCTL Commands 10-12
Modifying Oracle Clusterware Binaries After Installation 10-12
Unlocking and Deinstalling the Previous Release Grid Home 11-22
Checking Cluster Health Monitor Repository Size After Upgrading 11-23
Downgrading Oracle Clusterware to an Earlier Release 11-23
Options for Oracle Grid Infrastructure Downgrades 11-25
Restrictions for Oracle Grid Infrastructure Downgrades 11-25
Downgrading Oracle Standalone Cluster to 12c Release 2 (12.2) 11-26
Downgrading Oracle Domain Services Cluster to 12c Release 2 (12.2) 11-27
Downgrading Oracle Member Cluster to 12c Release 2 (12.2) 11-30
Downgrading Oracle Grid Infrastructure to 12c Release 2 (12.2) when Upgrade
Fails 11-32
Downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1) 11-33
Downgrading to Oracle Grid Infrastructure 11g Release 2 (11.2) 11-35
Downgrading Oracle Grid Infrastructure Using Online Abort Upgrade 11-36
Completing Failed or Interrupted Installations and Upgrades 11-37
Completing Failed Installations and Upgrades 11-38
Continuing Incomplete Upgrade of First Nodes 11-39
Continuing Incomplete Upgrades on Remote Nodes 11-39
Continuing Incomplete Installation on First Node 11-39
Continuing Incomplete Installation on Remote Nodes 11-40
Converting to Oracle Extended Cluster After Upgrading Oracle Grid Infrastructure 11-41
Running Configuration Assistants Using Response Files A-7
Running Oracle DBCA Using Response Files A-8
Running Net Configuration Assistant Using Response Files A-9
Postinstallation Configuration Using Response File Created During Installation A-10
Using the Installation Response File for Postinstallation Configuration A-10
Running Postinstallation Configuration Using Response File A-11
Postinstallation Configuration Using the ConfigToolAllCommands Script A-13
About the Postinstallation Configuration File A-13
Creating a Password Response File A-14
Running Postinstallation Configuration Using a Password Response File A-15
Optimal Flexible Architecture File Path Examples D-5
Index
List of Figures
9-1 Oracle Cluster Domain 9-5
List of Tables
1-1 Server Hardware Checklist for Oracle Grid Infrastructure 1-2
1-2 Operating System General Checklist for Oracle Grid Infrastructure on Oracle Solaris 1-3
1-3 Server Configuration Checklist for Oracle Grid Infrastructure 1-3
1-4 Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC 1-5
1-5 User Environment Configuration for Oracle Grid Infrastructure 1-8
1-6 Oracle Grid Infrastructure Storage Configuration Checks 1-9
1-7 Oracle Grid Infrastructure Cluster Deployment Checklist 1-11
1-8 Oracle Universal Installer Checklist for Oracle Grid Infrastructure Installation 1-11
4-1 Oracle Solaris 11 Releases for SPARC (64-Bit) Minimum Operating System
Requirements 4-6
4-2 Oracle Solaris 11 Releases for SPARC (64-Bit) Minimum Operating System
Requirements for Oracle Solaris Cluster 4-7
4-3 Oracle Solaris 10 Releases for SPARC (64-Bit) Minimum Operating System
Requirements 4-7
4-4 Oracle Solaris 10 Releases for SPARC (64-Bit) Minimum Operating System
Requirements for Oracle Solaris Cluster 4-8
4-5 Oracle Solaris 11 Releases for x86-64 (64-Bit) Minimum Operating System Requirements 4-9
4-6 Oracle Solaris 11 Releases for x86-64 (64-Bit) Minimum Operating System
Requirements for Oracle Solaris Cluster 4-10
4-7 Oracle Solaris 10 Releases for x86-64 (64-Bit) Minimum Operating System Requirements 4-10
4-8 Oracle Solaris 10 Releases for x86-64 (64-Bit) Minimum Operating System
Requirements for Oracle Solaris Cluster 4-11
4-9 Requirements for Programming Environments for Oracle Solaris 4-13
5-1 Grid Naming Service Cluster Configuration Example 5-23
5-2 Manual Network Configuration Example 5-24
6-1 Installation Owner Resource Limit Recommended Ranges 6-26
7-1 Supported Storage Options for Oracle Grid Infrastructure 7-2
7-2 Platforms That Support Oracle ACFS and Oracle ADVM 7-4
8-1 Oracle ASM Disk Space Minimum Requirements for Oracle Database 8-7
8-2 Oracle ASM Disk Space Minimum Requirements for Oracle Database (non-CDB) 8-7
8-3 Minimum Available Space Requirements for Oracle Standalone Cluster 8-8
8-4 Minimum Available Space Requirements for Oracle Member Cluster with Local ASM 8-8
8-5 Minimum Available Space Requirements for Oracle Domain Services Cluster 8-9
9-1 Image-Creation Options for Setup Wizard 9-3
9-2 Oracle ASM Disk Group Redundancy Levels for Oracle Extended Clusters with 2
Data Sites 9-7
11-1 Upgrade Checklist for Oracle Grid Infrastructure Installation 11-6
A-1 Response Files for Oracle Database and Oracle Grid Infrastructure A-4
B-1 Minimum Oracle Solaris Resource Control Parameter Settings B-6
B-2 Requirement for Resource Control project.max-shm-memory B-7
B-3 Granule Size for SGA Values B-8
B-4 Oracle Solaris Shell Limit Recommended Ranges B-11
D-1 Examples of OFA-Compliant Oracle Base Directory Names D-4
D-2 Optimal Flexible Architecture Hierarchical File Path Examples D-6
Preface
This guide explains how to configure a server in preparation for installing and
configuring an Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle
Automatic Storage Management).
It also explains how to configure a server and storage in preparation for an Oracle
Real Application Clusters (Oracle RAC) installation.
• Audience
• Documentation Accessibility
• Set Up Java Access Bridge to Implement Java Accessibility
Install Java Access Bridge so that assistive technologies on Microsoft Windows
systems can use the Java Accessibility API.
• Related Documentation
• Conventions
Audience
This guide provides configuration information for network and system administrators,
and database installation information for database administrators (DBAs) who install
and configure Oracle Clusterware and Oracle Automatic Storage Management in an
Oracle Grid Infrastructure for a cluster installation.
This book is intended for system administrators, network administrators, and storage administrators with specialized system roles who intend to install Oracle RAC: they use it to configure a system in preparation for an Oracle Grid Infrastructure for a cluster installation, and to complete all configuration tasks that require operating system root privileges. When Oracle Grid Infrastructure installation and configuration are completed successfully, a system administrator should only need to provide configuration information and to grant access to the database administrator to run scripts as root during an Oracle RAC installation.
This guide assumes that you are familiar with Oracle Database concepts.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at http://www.oracle.com/pls/topic/lookup?
ctx=acc&id=docacc.
Related Documentation
For more information, see the following Oracle resources:
Related Topics
• Oracle Real Application Clusters Installation Guide for Linux and UNIX
• Oracle Database Installation Guide
• Oracle Clusterware Administration and Deployment Guide
• Oracle Real Application Clusters Administration and Deployment Guide
• Oracle Database Concepts
• Oracle Database New Features Guide
• Oracle Database Licensing Information
• Oracle Database Release Notes
• Oracle Database Examples Installation Guide
• Oracle Database Administrator's Reference for Linux and UNIX-Based Operating
Systems
• Oracle Automatic Storage Management Administrator's Guide
• Oracle Database Upgrade Guide
• Oracle Database 2 Day DBA
• Oracle Application Express Installation Guide
Conventions
The following text conventions are used in this document:
Convention Meaning
boldface Boldface type indicates graphical user interface elements associated
with an action, or terms defined in text or the glossary.
italic Italic type indicates book titles, emphasis, or placeholder variables for
which you supply particular values.
monospace Monospace type indicates commands within a paragraph, URLs, code
in examples, text that appears on the screen, or text that you enter.
Changes in This Release for Oracle Grid
Infrastructure
This new release of Oracle Grid Infrastructure provides improvements to the
installation process, performance, and automation.
• Changes in Oracle Grid Infrastructure 18c
New Features
New features for Oracle Grid Infrastructure 18c.
Following are the new features for Oracle Clusterware 18c and Oracle Automatic
Storage Management 18c:
• Using the acfsutil freeze command, you can create point-in-time images across different
snapshots without stopping your applications. Cross-node communication ensures
that all nodes perform a freeze operation. During the freeze, each node stops all
modification operations on the specified file system, flushes user data and
metadata, commits the data to disk, and then acknowledges when the operations
are completed.
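The freeze operation described here can be sketched as a command sequence, run as root against an assumed Oracle ACFS mount point /acfsmounts/data (an illustrative path, not taken from this guide):

```
# Flush and commit user data and metadata, then stop modification
# operations on the file system across all cluster nodes.
acfsutil freeze /acfsmounts/data

# ... capture point-in-time images across snapshots here ...

# Resume normal operation on all nodes.
acfsutil thaw /acfsmounts/data
```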
• Enhancements to Oracle ACFS Diagnostic Commands
Oracle ACFS diagnostic commands, such as acfsutil meta and acfsutil tune,
have been updated to provide improved diagnostic management of Oracle ACFS
system metadata collection and the tuning of Oracle ACFS parameters.
• Converting Normal or High Redundancy Disk Groups to Flex Disk Groups
without Restricted Mount
You can convert a conventional disk group (disk group created before Oracle ASM
release 18c) to an Oracle ASM flex disk group without using the restrictive mount
(MOUNTED RESTRICTED) option.
For more information, see Oracle Automatic Storage Management Administrator's Guide.
Deprecated Features
Deprecated features for Oracle Grid Infrastructure 18c.
The following feature is deprecated in this release, and may be desupported in another
release. See Oracle Database Upgrade Guide for a complete list of deprecated
features in this release.
• Flex Cluster (Hub/Leaf) Architecture
Starting with Oracle Database 18c, Leaf nodes are deprecated as part of Oracle
Flex Cluster architecture.
With continuous improvements in the Oracle Clusterware stack towards providing
shorter reconfiguration times in case of a failure, Leaf nodes are no longer
necessary for implementing clusters that meet customer needs, either for on-
premises, or in the Cloud.
1
Oracle Grid Infrastructure Installation
Checklist
Use checklists to plan and carry out Oracle Grid Infrastructure (Oracle Clusterware
and Oracle Automatic Storage Management) installation.
Oracle recommends that you use checklists as part of your installation planning
process. Using this checklist can help you to confirm that your server hardware and
configuration meet minimum requirements for this release, and to ensure you carry out
a successful installation.
• Server Hardware Checklist for Oracle Grid Infrastructure
Review server hardware requirements for Oracle Grid Infrastructure installation.
• Operating System Checklist for Oracle Grid Infrastructure on Oracle Solaris
Use this checklist to check minimum operating system requirements for Oracle
Database.
• Server Configuration Checklist for Oracle Grid Infrastructure
Use this checklist to check minimum server configuration requirements for Oracle
Grid Infrastructure installations.
• Network Checklist for Oracle Grid Infrastructure
Review this network checklist for Oracle Grid Infrastructure installation to ensure
that you have required hardware, names, and addresses for the cluster.
• User Environment Configuration Checklist for Oracle Grid Infrastructure
Use this checklist to plan operating system users, groups, and environments for
Oracle Grid Infrastructure installation.
• Storage Checklist for Oracle Grid Infrastructure
Review the checklist for storage hardware and configuration requirements for
Oracle Grid Infrastructure installation.
• Cluster Deployment Checklist for Oracle Grid Infrastructure
Review the checklist for planning your cluster deployment Oracle Grid
Infrastructure installation.
• Installer Planning Checklist for Oracle Grid Infrastructure
Review the checklist for planning your Oracle Grid Infrastructure installation before
starting Oracle Universal Installer.
Table 1-1 Server Hardware Checklist for Oracle Grid Infrastructure

Server make and architecture: Confirm that server makes, models, core architecture, and host bus adaptors (HBA) are supported to run with Oracle Grid Infrastructure and Oracle RAC.

Runlevel: 3

Server Display Cards: At least 1024 x 768 display resolution for Oracle Universal Installer. Confirm display monitor.

Minimum Random Access Memory (RAM): At least 8 GB RAM for Oracle Grid Infrastructure installations.

Intelligent Platform Management Interface (IPMI): IPMI cards installed and configured, with IPMI administrator account information available to the person running the installation. Ensure baseboard management controller (BMC) interfaces are configured, and have an administration account username and password to provide when prompted during installation.
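The RAM row above can be checked from the shell. This is a minimal sketch: on a live Oracle Solaris host the memory line would come from prtconf, but here a hardcoded sample value stands in for illustration.

```shell
# Sketch: verify the 8 GB RAM minimum from Table 1-1.
# On a live Oracle Solaris host you would capture this with:
#   prtconf | grep "Memory size"
# The value below is an illustrative sample.
mem_line="Memory size: 16384 Megabytes"

# Extract the third field (the size in megabytes).
mem_mb=$(echo "$mem_line" | awk '{print $3}')

if [ "$mem_mb" -ge 8192 ]; then
  echo "RAM check passed: ${mem_mb} MB"
else
  echo "RAM check FAILED: ${mem_mb} MB is below the 8192 MB minimum"
fi
```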
Table 1-2 Operating System General Checklist for Oracle Grid Infrastructure on
Oracle Solaris
Operating system general requirements:
• OpenSSH installed manually, if you do not have it installed already as part of a default Oracle Solaris installation.
• The following Oracle Solaris on SPARC (64-Bit) kernels are supported:
Table 1-3 Server Configuration Checklist for Oracle Grid Infrastructure

Disk space allocated to the temporary file system: At least 1 GB of space in the temporary disk space (/tmp) directory.

Swap space allocation relative to RAM:
• Between 4 GB and 16 GB: equal to the size of the RAM.
• More than 16 GB: 16 GB.
Note: Configure swap for your expected system loads. This installation guide provides minimum values for installation only. Refer to your Oracle Solaris documentation for additional memory tuning guidance.
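The swap sizing rules in the table can be expressed as a small shell function. This is a sketch of the table's arithmetic only; RAM sizes below 4 GB are outside what this guide documents, so the function flags them rather than guessing.

```shell
# Sketch: minimum swap recommendation from Table 1-3, given RAM in GB.
recommended_swap_gb() {
  ram_gb=$1
  if [ "$ram_gb" -gt 16 ]; then
    echo 16                        # more than 16 GB RAM: 16 GB swap
  elif [ "$ram_gb" -ge 4 ]; then
    echo "$ram_gb"                 # 4 GB to 16 GB RAM: swap equal to RAM
  else
    echo "below documented range"  # not covered by the table
  fi
}

recommended_swap_gb 8    # prints 8
recommended_swap_gb 32   # prints 16
```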
Table 1-3 (Cont.) Server Configuration Checklist for Oracle Grid Infrastructure
Mount point paths for the software binaries: Oracle recommends that you create an Optimal Flexible Architecture configuration as described in the appendix "Optimal Flexible Architecture" in Oracle Grid Infrastructure Installation and Upgrade Guide for your platform.

Ensure that the Oracle home (the Oracle home path you select for Oracle Database) uses only ASCII characters: The ASCII character restriction includes installation owner user names, which are used as a default for some home paths, as well as other directory names you may select for paths.

Set locale (if needed): Specify the language and the territory, or locale, in which you want to use Oracle components. A locale is a linguistic and cultural environment in which a system or program is running. NLS (National Language Support) parameters determine the locale-specific behavior on both servers and clients. The locale setting of a component determines the language of the user interface of the component, and the globalization behavior, such as date and number formatting.

Set Network Time Protocol for Cluster Time Synchronization: Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. Ensure that you set the time zone synchronization across all cluster nodes using either an operating system configured network time protocol (NTP) or Oracle Cluster Time Synchronization Service.
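Because Oracle Clusterware requires the same time zone environment variable setting on every node, a quick consistency check can help before installation. This sketch compares a set of TZ values; on a real cluster you would gather them over ssh (the values shown are illustrative placeholders).

```shell
# Sketch: confirm the TZ setting matches across cluster nodes.
# On a live cluster each line would come from something like:
#   ssh nodeN 'echo $TZ'
tz_values="US/Pacific
US/Pacific
US/Pacific"

# Count distinct values; exactly one means the nodes agree.
unique_count=$(echo "$tz_values" | sort -u | wc -l)

if [ "$unique_count" -eq 1 ]; then
  echo "Time zone consistent across nodes"
else
  echo "Time zone MISMATCH: review TZ on each node"
fi
```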
Related Topics
• Optimal Flexible Architecture
Oracle Optimal Flexible Architecture (OFA) rules are a set of configuration
guidelines created to ensure well-organized Oracle installations, which simplifies
administration, support and maintenance.
Table 1-4 Network Configuration Tasks for Oracle Grid Infrastructure and
Oracle RAC
Public network hardware:
• Public network switch (redundant switches recommended) connected to a public gateway and to the public interface ports for each cluster member node.
• Ethernet interface card (redundant network cards recommended, trunked as one Ethernet port name).
• The switches and network interfaces must be at least 1 GbE.
• The network protocol is Transmission Control Protocol (TCP) and Internet Protocol (IP).

Private network hardware for the interconnect:
• Private dedicated network switches (redundant switches recommended), connected to the private interface ports for each cluster member node.
Note: If you have more than one private network interface card for each server, then Oracle Clusterware automatically associates these interfaces for the private network using Grid Interprocess Communication (GIPC) and Grid Infrastructure Redundant Interconnect, also known as Cluster High Availability IP (HAIP).
• The switches and network interface adapters must be at least 1 GbE.
• The interconnect must support the user datagram protocol (UDP).
• Jumbo Frames (Ethernet frames greater than 1500 bytes) are not an IEEE standard, but can reduce UDP overhead if properly configured. Oracle recommends the use of Jumbo Frames for interconnects. However, be aware that you must load-test your system, and ensure that they are enabled throughout the stack.

Oracle Flex ASM Network Hardware: Oracle Flex ASM can use either the same private networks as Oracle Clusterware, or use its own dedicated private networks. Each network can be classified PUBLIC, PRIVATE+ASM, PRIVATE, or ASM. Oracle ASM networks use the TCP protocol.
Table 1-4 (Cont.) Network Configuration Tasks for Oracle Grid Infrastructure
and Oracle RAC
Cluster Names and Addresses: Determine and configure the following names and addresses for the cluster:
• Cluster name: Decide a name for the cluster, and be prepared to enter it during installation. The cluster name should have the following characteristics:
– Globally unique across all hosts, even across different DNS domains.
– At least one character long and less than or equal to 15 characters long.
– Consist of the same character set used for host names, in accordance with RFC 1123: hyphens (-) and single-byte alphanumeric characters (a to z, A to Z, and 0 to 9). If you use third-party vendor clusterware, then Oracle recommends that you use the vendor cluster name.
• Grid Naming Service Virtual IP Address (GNS VIP): If you plan to use GNS, then configure a GNS name and fixed address in DNS for the GNS VIP, and configure a subdomain on your DNS delegated to the GNS VIP for resolution of cluster addresses. GNS domain delegation is mandatory with dynamic public networks (DHCP, autoconfiguration).
• Single Client Access Name (SCAN) and addresses:
– Using Grid Naming Service resolution: Do not configure SCAN names and addresses in your DNS. SCAN names are managed by GNS.
– Using manual configuration and DNS resolution: Configure a SCAN name to resolve to three addresses on the domain name service (DNS).
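The cluster name rules above (1 to 15 characters; hyphens and single-byte alphanumerics only) can be checked with a small shell function before installation. This is an illustrative sketch, not an Oracle-supplied tool.

```shell
# Sketch: validate a proposed cluster name against the documented rules.
valid_cluster_name() {
  name=$1
  len=${#name}
  if [ "$len" -lt 1 ] || [ "$len" -gt 15 ]; then
    echo invalid                   # wrong length
  elif echo "$name" | grep -Eq '^[A-Za-z0-9-]+$'; then
    echo valid                     # allowed characters only
  else
    echo invalid                   # disallowed character present
  fi
}

valid_cluster_name mycluster01          # prints valid
valid_cluster_name sales_cluster_west1  # prints invalid (underscore, too long)
```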
Table 1-4 (Cont.) Network Configuration Tasks for Oracle Grid Infrastructure
and Oracle RAC
Hub Node Public, Private and Virtual IP names and Addresses: If you are not using GNS, then configure the following for each Hub Node:
• Public node name and address, configured in the DNS and in /etc/hosts (for example, node1.example.com, address 192.0.2.10). The public node name should be the primary host name of each node, which is the name displayed by the hostname command.
• Private node address, configured on the private interface for each node. The private subnet that the private interfaces use must connect all the nodes you intend to have as cluster members. Oracle recommends that the network you select for the private network uses an address range defined as private by RFC 1918.
• Public node virtual IP name and address (for example, node1-vip.example.com, address 192.0.2.11). If you are not using dynamic networks with GNS and subdomain delegation, then determine a virtual host name for each node. A virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle Database uses VIPs for client-to-database connections, so the VIP address must be publicly accessible. Oracle recommends that you provide a name in the format hostname-vip. For example: myclstr2-vip.
If you are using GNS, then you can also configure Leaf Nodes on both public and private networks, during installation. Leaf Nodes on public networks do not use Oracle Clusterware services such as the public network resources and VIPs, or run listeners. After installation, you can configure network resources and listeners for the Leaf Nodes using SRVCTL commands.
Table 1-5 User Environment Configuration for Oracle Grid Infrastructure

Review Oracle Inventory (oraInventory) and OINSTALL Group Requirements: The Oracle Inventory directory is the central inventory of Oracle software installed on your system. The Oracle Inventory group should be the primary group for all Oracle software installation owners. Users who have the Oracle Inventory group as their primary group are granted the OINSTALL privilege to read and write to the central inventory.
• If you have an existing installation, then OUI detects the existing oraInventory directory from the /etc/oraInst.loc file, and uses this location.
• If you are installing Oracle software for the first time, then OUI creates an Oracle base and central inventory, and creates an Oracle inventory using information in the following priority:
– In the path indicated in the ORACLE_BASE environment variable set for the installation owner user account.
– In an Optimal Flexible Architecture (OFA) path (u[01–99]/app/owner, where owner is the name of the user account running the installation), if that user account has permissions to write to that path.
– In the user home directory, in the path /app/owner, where owner is the name of the user account running the installation.
Ensure that the group designated as the OINSTALL group is available as the primary group for all planned Oracle software installation owners.
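The detection step above can be sketched in shell. OUI reads the inventory location from /etc/oraInst.loc; this example parses a sample copy of that file's two standard entries (the path and group shown are illustrative).

```shell
# Sketch: locate the existing central inventory the way OUI does.
# On a real host you would read /etc/oraInst.loc directly; this is
# a sample of its contents for illustration.
sample_orainst="inventory_loc=/u01/app/oraInventory
inst_group=oinstall"

# Extract the inventory path and the OINSTALL group name.
inv_loc=$(echo "$sample_orainst" | sed -n 's/^inventory_loc=//p')
inst_group=$(echo "$sample_orainst" | sed -n 's/^inst_group=//p')

echo "Central inventory: $inv_loc (group: $inst_group)"
```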
Create operating system groups and users for standard or role-allocated system privileges: Create operating system groups and users depending on your security requirements, as described in this installation guide. Set resource limits settings and other requirements for Oracle software installation owners. Group and user names must use only ASCII characters.
Note: Do not delete an existing daemon user. If a daemon user has been deleted, then you must add it back.
Unset Oracle Software Environment Variables: If you have an existing Oracle software installation, and you are using the same user to perform this installation, then unset the following environment variables: $ORACLE_HOME, $ORA_NLS10, and $TNS_ADMIN.
If you have set $ORA_CRS_HOME as an environment variable, then unset it before starting an installation or upgrade. Do not use $ORA_CRS_HOME as a user environment variable, except as directed by Oracle Support.
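The variables named above can be cleared in the installation owner's shell before launching the installer; a minimal sketch:

```shell
# Unset Oracle environment variables from any earlier installation
# before starting a new installation or upgrade as the same user.
unset ORACLE_HOME ORA_NLS10 TNS_ADMIN ORA_CRS_HOME
# Confirm that each variable is now empty:
for v in ORACLE_HOME ORA_NLS10 TNS_ADMIN ORA_CRS_HOME; do
  eval "val=\${$v:-}"
  [ -z "$val" ] && echo "$v is unset"
done
```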
Chapter 1
Storage Checklist for Oracle Grid Infrastructure
Configure the Oracle Software Owner Environment: Configure the environment of the oracle or grid user by performing the following tasks:
• Set the default file mode creation mask (umask) to 022 in the shell startup file.
• Set the DISPLAY environment variable.
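A sketch of the corresponding lines in the owner's shell startup file (for example, ~/.profile); the DISPLAY host value is a placeholder:

```shell
# Shell startup additions for the oracle or grid user: restrictive
# default file mode mask, plus DISPLAY for running OUI over X11.
umask 022
DISPLAY=workstation.example.com:0.0   # placeholder display host
export DISPLAY
umask   # prints 0022 (or 022, depending on the shell)
```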
Determine root privilege delegation option for installation: During installation, you are asked to run configuration scripts as the root user. You can either run these scripts manually as root when prompted, or provide configuration information and passwords during installation to use a root privilege delegation option.
To run root scripts automatically, select Automatically run configuration scripts during installation. To use the automatic configuration option, the root user credentials for all cluster member nodes must use the same password.
• Use root user credentials
Provide the superuser password for cluster member node
servers.
• Use sudo
sudo is a UNIX and Linux utility that allows members of the sudoers list to run individual commands as root. Provide the user name and password of an operating system user that is a member of sudoers, and is authorized to run sudo on each cluster member node.
To enable sudo, have a system administrator with the
appropriate privileges configure a user that is a member of
the sudoers list, and provide the user name and
password when prompted during installation.
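To illustrate the sudo option, a system administrator might add an entry like the following for the installation owner. The user name grid is an example; on a real node, add the entry with visudo and validate it with visudo -c:

```shell
# Hypothetical sudoers entry (added with visudo) that lets an example
# 'grid' user run any command as root on all hosts.
entry='grid ALL=(root) ALL'
echo "$entry"   # prints the sudoers line that would be added
```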
Minimum disk space (local or shared) for Oracle Grid Infrastructure Software:
• At least 12 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home). Oracle recommends that you allocate 100 GB to allow additional space for patches.
• At least 10 GB for Oracle Database Enterprise Edition.
• Allocate additional storage space as per your cluster configuration, as described in Oracle Clusterware Storage Space Requirements.
Select Oracle ASM Storage Options: During installation, based on the cluster configuration, you are asked to provide Oracle ASM storage paths for the Oracle Clusterware files. These path locations must be writable by the Oracle Grid Infrastructure installation owner (Grid user). These locations must be shared across all nodes of the cluster on Oracle ASM because the files in the Oracle ASM disk group created during installation must be available to all cluster member nodes.
• For Oracle Standalone Cluster deployment, shared storage, either
Oracle ASM or Oracle ASM on NFS, is locally mounted on each of
the Hub Nodes.
• For Oracle Domain Services Cluster deployment, Oracle
ASM storage is shared across all nodes, and is available to
Oracle Member Clusters.
Related Topics
• Oracle Clusterware Storage Space Requirements
Use this information to determine the minimum number of disks and the minimum
disk space requirements based on the redundancy type, for installing Oracle
Clusterware files, and installing the starter database, for various Oracle Cluster
deployments.
Chapter 1
Cluster Deployment Checklist for Oracle Grid Infrastructure
Configure an Oracle Cluster that hosts all Oracle Grid Infrastructure services and Oracle ASM locally and accesses storage directly: Deploy an Oracle Standalone Cluster.
Use the Oracle Extended Cluster option to extend an Oracle RAC cluster across two or more separate sites, each equipped with its own storage.
Configure an Oracle Cluster Domain to standardize, centralize, and optimize your Oracle Real Application Clusters (Oracle RAC) deployment: Deploy an Oracle Domain Services Cluster.
To run Oracle Real Application Clusters (Oracle RAC) or Oracle RAC One Node database instances, deploy Oracle Member Cluster for Oracle Databases.
To run highly-available software applications, deploy Oracle Member Cluster for Applications.
Table 1-8 Oracle Universal Installer Checklist for Oracle Grid Infrastructure Installation
Read the Release Notes: Review release notes for your platform, which are available for your release at the following URL:
http://www.oracle.com/technetwork/indexes/documentation/index.html
Review the Licensing Information: You are permitted to use only those components in the Oracle Database media pack for which you have purchased licenses. For more information, see:
Oracle Database Licensing Information User Manual
Run OUI with CVU and use fixup scripts: Oracle Universal Installer is fully integrated with Cluster Verification Utility (CVU), automating many CVU prerequisite checks. Oracle Universal Installer runs all prerequisite checks and creates fixup scripts when you run the installer. You can run OUI up to the Summary screen without starting the installation.
You can also run CVU commands manually to check system
readiness. For more information, see:
Oracle Clusterware Administration and Deployment Guide
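As one hedged example of a manual readiness check, CVU's pre-installation stage check can be run from the Grid home. The node names are placeholders, and the guard simply skips the check where cluvfy is not on the PATH:

```shell
# Run a CVU pre-installation check for Oracle Clusterware on two
# hypothetical nodes; no-op where cluvfy is unavailable.
if command -v cluvfy >/dev/null 2>&1; then
  cluvfy stage -pre crsinst -n node1,node2 -verbose
else
  echo "cluvfy not on PATH: run this from the Grid home on a cluster node"
fi
```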
Chapter 1
Installer Planning Checklist for Oracle Grid Infrastructure
Download and run Oracle ORAchk for runtime and upgrade checks, or runtime health checks: The Oracle ORAchk utility provides system checks that can help to prevent issues after installation. These checks include kernel requirements, operating system resource allocations, and other system requirements.
Use the Oracle ORAchk Upgrade Readiness Assessment to
obtain an automated upgrade-specific system health check for
upgrades. For example:
./orachk -u -o pre
The Oracle ORAchk Upgrade Readiness Assessment
automates many of the manual pre- and post-upgrade checks
described in Oracle upgrade documentation.
Oracle ORAchk is supported on Windows platforms in a
Cygwin environment only. For more information, see:
https://support.oracle.com/rs?type=doc&id=1268927.1
Ensure cron jobs do not run during installation: If the installer is running when daily cron jobs start, then you may encounter unexplained installation problems if your cron job is performing cleanup, and temporary files are deleted before the installation is finished. Oracle recommends that you complete installation before daily cron jobs are run, or disable daily cron jobs that perform cleanup until after the installation is completed.
Obtain Your My Oracle Support account information: During installation, you require a My Oracle Support user name and password to configure security updates, download software updates, and perform other installation tasks. You can register for My Oracle Support at the following URL:
https://support.oracle.com/
Check running Oracle processes, and shut down processes if necessary:
• On a node with a standalone database not using Oracle ASM: You do not need to shut down the database while you install Oracle Grid Infrastructure.
• On a node with a standalone Oracle Database using
Oracle ASM: Stop the existing Oracle ASM instances. The
Oracle ASM instances are restarted during installation.
• On an Oracle RAC Database node: This installation
requires an upgrade of Oracle Clusterware, as Oracle
Clusterware is required to run Oracle RAC. As part of the
upgrade, you must shut down the database one node at a
time as the rolling upgrade proceeds from node to node.
2
Checking and Configuring Server
Hardware for Oracle Grid Infrastructure
Verify that servers where you install Oracle Grid Infrastructure meet the minimum
requirements for installation.
This section provides minimum server requirements to complete installation of Oracle
Grid Infrastructure. It does not provide system resource guidelines, or other tuning
guidelines for particular workloads.
• Logging In to a Remote System Using X Window System
Use this procedure to run Oracle Universal Installer (OUI) by logging on to a
remote system where the runtime setting prohibits logging in directly to a graphical
user interface (GUI).
• Checking Server Hardware and Memory Configuration
Use this procedure to gather information about your server configuration.
Note:
If you log in as another user (for example, oracle or grid), then repeat this
procedure for that user as well.
1. Start an X Window System session. If you are using an X Window System terminal
emulator from a PC or similar system, then you may need to configure security
settings to permit remote hosts to display X applications on your local system.
2. Enter a command using the following syntax to enable remote hosts to display X
applications on the local X server:
# xhost + RemoteHost
where RemoteHost is the fully qualified remote host name. For example:
# xhost + somehost.example.com
somehost.example.com being added to the access control list
Chapter 2
Checking Server Hardware and Memory Configuration
3. If you are not installing the software on the local system, then use the ssh
command to connect to the system where you want to install the software:
# ssh -Y RemoteHost
RemoteHost is the fully qualified remote host name. The -Y flag ("yes") enables
remote X11 clients to have full access to the original X11 display. For example:
# ssh -Y somehost.example.com
4. If you are not logged in as the root user, and you are performing configuration
steps that require root user privileges, then switch the user to root.
Note:
For more information about remote login using X Window System, refer to
your X server documentation, or contact your X server vendor or system
administrator. Depending on the X server software that you are using, you
may have to complete the tasks in a different order.
1. To determine the physical RAM size, use the sar command to view memory statistics, where n is the sampling interval in seconds and i is the number of reports:
# sar -r n i
For example:
# sar -r 2 10
If the size of the physical RAM installed in the system is less than the required
size, then you must install more memory before continuing.
2. Determine the swap space usage and size of the configured swap space:
# /usr/sbin/swap -s
If necessary, see your operating system documentation for information about how
to configure additional swap space.
3. Determine the amount of space available in the /tmp directory:
# df -kh /tmp
If the free space available in the /tmp directory is less than what is required, then
complete one of the following steps:
• Delete unnecessary files from the /tmp directory to meet the disk space
requirement.
• When you set the Oracle user's environment, also set the TMP and TMPDIR
environment variables to the directory you want to use instead of /tmp.
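The second option can be sketched as follows; the /mount_point/tmp path is a hypothetical example, and the directory must exist and be writable by the installation owner:

```shell
# Point Oracle temporary files at an alternative directory instead of
# /tmp; /mount_point/tmp is a placeholder path for illustration.
TMP=/mount_point/tmp
TMPDIR="$TMP"
export TMP TMPDIR
echo "$TMP"   # prints /mount_point/tmp
```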
4. Determine the amount of free disk space on the system:
# df -kh
5. Determine whether the system architecture can run the software:
# /bin/isainfo -kv
If you do not see the expected output, then you cannot install the software on this
system.
3
Automatically Configuring Oracle Solaris
with Oracle Database Prerequisites
Packages
Use the Oracle Database prerequisites group package to simplify Oracle Solaris
operating system configuration in preparation for Oracle software installations.
Oracle recommends that you install the Oracle Database prerequisites group package
oracle-rdbms-server-18c-preinstall in preparation for Oracle Database and Oracle
Grid Infrastructure installations.
• About the Oracle Database Prerequisites Packages for Oracle Solaris
Use the Oracle Database prerequisites group package to simplify operating
system configuration and to ensure that you have the required packages.
• Checking the Oracle Database Prerequisites Packages Installation
Use this procedure to gather information about the Oracle Database prerequisites
group package configuration.
• Installing the Oracle Database Prerequisites Packages for Oracle Solaris
Use this procedure to install the Oracle Database prerequisites group package for
your Oracle software.
Configuring a server using Oracle Solaris and the Oracle Database prerequisites
group package consists of the following steps:
1. Install the recommended Oracle Solaris version for Oracle Database.
2. Install the Oracle Database prerequisites group package oracle-rdbms-
server-18c-preinstall.
Chapter 3
Checking the Oracle Database Prerequisites Packages Installation
Note:
Use the -n option to check for installation errors. If -n does not
display any errors, then omit the -n option when you install oracle-
rdbms-server-18c-preinstall.
b. If there are no errors, then log in as root, and install the group package:
# pkg install oracle-rdbms-server-18c-preinstall
TYPE FMRI
group x11/diagnostic/x11-info-clients
Chapter 3
Installing the Oracle Database Prerequisites Packages for Oracle Solaris
group x11/library/libxi
group x11/library/libxtst
group x11/session/xauth
require compress/unzip
require developer/assembler
require developer/build/make
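The check-then-install sequence described in this chapter can be sketched as follows. It is intended to run as root on the Oracle Solaris target host; the guard makes the sketch a no-op where the IPS pkg(1) command is absent:

```shell
# Dry-run the group package install with -n, then, if the report shows
# no errors, install for real; no-op where pkg(1) is unavailable.
if command -v pkg >/dev/null 2>&1; then
  pkg install -n oracle-rdbms-server-18c-preinstall  # report actions only
  pkg install oracle-rdbms-server-18c-preinstall     # perform the install
else
  echo "pkg(1) not found: run these commands on the Oracle Solaris host"
fi
```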
Related Topics
• Adding and Updating Software in Oracle Solaris
Note:
• You do not have to specify the entire package name, only the trailing
portion of the name that is unique. See pkg(5).
• Oracle recommends that you install the solaris-minimal-server group
package and then install oracle-rdbms-server-18c-preinstall.
Related Topics
• Oracle Solaris Documentation
4
Configuring Oracle Solaris Operating
Systems for Oracle Grid Infrastructure
Complete operating system configuration requirements and checks for Oracle Solaris
operating systems before you start installation.
• Guidelines for Oracle Solaris Operating System Installation
Decide how you want to install Oracle Solaris.
• Reviewing Operating System and Software Upgrade Best Practices
These topics provide general planning guidelines and platform-specific information
about upgrades and migration.
• Reviewing Operating System Security Common Practices
Secure operating systems are an important basis for general system security.
• About Installation Fixup Scripts
Oracle Universal Installer detects when the minimum requirements for an
installation are not met, and creates shell scripts, called fixup scripts, to finish
incomplete system configuration steps.
• About Operating System Requirements
Depending on the products that you intend to install, verify that you have the
required operating system kernel and packages installed.
• Operating System Requirements for Oracle Solaris on SPARC (64-Bit)
The kernels and packages listed in this section are supported for this release on SPARC 64-bit systems for Oracle Database and Oracle Grid Infrastructure 18c.
• Operating System Requirements for Oracle Solaris on x86–64 (64-Bit)
The kernels and packages listed in this section are supported for this release on x86–64 (64-bit) systems for Oracle Database and Oracle Grid Infrastructure 18c.
• Additional Drivers and Software Packages for Oracle Solaris
Information about optional drivers and software packages.
• Checking the Software Requirements for Oracle Solaris
Check the software requirements of your Oracle Solaris operating system to see if
they meet minimum requirements for installation.
• About Oracle Solaris Cluster Configuration on SPARC
Review the following information if you are installing Oracle Grid Infrastructure on
SPARC processor servers.
• Running the rootpre.sh Script on x86 with Oracle Solaris Cluster
On x86 (64-bit) platforms running Oracle Solaris, if you install Oracle Solaris
Cluster in addition to Oracle Clusterware, then complete the following task.
• Enabling the Name Service Cache Daemon
To allow Oracle Clusterware to better tolerate network failures with NAS devices
or NFS mounts, enable the Name Service Cache Daemon (nscd).
• Setting Network Time Protocol for Cluster Time Synchronization
Oracle Clusterware requires the same time zone environment variable setting on
all cluster nodes.
Chapter 4
Guidelines for Oracle Solaris Operating System Installation
Caution:
Always create a backup of existing databases before starting any
configuration change.
Chapter 4
Reviewing Operating System and Software Upgrade Best Practices
Refer to Oracle Database Upgrade Guide for more information about required
software updates, pre-upgrade tasks, post-upgrade tasks, compatibility, and
interoperability between different releases.
Related Topics
• Oracle Database Upgrade Guide
Note:
Confirm that the server operating system is supported, and that kernel and
package requirements for the operating system meet or exceed the minimum
requirements for the Oracle Database release to which you want to migrate.
Manual, Command-Line Copy for Migrating Data and Upgrading Oracle Database
You can copy files to the new server and upgrade it manually. If you use this
procedure, then you cannot use Oracle Database Upgrade Assistant. However, you
can revert to your existing database if you encounter upgrade issues.
1. Copy the database files from the computer running the previous operating system
to the one running the new operating system.
2. Re-create the control files on the computer running the new operating system.
3. Manually upgrade the database using command-line scripts and utilities.
See Also:
Oracle Database Upgrade Guide to review the procedure for upgrading the
database manually, and to evaluate the risks and benefits of this option
Chapter 4
Reviewing Operating System Security Common Practices
See Also:
Oracle Database Upgrade Guide to review the Export/Import method for
migrating data and upgrading Oracle Database
Chapter 4
About Operating System Requirements
Note:
Using fixup scripts does not ensure that all the prerequisites for installing
Oracle Database are met. You must still verify that all the preinstallation
requirements are met to ensure a successful installation.
Oracle Universal Installer is fully integrated with Cluster Verification Utility (CVU), automating many prerequisite checks for your Oracle Grid Infrastructure or Oracle Real Application Clusters (Oracle RAC) installation. You can also manually perform various CVU verifications by running the cluvfy command.
Related Topics
• Oracle Clusterware Administration and Deployment Guide
Note:
Oracle does not support running different operating system versions on
cluster members, unless an operating system is being upgraded. You cannot
run different operating system version binaries on members of the same
cluster, even if each operating system is supported.
Chapter 4
Operating System Requirements for Oracle Solaris on SPARC (64-Bit)
The platform-specific hardware and software requirements included in this guide were
current when this guide was published. However, because new platforms and
operating system software versions might be certified after this guide is published,
review the certification matrix on the My Oracle Support website for the most up-to-
date list of certified hardware platforms and operating system versions:
https://support.oracle.com/
Identify the requirements for your Oracle Solaris on SPARC (64–bit) system, and
ensure that you have a supported kernel and required packages installed before
starting installation.
• Supported Oracle Solaris 11 Releases for SPARC (64-Bit)
Check the supported Oracle Solaris 11 distributions and other operating system
requirements.
• Supported Oracle Solaris 10 Releases for SPARC (64-Bit)
Check the supported Oracle Solaris 10 distributions and other operating system
requirements.
Related Topics
• Additional Drivers and Software Packages for Oracle Solaris
Information about optional drivers and software packages.
Table 4-1 Oracle Solaris 11 Releases for SPARC (64-Bit) Minimum Operating System Requirements
SSH Requirement: Secure Shell is configured at installation for Oracle Solaris.
Oracle Solaris 11 operating system: Oracle Solaris 11.4 (Oracle Solaris 11.4.0.0.1.15.0) or later SRUs; Oracle Solaris 11.3 SRU 7.6 (Oracle Solaris 11.3.7.6.0) or later SRUs; Oracle Solaris 11.2 SRU 5.5 (Oracle Solaris 11.2.5.5.0) or later SRUs
Packages for Oracle Solaris 11: The following packages must be installed:
pkg://solaris/system/library/openmp
pkg://solaris/compress/unzip
pkg://solaris/developer/assembler
pkg://solaris/developer/build/make
pkg://solaris/system/dtrace
pkg://solaris/system/header
pkg://solaris/system/kernel/oracka (Only for Oracle Real
Application Clusters installations)
pkg://solaris/system/library
pkg://solaris/system/linker
pkg://solaris/system/xopen/xcu4 (If not already installed as part
of standard Oracle Solaris 11 installation)
pkg://solaris/x11/diagnostic/x11-info-clients
Note: Starting with Oracle Solaris 11.2, if you have performed a
standard Oracle Solaris 11 installation, and installed the Oracle
Database prerequisites group package oracle-rdbms-server-18c-
preinstall, then you do not have to install these packages, as
oracle-rdbms-server-18c-preinstall installs them for you.
Table 4-2 Oracle Solaris 11 Releases for SPARC (64-Bit) Minimum Operating System Requirements for Oracle Solaris Cluster
Oracle Solaris Cluster for Oracle Solaris 11: Oracle Solaris Cluster 4.0
Note: This requirement is optional and applies only if you are using Oracle Solaris Cluster.
Table 4-3 Oracle Solaris 10 Releases for SPARC (64-Bit) Minimum Operating System Requirements
SSH Requirement: Secure Shell is configured at installation for Oracle Solaris.
Oracle Solaris 10 operating system: Oracle Solaris 10 Update 11 (Oracle Solaris 10 1/13 s10s_u11wos_24a) or later updates
Packages for Oracle Solaris 10: The following packages and patches (or later versions) must be installed:
SUNWdtrc
SUNWeu8os
SUNWi1cs (ISO8859-1)
SUNWi15cs (ISO8859-15)
118683-13
119963-33
120753-14
147440-25
Note: You may also require additional font packages for Java,
depending on your locale. Refer to the following URL:
http://www.oracle.com/technetwork/java/javase/solaris-font-requirements-142758.html
Table 4-4 Oracle Solaris 10 Releases for SPARC (64-Bit) Minimum Operating System Requirements for Oracle Solaris Cluster
Oracle Solaris Cluster for Oracle Solaris 10: Oracle Solaris Cluster 3.2 Update 2
Note: This requirement is optional and applies only if you are using Oracle Solaris Cluster.
Patches for Oracle Solaris 10: The following patches (or later versions) must be installed:
125508-08
125514-05
125992-04
126047-11
126095-05
126106-33
udlm 3.3.4.10
QFS 4.6
Chapter 4
Operating System Requirements for Oracle Solaris on x86–64 (64-Bit)
The platform-specific hardware and software requirements included in this guide were current when this guide was published. However, because new platforms and operating system software versions might be certified after this guide is published,
review the certification matrix on the My Oracle Support website for the most up-to-
date list of certified hardware platforms and operating system versions:
https://support.oracle.com/
Identify the requirements for your Oracle Solaris on x86–64 (64–bit) system, and
ensure that you have a supported kernel and required packages installed before
starting installation.
• Supported Oracle Solaris 11 Releases for x86-64 (64-Bit)
Check the supported Oracle Solaris 11 distributions and other operating system
requirements.
• Supported Oracle Solaris 10 Releases for x86-64 (64-Bit)
Check the supported Oracle Solaris 10 distributions and other operating system
requirements.
Table 4-5 Oracle Solaris 11 Releases for x86-64 (64-Bit) Minimum Operating System Requirements
SSH Requirement: Secure Shell is configured at installation for Oracle Solaris.
Oracle Solaris 11 operating system: Oracle Solaris 11.4 (Oracle Solaris 11.4.0.0.1.15.0) or later SRUs; Oracle Solaris 11.3 SRU 7.6 (Oracle Solaris 11.3.7.6.0) or later SRUs; Oracle Solaris 11.2 SRU 5.5 (Oracle Solaris 11.2.5.5.0) or later SRUs
Packages for Oracle Solaris 11: The following packages must be installed:
pkg://solaris/system/library/openmp
pkg://solaris/compress/unzip
pkg://solaris/developer/assembler
pkg://solaris/developer/build/make
pkg://solaris/system/dtrace
pkg://solaris/system/header
pkg://solaris/system/kernel/oracka (Only for Oracle Real
Application Clusters installations)
pkg://solaris/system/library
pkg://solaris/system/linker
pkg://solaris/system/xopen/xcu4 (If not already installed as part
of standard Oracle Solaris 11 installation)
pkg://solaris/x11/diagnostic/x11-info-clients
Note: Starting with Oracle Solaris 11.2, if you have performed a
standard Oracle Solaris 11 installation, and installed the Oracle
Database prerequisites group package oracle-rdbms-server-18c-
preinstall, then you do not have to install these packages, as
oracle-rdbms-server-18c-preinstall installs them for you.
Table 4-6 Oracle Solaris 11 Releases for x86-64 (64-Bit) Minimum Operating System Requirements for Oracle Solaris Cluster
Oracle Solaris Cluster for Oracle Solaris 11: Oracle Solaris Cluster 4.0
Note: This requirement is optional and applies only if you are using Oracle Solaris Cluster.
Table 4-7 Oracle Solaris 10 Releases for x86-64 (64-Bit) Minimum Operating System Requirements
SSH Requirement: Secure Shell is configured at installation for Oracle Solaris.
Oracle Solaris 10 operating system: Oracle Solaris 10 Update 11 (Oracle Solaris 10 1/13 s10x_u11wos_24a) or later updates
Packages for Oracle Solaris 10: The following packages (or later versions) must be installed:
SUNWdtrc
SUNWeu8os
SUNWi1cs (ISO8859-1)
SUNWi15cs (ISO8859-15)
119961-12
119961-14
119964-33
120754-14
147441-25
148889-02
Note: You may also require additional font packages for Java,
depending on your locale. Refer to the following URL:
http://www.oracle.com/technetwork/java/javase/solaris-font-requirements-142758.html
Table 4-8 Oracle Solaris 10 Releases for x86-64 (64-Bit) Minimum Operating System Requirements for Oracle Solaris Cluster
Oracle Solaris Cluster for Oracle Solaris 10: Oracle Solaris Cluster 3.2 Update 2
Note: This requirement is optional and applies only if you are using Oracle Solaris Cluster.
Patches for Oracle Solaris 10: The following patches (or later versions) must be installed:
125509-10
125515-05
125993-04
126048-11
126096-04
126107-33
QFS 4.6
Chapter 4
Additional Drivers and Software Packages for Oracle Solaris
Note:
Oracle Messaging Gateway does not support the integration of Advanced
Queuing with TIBCO Rendezvous on IBM: Linux on System z.
Related Topics
• Oracle Database Advanced Queuing User's Guide
unixODBC-2.3.4 or later
Note: Starting with Oracle Database 12c Release 2 (12.2), JDK 8 (32-bit) is not supported on Oracle Solaris. Features that use Java (32-bit) are not available on Oracle Solaris.
Oracle C++, Oracle C++ Call Interface, Pro*C/C++, Oracle XML Developer's Kit (XDK): Oracle Solaris Studio 12.4 (formerly Sun Studio), PSE 4/15/2015, with patches 124863-12 (C++ 5.9 compiler) and 124864-12 (C++ 5.9 compiler).
Download Oracle Solaris Studio from the following URL:
http://www.oracle.com/technetwork/server-storage/developerstudio/overview/index.html
Chapter 4
Checking the Software Requirements for Oracle Solaris
Note:
Additional patches may be needed depending on applications you deploy.
1. To determine the operating system version:
$ uname -r
5.11
In this example, the version shown is Oracle Solaris 11 (5.11). If necessary, refer
to your operating system documentation for information about upgrading the
operating system.
2. To determine the release level:
$ cat /etc/release
In this example, the release level shown is Oracle Solaris 11.1 SPARC.
3. To determine detailed information about the operating system version such as
update level, SRU, and build:
a. On Oracle Solaris 10
$ /usr/bin/pkginfo -l SUNWsolnm
b. On Oracle Solaris 11
$ pkg info entire
Note:
More recent versions of the listed packages may already be installed on the system. If a listed patch is not installed, then determine whether a more recent version is installed before installing the version listed. Refer to your operating system documentation for information about installing packages.
Related Topics
• The Adding and Updating Oracle Solaris Software Packages guide
• Oracle Solaris 11 Product Documentation
• My Oracle Support note 1021281.1
Chapter 4
About Oracle Solaris Cluster Configuration on SPARC
Note:
• Your system may have more recent versions of the listed patches
installed. If a listed patch is not installed, then determine if a more recent
version is installed before installing the version listed.
• If an operating system patch is not installed, then download and install it
from My Oracle Support.
Chapter 4
Running the rootpre.sh Script on x86 with Oracle Solaris Cluster
1. Switch user to root:
$ su - root
2. Complete one of the following steps, depending on the location of the installation:
• If the installation files are on an installation media, then enter a command
similar to the following, where mountpoint is the disk mount point directory or
the path of the database directory on the installation media:
# mountpoint/grid/rootpre.sh
• If the installation files are on the hard disk, then change directory to the directory /Disk1 and enter the following command:
# ./rootpre.sh
# exit
Starting with Oracle Solaris 11, when you enable nscd, nscd performs all name service
lookups. Before this release, nscd cached a small subset of lookups. By default, nscd
is started during system startup in runlevel 3, which is a multiuser state with NFS
resources shared. To check to see if nscd is running, enter the following Service
Management Facility (SMF) command:
# svcs name-service-cache
STATE STIME FMRI
online Oct_15 svc:/network/nfs/status:default
online Oct_30 svc:/system/name-service-cache:default
If the nscd service is not online, then you can enable it using the following command:
# svcadm enable name-service-cache
Setting Network Time Protocol for Cluster Time Synchronization
Note:
Before starting the installation of Oracle Grid Infrastructure, Oracle
recommends that you ensure the clocks on all nodes are set to the same
time.
If you have NTP daemons on your server but you cannot configure them to
synchronize time with a time server, and you want to use Cluster Time
Synchronization Service to provide synchronization service in the cluster, then
deactivate and deinstall the NTP.
To disable the NTP service, run the following command as the root user:
# /usr/sbin/svcadm disable ntp
When the installer finds that the NTP protocol is not active, the Cluster Time
Synchronization Service is installed in active mode and synchronizes the time across
the nodes. If NTP is found configured, then the Cluster Time Synchronization Service
is started in observer mode, and no active time synchronization is performed by Oracle
Clusterware within the cluster.
To confirm that ctssd is active after installation, enter the following command as the
Grid installation owner:
$ crsctl check ctss
If you are using NTP, and you prefer to continue using it instead of Cluster Time
Synchronization Service, then you need to modify the NTP configuration. Restart the
network time protocol daemon after you complete this task.
You can modify the NTP configuration as in the following examples:
• On Oracle Solaris 10
Edit the /etc/inet/ntp.conf file to add the following parameters:
slewalways yes
disable pll
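As a sketch, the edit can be staged and verified on a scratch copy of the file; on a real node, edit /etc/inet/ntp.conf as root and restart the NTP daemon afterward. The server line is a placeholder for an existing configuration:

```shell
# Append the slew parameters to a scratch copy of ntp.conf and verify
# that they are present; the real target is /etc/inet/ntp.conf.
conf=$(mktemp)
printf 'server time.example.com\n' > "$conf"    # placeholder existing line
printf 'slewalways yes\ndisable pll\n' >> "$conf"
grep -c 'slewalways' "$conf"                    # prints 1
rm -f "$conf"
```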
To enable NTP after it has been disabled, enter the following command:
# /usr/sbin/svcadm enable ntp
Related Topics
• Oracle Solaris Cluster 3.3 Documentation
• Oracle Solaris Cluster 4 Documentation
Using Automatic SSH Configuration During Installation
Note:
Oracle configuration assistants use SSH for configuration operations from
local to remote nodes. Oracle Enterprise Manager also uses SSH. RSH is no
longer supported.
You can configure SSH from the OUI interface during installation for the user account
running the installation. The automatic configuration creates passwordless SSH
connectivity between all cluster member nodes. Oracle recommends that you use the
automatic procedure if possible.
To enable the script to run, you must remove stty commands from the profiles of any
existing Oracle software installation owners you want to use, and remove other
security measures that are triggered during a login and that generate messages to
the terminal. These messages, mail checks, and other displays prevent Oracle
software installation owners from using the SSH configuration script that is built
into OUI. If they are not disabled, then you must configure SSH manually before you
can run an installation.
In rare cases, Oracle Clusterware installation can fail during the "AttachHome"
operation when the remote node closes the SSH connection. To avoid this problem,
set the timeout wait to unlimited by setting the following parameter in the SSH daemon
configuration file /etc/ssh/sshd_config on all cluster nodes:
LoginGraceTime 0
5 Configuring Networks for Oracle Grid Infrastructure and Oracle RAC
Check that you have the networking hardware and internet protocol (IP) addresses
required for an Oracle Grid Infrastructure for a cluster installation.
• About Oracle Grid Infrastructure Network Configuration Options
Ensure that you have the networking hardware and internet protocol (IP)
addresses required for an Oracle Grid Infrastructure for a cluster installation.
• Understanding Network Addresses
Identify each interface as a public or private interface, or as an interface that you
do not want Oracle Grid Infrastructure or Oracle Flex ASM cluster to use.
• Network Interface Hardware Minimum Requirements
Review these requirements to ensure that you have the minimum network
hardware technology for Oracle Grid Infrastructure clusters.
• Private IP Interface Configuration Requirements
Requirements for private interfaces depend on whether you are using single or
multiple interfaces.
• IPv4 and IPv6 Protocol Requirements
Oracle Grid Infrastructure and Oracle RAC support the standard IPv6 address
notations specified by RFC 2732 and global and site-local IPv6 addresses as
defined by RFC 4193.
• Oracle Grid Infrastructure IP Name and Address Requirements
Review this information for Oracle Grid Infrastructure IP Name and Address
requirements.
• Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
Broadcast communications (ARP and UDP) must work properly across all the
public and private interfaces configured for use by Oracle Grid Infrastructure.
• Multicast Requirements for Networks Used by Oracle Grid Infrastructure
For each cluster member node, the Oracle mDNS daemon uses multicasting on all
interfaces to communicate with other nodes in the cluster.
• Domain Delegation to Grid Naming Service
If you are configuring Grid Naming Service (GNS) for a standard cluster, then
before installing Oracle Grid Infrastructure you must configure DNS to send to
GNS any name resolution requests for the subdomain served by GNS.
• Configuration Requirements for Oracle Flex Clusters
Understand Oracle Flex Clusters and their configuration requirements.
• Grid Naming Service Cluster Configuration Example
Review this example to understand Grid Naming Service configuration.
• Manual IP Address Configuration Example
If you choose not to use GNS, then before installation you must configure public,
virtual, and private IP addresses.
Understanding Network Addresses

If you configure addresses using a DNS, then you should ensure that the private IP
addresses are reachable only by the cluster nodes.
You can choose multiple interconnects either during installation or postinstallation
using the oifcfg setif command.
After installation, if you modify the interconnect for Oracle Real Application Clusters
(Oracle RAC) with the CLUSTER_INTERCONNECTS initialization parameter, then you must
change the interconnect to a private IP address, on a subnet that is not used with a
public IP address, nor marked as a public subnet by oifcfg. Oracle does not support
changing the interconnect to an interface using a subnet that you have designated as
a public subnet.
You should not use a firewall on the network with the private network IP addresses,
because this can block interconnect traffic.
Network Interface Hardware Minimum Requirements
that only one of these clusters runs SCAN listeners. The databases of all clusters
use the SCAN VIPs of this cluster for all their database connections. Each cluster
has its own set of ports, instead of SCAN VIPs. Clusters using a shared SCAN can
name their database services as desired, without naming conflicts even if one or
more of these clusters are configured with services of the same name. The node VIP
uses the host IP address.
Private IP Interface Configuration Requirements
protect against cluster outages and performance degradation due to common shared
Ethernet switch network events.
Storage Networks
Oracle Automatic Storage Management and Oracle Real Application Clusters require
network-attached storage.
Oracle Automatic Storage Management (Oracle ASM): The network interfaces used
for Oracle Clusterware files are also used for Oracle ASM.
Third-party storage: Oracle recommends that you configure additional interfaces for
storage.
IPv4 and IPv6 Protocol Requirements
Note:
During installation, you can define up to four interfaces for the private
network. The number of HAIP addresses created during installation is based
on both physical and logical interfaces configured for the network adapter.
After installation, you can define additional interfaces. If you define more than
four interfaces as private network interfaces, then be aware that Oracle
Clusterware activates only four of the interfaces at a time. However, if one of
the four active interfaces fails, then Oracle Clusterware transitions the HAIP
addresses configured to the failed interface to one of the reserve interfaces
in the defined set of private interfaces.
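The active/reserve behavior described in the note can be sketched as follows. This is a simplified illustration, not Oracle Clusterware's actual selection logic, and the interface names (net1 through net5) are hypothetical:

```python
def activate_interfaces(private_ifaces, max_active=4):
    """Split the defined private interfaces into the active set (at most
    four, mirroring the limit described above) and the reserves."""
    return private_ifaces[:max_active], private_ifaces[max_active:]

def fail_over(active, reserves, failed):
    """On failure of an active interface, drop it from the active set and
    promote the first reserve (if any) so HAIP addresses can move to it."""
    active = [iface for iface in active if iface != failed]
    if reserves:
        active.append(reserves[0])
        reserves = reserves[1:]
    return active, reserves

# Five defined private interfaces: four become active, one is a reserve.
ifaces = ["net1", "net2", "net3", "net4", "net5"]
active, reserves = activate_interfaces(ifaces)
# Simulate failure of one active interface: the reserve is promoted.
active, reserves = fail_over(active, reserves, failed="net2")
```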
Related Topics
• Oracle Clusterware Administration and Deployment Guide
Note:
Link-local and site-local IPv6 addresses as defined in RFC 1884 are not
supported.
Oracle Grid Infrastructure IP Name and Address Requirements
Note:
Select your cluster name carefully. After installation, you can only change the
cluster name by reinstalling Oracle Grid Infrastructure.
• All of the node names in the GNS domain must be unique; address ranges and
cluster names must be unique for both GNS server and GNS client clusters.
• You must have a GNS client data file that you generated on the GNS server
cluster, so that the GNS client cluster has the information needed to delegate its
name resolution to the GNS server cluster, and you must have copied that file to
the GNS client cluster member node on which you are running the Oracle Grid
Infrastructure installation.
Copy the GNS Client data file to a secure path on the GNS Client node where you run
the GNS Client cluster installation. The Oracle installation user must have permissions
to access that file. Oracle recommends that no other user is granted permissions to
access the GNS Client data file. During installation, you are prompted to provide a
path to that file.
Related Topics
• Oracle Clusterware Administration and Deployment Guide
Note:
The SCAN and cluster name are entered in separate fields during
installation, so cluster name requirements do not apply to the SCAN name.
Oracle strongly recommends that you do not configure SCAN VIP addresses
in the hosts file. Use DNS resolution for SCAN VIPs. If you use the hosts file
to resolve SCANs, then the SCAN can resolve to one IP address only.
Configuring SCANs in a DNS or a hosts file is the only supported
configuration. Configuring SCANs in a Network Information Service (NIS) is
not supported.
Name: mycluster-scan.example.com
Address: 192.0.2.201
Name: mycluster-scan.example.com
Address: 192.0.2.202
Name: mycluster-scan.example.com
Address: 192.0.2.203
After installation, when a client sends a request to the cluster, the Oracle Clusterware
SCAN listeners redirect client requests to servers in the cluster.
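The effect of DNS round-robin over the three SCAN addresses can be sketched in Python. The resolver below is a toy stand-in that only mimics the rotation behavior; it performs no real DNS lookup:

```python
from itertools import cycle

# The three addresses the SCAN name resolves to in the example above.
SCAN_ADDRESSES = ["192.0.2.201", "192.0.2.202", "192.0.2.203"]

def make_resolver(addresses):
    """Return a function that mimics DNS round-robin: each lookup returns
    the address list rotated by one position, so successive clients that
    pick the first address are spread across all three VIPs."""
    offset = cycle(range(len(addresses)))
    def resolve(name):
        i = next(offset)
        return addresses[i:] + addresses[:i]
    return resolve

resolve = make_resolver(SCAN_ADDRESSES)
# Six successive clients each take the first address they are given.
first_choices = [resolve("mycluster-scan.example.com")[0] for _ in range(6)]
```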
When configuring public and private network interfaces for Oracle RAC, you must
enable Address Resolution Protocol (ARP). Highly Available IP (HAIP) addresses do
not require ARP on the public network, but for VIP failover, you need to enable
ARP. Do not configure NOARP.

Domain Delegation to Grid Naming Service

Oracle recommends that the subdomain name is distinct from your corporate domain.
For example, if your corporate domain is mycorp.example.com, the subdomain for GNS
might be rac-gns.mycorp.example.com.

If the subdomain is not distinct, then it should be for the exclusive use of GNS.
For example, if you delegate the subdomain mydomain.example.com to GNS, then there
should be no other domains that share it, such as lab1.mydomain.example.com.
mycluster-gns-vip.example.com A 192.0.2.1
cluster01.example.com NS mycluster-gns-vip.example.com
3. When using GNS, you must configure resolv.conf on the nodes in the cluster
(or the file on your system that provides resolution information) to contain name
server entries that are resolvable to corporate DNS servers. The total timeout
period configured—a combination of options attempts (retries) and options timeout
(exponential backoff)—should be less than 30 seconds. For example, where
xxx.xxx.xxx.42 and xxx.xxx.xxx.15 are valid name server addresses in your
network, provide an entry similar to the following in /etc/resolv.conf:
nameserver xxx.xxx.xxx.42
nameserver xxx.xxx.xxx.15
options attempts:2
options timeout:1
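The 30-second budget can be sanity-checked with a rough worst-case model. The function below assumes a glibc-style resolver where every name server is tried on each attempt and the per-try timeout roughly doubles on each retry; this is a simplification, not the exact algorithm of any particular resolver:

```python
def resolver_worst_case_seconds(nameservers, attempts, timeout):
    """Rough worst-case wait before the resolver gives up: on each
    attempt every name server is queried in turn, and the per-query
    timeout doubles on each retry (exponential backoff)."""
    total = 0
    per_try = timeout
    for _ in range(attempts):
        total += nameservers * per_try
        per_try *= 2
    return total

# Two name servers with options attempts:2 and options timeout:1,
# as in the resolv.conf entry above.
worst_case = resolver_worst_case_seconds(nameservers=2, attempts=2, timeout=1)
```

With these settings the estimated worst case is well under the 30-second limit; raising attempts or timeout grows the total quickly because of the backoff.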
SCAN address resolution. Oracle recommends that you place the NIS entry at the
end of the search list. For example:
/etc/nsswitch.conf
hosts: files dns nis
Be aware that use of NIS is a frequent source of problems when doing cable pull tests,
as host name and user name resolution can fail.
Configuration Requirements for Oracle Flex Clusters
Hub Nodes in Oracle Flex Clusters are tightly connected, and have direct access to
shared storage. In an Oracle Flex Cluster configuration, Hub Nodes can also provide
storage service for one or more Leaf Nodes. Three Hub Nodes act as I/O Servers for
storage access requests from Leaf Nodes. If a Hub Node acting as an I/O Server
becomes unavailable, then Oracle Grid Infrastructure starts another I/O Server on
another Hub Node.
Leaf Nodes in Oracle Flex Clusters do not require direct access to shared storage, but
instead request data through Hub Nodes. Hub Nodes can run in an Oracle Flex
Cluster configuration without having any Leaf Nodes as cluster member nodes, but
Leaf Nodes must be members of a cluster with a pool of Hub Nodes.
Oracle RAC database instances running on Leaf Nodes are referred to as far Oracle
ASM client instances. Oracle ASM metadata is never sent to the far client database
instance. Instead, the far Oracle ASM client database sends the I/O requests to I/O
Server instances running on Hub Nodes over the Oracle Flex ASM network.
You configure servers for Hub Node and Leaf Node roles. You can designate servers
for manual or automatic configuration.
If you select manual configuration, then you must designate each node in your
cluster as a Hub Node or a Leaf Node. Each role requires different access to
storage. To be eligible for the Hub Node role, a server must have direct access to
storage. To be eligible for the Leaf Node role, a server may have direct access to
storage, but it does not require it, because Leaf Nodes access storage as clients
through Hub Nodes.
If you select automatic configuration of roles, then cluster nodes that have access
to storage and join the cluster are configured as Hub Nodes, up to the number that
you designate as your target. Nodes that do not have access to storage, or that
join the cluster after that target number is reached, join the cluster as Leaf
Nodes. Nodes are configured as needed: Hub Nodes configured with Local or Near ASM
provide storage client services, and Leaf Nodes that are configured with direct
access to Oracle ASM disks can be reconfigured as needed to become Hub Nodes.
Oracle recommends that you select automatic configuration of Hub and Leaf Node
roles.
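The automatic role assignment described above can be sketched as a simple model. This is an illustration of the rule (storage-attached nodes become Hub Nodes up to the target, everything else joins as a Leaf Node), not Oracle Clusterware's actual implementation, and the node names are hypothetical:

```python
def assign_roles(nodes, hub_target):
    """Assign the Hub role to nodes with storage access, in join order,
    until hub_target is reached; all remaining nodes become Leaf Nodes."""
    roles = {}
    hubs = 0
    for name, has_storage_access in nodes:
        if has_storage_access and hubs < hub_target:
            roles[name] = "Hub"
            hubs += 1
        else:
            roles[name] = "Leaf"
    return roles

# node3 lacks storage access; node4 joins after the Hub target is met.
cluster = [("node1", True), ("node2", True), ("node3", False), ("node4", True)]
roles = assign_roles(cluster, hub_target=2)
```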
support numerous database clients. Each Oracle Flex ASM cluster has its own name
that is globally unique within the enterprise.
You can consolidate all the storage requirements into a single set of disk groups. All
these disk groups are managed by a small set of Oracle ASM instances running in a
single Oracle Flex Cluster.
Every Oracle Flex ASM cluster has one or more Hub Nodes on which Oracle ASM
instances are running.
Oracle Flex ASM can use either the same private networks as Oracle Clusterware, or
use its own dedicated private networks. Each network can be classified PUBLIC, ASM &
PRIVATE, PRIVATE, or ASM.
About Oracle Flex ASM Cluster with Oracle IOServer (IOS) Configuration
An Oracle IOServer instance provides Oracle ASM file access for Oracle Database
instances on nodes of Oracle Member Clusters that do not have connectivity to Oracle
ASM managed disks. IOS enables you to configure Oracle Member Clusters on such
nodes. On the storage cluster, the IOServer instance on each node opens up network
ports to which clients send their I/O. The IOServer instance receives data packets from
the client and performs the appropriate I/O to Oracle ASM disks similar to any other
database client. On the client side, databases can use direct NFS (dNFS) to
communicate with an IOServer instance. However, no client side configuration is
required to use IOServer, so you are not required to provide a server IP address or
any additional configuration information. On nodes and clusters that are configured to
access Oracle ASM files through IOServer, the discovery of the Oracle IOS instance
occurs automatically.
To install an Oracle Member Cluster, the administrator of the Oracle Domain Services
Cluster creates an Oracle Member Cluster using a crsctl command that creates a
Member Cluster Manifest file. During Oracle Grid Infrastructure installation, if you
choose to install an Oracle Member Cluster, then the installer prompts you for the
Member Cluster Manifest file. An attribute in the Member Cluster Manifest file specifies
if the Oracle Member Cluster is expected to access Oracle ASM files through an
IOServer instance.
Related Topics
• Oracle Automatic Storage Management Administrator's Guide
Grid Naming Service Cluster Configuration Example
Syntax
You can create manual addresses using alphanumeric strings.
Example 5-1 Examples of Manually-Assigned Addresses
mycloud001nd; mycloud046nd; mycloud046-vip; mycloud348nd; mycloud784-vip
As nodes are added to the cluster, your organization's DHCP server can provide
addresses for these nodes dynamically. These addresses are then registered
automatically in GNS, and GNS provides resolution within the subdomain to cluster
node addresses registered with GNS.
Because allocation and configuration of addresses is performed automatically with
GNS, no further configuration is required. Oracle Clusterware provides dynamic
network configuration as nodes are added to or removed from the cluster. The
following example is provided only for information.
With IPv6 networks, the IPv6 auto configuration feature assigns IP addresses and no
DHCP server is required.
If you have defined the GNS VIP, then after installation you might have a
configuration similar to the following for a two-node cluster, where the cluster
name is mycluster, the GNS parent domain is gns.example.com, the subdomain is
cluster01.example.com, the 192.0.2 portion of the IP addresses represents the
cluster public IP address subnet, and 192.168 represents the private IP address
subnet:
Identity       | Home Node | Host Node                      | Given Name                                     | Type    | Address     | Address Assigned By        | Resolved By
GNS VIP        | None      | Selected by Oracle Clusterware | mycluster-gns-vip.example.com                  | virtual | 192.0.2.1   | Fixed by net administrator | DNS
Node 1 Public  | Node 1    | node1                          | node1                                          | public  | 192.0.2.101 | Fixed                      | GNS
Node 1 VIP     | Node 1    | Selected by Oracle Clusterware | node1-vip                                      | virtual | 192.0.2.104 | DHCP                       | GNS
Node 1 Private | Node 1    | node1                          | node1-priv                                     | private | 192.168.0.1 | Fixed or DHCP              | GNS
Node 2 Public  | Node 2    | node2                          | node2                                          | public  | 192.0.2.102 | Fixed                      | GNS
Node 2 VIP     | Node 2    | Selected by Oracle Clusterware | node2-vip                                      | virtual | 192.0.2.105 | DHCP                       | GNS
Node 2 Private | Node 2    | node2                          | node2-priv                                     | private | 192.168.0.2 | Fixed or DHCP              | GNS
SCAN VIP 1     | None      | Selected by Oracle Clusterware | mycluster-scan.mycluster.cluster01.example.com | virtual | 192.0.2.201 | DHCP                       | GNS
SCAN VIP 2     | None      | Selected by Oracle Clusterware | mycluster-scan.mycluster.cluster01.example.com | virtual | 192.0.2.202 | DHCP                       | GNS
SCAN VIP 3     | None      | Selected by Oracle Clusterware | mycluster-scan.mycluster.cluster01.example.com | virtual | 192.0.2.203 | DHCP                       | GNS

Manual IP Address Configuration Example
Identity       | Home Node | Host Node                      | Given Name     | Type    | Address     | Address Assigned By | Resolved By
Node 1 Public  | Node 1    | node1                          | node1          | public  | 192.0.2.101 | Fixed               | DNS
Node 1 VIP     | Node 1    | Selected by Oracle Clusterware | node1-vip      | virtual | 192.0.2.104 | Fixed               | DNS and hosts file
Node 1 Private | Node 1    | node1                          | node1-priv     | private | 192.168.0.1 | Fixed               | DNS and hosts file, or none
Node 2 Public  | Node 2    | node2                          | node2          | public  | 192.0.2.102 | Fixed               | DNS
Node 2 VIP     | Node 2    | Selected by Oracle Clusterware | node2-vip      | virtual | 192.0.2.105 | Fixed               | DNS and hosts file
Node 2 Private | Node 2    | node2                          | node2-priv     | private | 192.168.0.2 | Fixed               | DNS and hosts file, or none
SCAN VIP 1     | None      | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.201 | Fixed               | DNS
SCAN VIP 2     | None      | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.202 | Fixed               | DNS
SCAN VIP 3     | None      | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.203 | Fixed               | DNS
You do not need to provide a private name for the interconnect. If you want name
resolution for the interconnect, then you can configure private IP names in the hosts
file or the DNS. However, Oracle Clusterware assigns interconnect addresses on the
interface defined during installation as the private interface (eth1, for example), and to
the subnet used for the private subnet.
The addresses to which the SCAN resolves are assigned by Oracle Clusterware, so
they are not fixed to a particular node. To enable VIP failover, the configuration shown
in the preceding table defines the SCAN addresses and the public and VIP addresses
of both nodes on the same subnet, 192.0.2.
Note:
All host names must conform to the RFC-952 standard, which permits
alphanumeric characters, but does not allow underscores ("_").
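Candidate host names can be screened against these rules with a small validator like the following. This is a simplified check reflecting the note above (letters, digits, and hyphens only; no underscores; labels start with a letter and do not end with a hyphen); the full RFC-952/RFC-1123 rules also include length limits:

```python
import re

# One DNS label: starts with a letter, contains letters, digits, and
# hyphens, and does not end with a hyphen. Underscores are rejected.
_LABEL = re.compile(r"^[A-Za-z](?:[A-Za-z0-9-]*[A-Za-z0-9])?$")

def is_valid_hostname(name):
    """Check every dot-separated label of the name against _LABEL."""
    labels = name.split(".")
    return all(_LABEL.match(label) for label in labels)
```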
Network Interface Configuration Options
interface for NAS I/O. Failing to provide three separate interfaces in this case can
cause performance and stability problems under load.
Redundant interconnect usage cannot protect network adapters used for public
communication. If you require high availability or load balancing for public
adapters, then use a third-party solution. Typically, bonding, trunking, or similar
technologies can be used for this purpose.
You can enable redundant interconnect usage for the private network by selecting
multiple network adapters to use as private adapters. Redundant interconnect usage
creates a redundant interconnect when you identify more than one network adapter as
private.
6 Configuring Users, Groups and Environments for Oracle Grid Infrastructure and
Oracle Database
Before installation, create operating system groups and users, and configure user
environments.
• Creating Groups, Users and Paths for Oracle Grid Infrastructure
Log in as root, and use the following instructions to locate or create the Oracle
Inventory group, and create a software owner for Oracle Grid Infrastructure, and
directories for Oracle home.
• Oracle Installations with Standard and Job Role Separation Groups and Users
A job role separation configuration of Oracle Database and Oracle ASM is a
configuration with groups and users to provide separate groups for operating
system authentication.
• Creating Operating System Privileges Groups
The following sections describe how to create operating system groups for Oracle
Grid Infrastructure and Oracle Database:
• Creating Operating System Oracle Installation User Accounts
Before starting installation, create Oracle software owner user accounts, and
configure their environments.
• Configuring Grid Infrastructure Software Owner User Environments
Understand the software owner user environments to configure before installing
Oracle Grid Infrastructure.
• About Using Oracle Solaris Projects
For Oracle Grid Infrastructure 18c, if you want to use Oracle Solaris Projects to
manage system resources, you can specify different default projects for different
Oracle installation owners.
• Enabling Intelligent Platform Management Interface (IPMI)
Intelligent Platform Management Interface (IPMI) provides a set of common
interfaces to computer hardware and firmware that system administrators can use
to monitor system health and manage the system.
Creating Groups, Users and Paths for Oracle Grid Infrastructure
system administrator. If you have system administration privileges, then review the
topics in this section and configure operating system groups and users as needed.
• Determining If an Oracle Inventory and Oracle Inventory Group Exist
Determine if you have an existing Oracle central inventory, and ensure that you
use the same Oracle Inventory for all Oracle software installations. Also, ensure
that all Oracle software users you intend to use for installation have permissions to
write to this directory.
• Creating the Oracle Inventory Group If an Oracle Inventory Does Not Exist
Create an Oracle Inventory group manually as part of a planned installation,
particularly where more than one Oracle software product is installed on servers.
• About Oracle Installation Owner Accounts
Select or create an Oracle installation owner for your installation, depending on the
group and user management plan you want to use for your installations.
• Restrictions for Oracle Software Installation Owners
Review the following restrictions for users created to own Oracle software.
• Identifying an Oracle Software Owner User Account
You must create at least one software owner user account the first time you install
Oracle software on the system. Either use an existing Oracle software user
account, or create an Oracle software owner user account for your installation.
• About the Oracle Base Directory for the grid User
Review this information about creating the Oracle base directory on each cluster
node.
• About the Oracle Home Directory for Oracle Grid Infrastructure Software
Review this information about creating the Oracle home directory location on each
cluster node.
inventory_loc=central_inventory_location
inst_group=group
Use the more command to determine if you have an Oracle central inventory on your
system. For example:
# more /var/opt/oracle/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
Use the command grep groupname /etc/group to confirm that the group
specified as the Oracle Inventory group still exists on the system. For example:
Note:
Do not put the oraInventory directory under the Oracle base directory for a
new installation, because that can result in user permission errors for other
installations.
installation owner, but you can have different Oracle users to own different
installations.
Oracle software owners must have the Oracle Inventory group as their primary group,
so that each Oracle software installation owner can write to the central inventory
(oraInventory), and so that OCR and Oracle Clusterware resource permissions are set
correctly. The database software owner must also have the OSDBA group and (if you
create them) the OSOPER, OSBACKUPDBA, OSDGDBA, OSRACDBA, and
OSKMDBA groups as secondary groups.
$ ORACLE_HOME=/u01/app/18.0.0/grid;
export ORACLE_HOME
• If you try to administer an Oracle home or Grid home instance using sqlplus,
lsnrctl, or asmcmd commands while the environment variable $ORACLE_HOME is set
to a different Oracle home or Grid home path, then you encounter errors. For
example, when you start SRVCTL from a database home, $ORACLE_HOME should
be set to that database home, or SRVCTL fails. The exception is when you are
using SRVCTL in the Oracle Grid Infrastructure home. In that case, $ORACLE_HOME
is ignored, and the Oracle home environment variable does not affect SRVCTL
commands. In all other cases, you must change $ORACLE_HOME to the instance that
you want to administer.
• To create separate Oracle software owners and separate operating system
privileges groups for different Oracle software installations, note that each of these
users must have the Oracle central inventory group (oraInventory group) as their
primary group. Members of this group are granted the OINSTALL system
privileges to write to the Oracle central inventory (oraInventory) directory, and are
also granted permissions for various Oracle Clusterware resources, OCR keys,
directories in the Oracle Clusterware home to which DBAs need write access, and
other necessary privileges. Members of this group are also granted execute
permissions to start and stop Clusterware infrastructure resources and databases.
In Oracle documentation, this group is represented as oinstall in code examples.
• Each Oracle software owner must be a member of the same central inventory
oraInventory group, and they must have this group as their primary group, so that
all Oracle software installation owners share the same OINSTALL system
privileges. Oracle recommends that you do not have more than one central
inventory for Oracle installations. If an Oracle software owner has a different
central inventory group, then you may corrupt the central inventory.
You can use the id command to verify that the Oracle installation owners you intend
to use have the Oracle Inventory group as their primary group. For example:
$ id -a oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),
54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54327(asmdba),
54330(racdba)
$ id -a grid
uid=54331(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),
54327(asmdba),54328(asmoper),54329(asmadmin),54330(racdba)
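If you script this verification across many accounts, the primary group can be parsed from id output. The sample string below mirrors the oracle example above; the parsing is a sketch and assumes the common `gid=NNN(name)` output format:

```python
import re

def primary_group(id_output):
    """Extract the primary group name from `id -a` style output such as
    'uid=54321(oracle) gid=54321(oinstall) groups=...'. Returns None if
    no gid field with a parenthesized group name is found."""
    match = re.search(r"gid=\d+\(([^)]+)\)", id_output)
    return match.group(1) if match else None

# Sample output modeled on the oracle user example above.
sample = ("uid=54321(oracle) gid=54321(oinstall) "
          "groups=54321(oinstall),54322(dba),54327(asmdba)")
```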
For Oracle Restart installations, to successfully install Oracle Database, ensure that
the grid user is a member of the racdba group.
After you create operating system groups, create or modify Oracle user accounts in
accordance with your operating system authentication planning.
However, in the case of an Oracle Grid Infrastructure installation, you must create a
different path, so that the path for Oracle bases remains available for other Oracle
installations.
For OUI to recognize the Oracle base path, it must be in the form u[00-99][00-99]/
app, and it must be writable by any member of the oraInventory (oinstall) group. The
OFA path for the Oracle base is u[00-99][00-99]/app/user, where user is the name
of the software installation owner. For example:
/u01/app/grid
/u01/app/18.0.0/grid
During installation, ownership of the entire path to the Grid home is changed to
root (/u01, /u01/app, /u01/app/18.0.0, /u01/app/18.0.0/grid). If you do not create a
unique path to the Grid home, then after the Grid install, you can encounter permission
errors for other installations, including any existing installations under the same path.
To avoid placing the application directory in the mount point under root ownership, you
can create and select paths such as the following for the Grid home:
/u01/18.0.0/grid
Caution:
For Oracle Grid Infrastructure for a cluster installations, note the following
restrictions for the Oracle Grid Infrastructure binary home (Grid home
directory for Oracle Grid Infrastructure):
• It must not be placed under one of the Oracle base directories, including
the Oracle base directory of the Oracle Grid Infrastructure installation
owner.
• It must not be placed in the home directory of an installation owner.
These requirements are specific to Oracle Grid Infrastructure for a
cluster installations.
Oracle Grid Infrastructure for a standalone server (Oracle Restart) can be
installed under the Oracle base for the Oracle Database installation.
Oracle Installations with Standard and Job Role Separation Groups and Users
Note:
To configure users for installation that are on a network directory service
such as Network Information Services (NIS), refer to your directory service
documentation.
Related Topics
• Oracle Database Administrator’s Guide
• Oracle Automatic Storage Management Administrator's Guide
• The OSKMDBA group for encryption key management (typically, kmdba)
Create this group if you want a separate group of operating system users to have
a limited set of privileges for encryption key management such as Oracle Wallet
Manager management (the SYSKM privilege). To use this privilege, add the
Oracle Database installation owners as members of this group.
• The OSRACDBA group for Oracle Real Application Clusters Administration
(typically, racdba)
Create this group if you want a separate group of operating system users to have
a limited set of Oracle Real Application Clusters (RAC) administrative privileges
(the SYSRAC privilege). To use this privilege:
– Add the Oracle Database installation owners as members of this group.
– For Oracle Restart configurations, if you have a separate Oracle Grid
Infrastructure installation owner user (grid), then you must also add the grid
user as a member of the OSRACDBA group of the database to enable Oracle
Grid Infrastructure components to connect to the database.
Related Topics
• Oracle Database Administrator’s Guide
• Oracle Database Security Guide
Creating Operating System Privileges Groups
SYSDBA privileges for each database, then you should create a group whose
members are granted the OSASM/SYSASM administrative privileges, and create
a grid infrastructure user (grid) that does not own a database installation, so that
you separate Oracle Grid Infrastructure SYSASM administrative privileges from a
database administrative privileges group.
Members of the OSASM group can use SQL to connect to an Oracle ASM
instance as SYSASM using operating system authentication. The SYSASM
privileges permit mounting and dismounting disk groups, and other storage
administration tasks. SYSASM privileges provide no access privileges on an
RDBMS instance.
If you do not designate a separate group as the OSASM group, but you do define
an OSDBA group for database administration, then by default the OSDBA group
you define is also defined as the OSASM group.
• The OSOPER group for Oracle ASM (typically, asmoper)
This is an optional group. Create this group if you want a separate group of
operating system users to have a limited set of Oracle instance administrative
privileges (the SYSOPER for ASM privilege), including starting up and stopping
the Oracle ASM instance. By default, members of the OSASM group also have all
privileges granted by the SYSOPER for ASM privilege.
The Oracle ASM instance is managed by a privileged role called SYSASM, which
grants full access to Oracle ASM disk groups. Members of OSASM are granted the
SYSASM role.
Creating Operating System Oracle Installation User Accounts
for ASM, ASMADMIN, or other system privileges group, then modify the group
settings for that user before installation.
• Identifying Existing User and Group IDs
To create identical users and groups, you must identify the user ID and group IDs
assigned to them on the node where you created them, and then create the users and
groups with the same names and IDs on the other cluster nodes.
• Creating Identical Database Users and Groups on Other Cluster Nodes
Oracle software owner users and the Oracle Inventory, OSDBA, and OSOPER
groups must exist and be identical on all cluster nodes.
• Example of Creating Role-allocated Groups, Users, and Paths
Understand this example of how to create role-allocated groups and users that are
compliant with an Optimal Flexible Architecture (OFA) deployment.
• Example of Creating Minimal Groups, Users, and Paths
You can create a minimal operating system authentication configuration as
described in this example.
You must note the user ID number for installation users, because you need it during
preinstallation.
For Oracle Grid Infrastructure installations, user IDs and group IDs must be identical
on all candidate nodes.
Warning:
Each Oracle software owner must be a member of the same central
inventory group. Do not modify the primary group of an existing Oracle
software owner account, or designate different groups as the OINSTALL
group. If Oracle software owner accounts have different groups as their
primary group, then you can corrupt the central inventory.
During installation, the user that is installing the software should have the OINSTALL
group as its primary group, and it must be a member of the operating system groups
appropriate for your installation. For example:
# /usr/sbin/usermod -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba[,oper] oracle
# id -a oracle
2. From the output, identify the user ID (uid) for the user and the group identities
(gids) for the groups to which it belongs.
Ensure that these ID numbers are identical on each node of the cluster. The user's
primary group is listed after gid. Secondary groups are listed after groups.
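As a sketch, the uid and gid values can be pulled out of the id output with standard text tools. The sample output string below is illustrative, not from a real system:

```shell
# Hypothetical output of `id -a oracle` (all numeric values illustrative):
sample='uid=54321(oracle) gid=54421(oinstall) groups=54421(oinstall),54322(dba),54327(asmdba)'

# Extract the numeric user ID and primary group ID:
uid=$(echo "$sample" | sed 's/^uid=\([0-9]*\).*/\1/')
gid=$(echo "$sample" | sed 's/.* gid=\([0-9]*\).*/\1/')
echo "uid=$uid gid=$gid"
```

Compare these values across all cluster nodes before proceeding.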
Note:
You are not required to use the UIDs and GIDs in this example. If a
group already exists, then use the groupmod command to modify it if
necessary. If you cannot use the same group ID for a particular group on
a node, then view the /etc/group file on all nodes to identify a group ID
that is available on every node. You must then change the group ID on
all nodes to the same group ID.
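One way to act on this note is to list the GIDs already in use; the sketch below runs against a sample /etc/group-style file (file contents and GIDs are illustrative; on a real cluster, run the awk step against /etc/group on every node and compare):

```shell
# Build a sample group file (illustrative; use /etc/group on each real node):
cat > /tmp/group.sample <<'EOF'
oinstall:x:54421:
dba:x:54322:oracle
asmdba:x:54327:oracle,grid
EOF

# List the GIDs already in use, sorted numerically, to spot a free common GID:
awk -F: '{print $3}' /tmp/group.sample | sort -n
```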
3. To create the Oracle Grid Infrastructure (grid) user, enter a command similar to
the following:
• The -u option specifies the user ID, which must be the user ID that you
identified earlier.
• The -g option specifies the primary group for the Grid user, which must be the
Oracle Inventory group (OINSTALL), which grants the OINSTALL system
privileges. In this example, the OINSTALL group is oinstall.
• The -G option specifies the secondary groups. The Grid user must be a
member of the OSASM group (asmadmin) and the OSDBA for ASM group
(asmdba).
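Putting the three options together, the command might look like the sketch below. The UID value and group names are assumptions for illustration; substitute the user ID you identified earlier and run the printed command as root on each node:

```shell
# Compose a hypothetical useradd invocation from the options described above:
uid=54331   # must match the user ID identified earlier (illustrative value)
cmd="/usr/sbin/useradd -u ${uid} -g oinstall -G asmadmin,asmdba grid"
echo "$cmd"
```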
Note:
If the user already exists, then use the usermod command to modify it if
necessary. If you cannot use the same user ID for the user on every
node, then view the /etc/passwd file on all nodes to identify a user ID
that is available on every node. You must then specify that ID for the
user on all of the nodes.
# passwd grid
After running these commands, you have a set of administrative privileges groups and
users for Oracle Grid Infrastructure, and for two separate Oracle databases (DB1 and
DB2):
• An Oracle Database software owner (oracle1), which owns the Oracle Database
binaries for DB1. The oracle1 user has the oraInventory group as its primary
group, and the OSDBA group for its database (dba1) and the OSDBA for ASM
group for Oracle Grid Infrastructure (asmdba) as secondary groups. In addition, the
oracle1 user is a member of asmoper, granting that user privileges to start up and
shut down Oracle ASM.
• An OSDBA group (dba1). During installation, you identify the group dba1 as the
OSDBA group for the database installed by the user oracle1. Members of dba1
are granted the SYSDBA privileges for the Oracle Database DB1. Users who
connect as SYSDBA are identified as user SYS on DB1.
• An OSBACKUPDBA group (backupdba1). During installation, you identify the
group backupdba1 as the OSBACKUPDBA group for the database installed by the
user oracle1. Members of backupdba1 are granted the SYSBACKUP privileges for
the database installed by the user oracle1 to back up the database.
• An OSDGDBA group (dgdba1). During installation, you identify the group dgdba1
as the OSDGDBA group for the database installed by the user oracle1. Members
of dgdba1 are granted the SYSDG privileges to administer Oracle Data Guard for
the database installed by the user oracle1.
• An OSKMDBA group (kmdba1). During installation, you identify the group kmdba1
as the OSKMDBA group for the database installed by the user oracle1. Members
of kmdba1 are granted the SYSKM privileges to administer encryption keys for the
database installed by the user oracle1.
• An OSOPER group (oper1). During installation, you identify the group oper1 as
the OSOPER group for the database installed by the user oracle1. Members of
oper1 are granted the SYSOPER privileges (a limited set of the SYSDBA
privileges), including the right to start up and shut down the DB1 database. Users
who connect with SYSOPER privileges are identified as user PUBLIC on DB1.
• An Oracle base /u01/app/oracle1 owned by oracle1:oinstall with 775
permissions. The user oracle1 has permissions to install software in this directory,
but in no other directory in the /u01/app path.
Example 6-3 Oracle Database DB2 Groups and Users Example
The command creates the following Oracle Database (DB2) groups and users:
• An Oracle Database software owner (oracle2), which owns the Oracle Database
binaries for DB2. The oracle2 user has the oraInventory group as its primary
group, and the OSDBA group for its database (dba2) and the OSDBA for ASM
group for Oracle Grid Infrastructure (asmdba) as secondary groups. However, the
oracle2 user is not a member of the asmoper group, so oracle2 cannot shut down
or start up Oracle ASM.
• An OSDBA group (dba2). During installation, you identify the group dba2 as the
OSDBA group for the database installed by the user oracle2. Members of dba2
are granted the SYSDBA privileges for the Oracle Database DB2. Users who
connect as SYSDBA are identified as user SYS on DB2.
• An OSBACKUPDBA group (backupdba2). During installation, you identify the
group backupdba2 as the OSBACKUPDBA group for the database installed by the
user oracle2. Members of backupdba2 are granted the SYSBACKUP privileges for
the database installed by the user oracle2 to back up the database.
• An OSDGDBA group (dgdba2). During installation, you identify the group dgdba2
as the OSDGDBA group for the database installed by the user oracle2. Members
of dgdba2 are granted the SYSDG privileges to administer Oracle Data Guard for
the database installed by the user oracle2.
• An OSKMDBA group (kmdba2). During installation, you identify the group kmdba2
as the OSKMDBA group for the database installed by the user oracle2. Members
of kmdba2 are granted the SYSKM privileges to administer encryption keys for the
database installed by the user oracle2.
• An OSOPER group (oper2). During installation, you identify the group oper2 as
the OSOPER group for the database installed by the user oracle2. Members of
oper2 are granted the SYSOPER privileges (a limited set of the SYSDBA
privileges), including the right to start up and shut down the DB2 database. Users
who connect with SYSOPER privileges are identified as user PUBLIC on DB2.
After running these commands, you have the following groups and users:
• An Oracle central inventory group, or oraInventory group (oinstall). Members
who have the central inventory group as their primary group are granted the
OINSTALL permission to write to the oraInventory directory.
• One system privileges group, dba, for Oracle Grid Infrastructure, Oracle ASM and
Oracle Database system privileges. Members who have the dba group as their
primary or secondary group are granted operating system authentication for
OSASM/SYSASM, OSDBA/SYSDBA, OSOPER/SYSOPER, OSBACKUPDBA/
SYSBACKUP, OSDGDBA/SYSDG, OSKMDBA/SYSKM, OSDBA for ASM/
SYSDBA for ASM, and OSOPER for ASM/SYSOPER for Oracle ASM to
administer Oracle Clusterware, Oracle ASM, and Oracle Database, and are
granted SYSASM and OSOPER for Oracle ASM access to the Oracle ASM
storage.
• An Oracle Grid Infrastructure for a cluster owner, or Grid user (grid), with the
oraInventory group (oinstall) as its primary group, and with the OSASM group
(dba) as the secondary group, with its Oracle base directory /u01/app/grid.
Configuring Grid Infrastructure Software Owner User Environments
• An Oracle Database owner (oracle) with the oraInventory group (oinstall) as its
primary group, and the OSDBA group (dba) as its secondary group, with its Oracle
base directory /u01/app/oracle.
• /u01/app owned by grid:oinstall with 775 permissions before installation, and
by root after the root.sh script is run during installation. This ownership and
these permissions enable OUI to create the Oracle Inventory directory, in the
path /u01/app/oraInventory.
• /u01 owned by grid:oinstall before installation, and by root after the root.sh
script is run during installation.
• /u01/app/18.0.0/grid owned by grid:oinstall with 775 permissions. These
permissions are required for installation, and are changed during the installation
process.
• /u01/app/grid owned by grid:oinstall with 775 permissions. These
permissions are required for installation, and are changed during the installation
process.
• /u01/app/oracle owned by oracle:oinstall with 775 permissions.
Note:
You can use one installation owner for both Oracle Grid Infrastructure and
any other Oracle installations. However, Oracle recommends that you use
separate installation owner accounts for each Oracle software installation.
Caution:
If you have existing Oracle installations that you installed with the user ID
that is your Oracle Grid Infrastructure software owner, then unset all Oracle
environment variable settings for that user.
$ xhost + hostname
3. If you are not logged in as the software owner user, then switch to the software
owner user you are configuring. For example, with the user grid:
$ su - grid
$ sudo -u grid -s
4. To determine the default shell for the user, enter the following command:
$ echo $SHELL
$ vi .bash_profile
$ vi .profile
% vi .login
6. Enter or edit the following line, specifying a value of 022 for the default file mode
creation mask:
umask 022
$ . ./.bash_profile
$ . ./.profile
• C shell:
% source ./.login
10. Use the following command to check the PATH environment variable:
$ echo $PATH
$ export DISPLAY=local_host:0.0
• C shell:
% setenv DISPLAY local_host:0.0
In this example, local_host is the host name or IP address of the system (your
workstation, or another client) on which you want to display the installer.
12. If the /tmp directory has less than 1 GB of free space, then identify a file system
with at least 1 GB of free space and set the TMP and TMPDIR environment variables
to specify a temporary directory on this file system:
Note:
You cannot use a shared file system as the location of the temporary file
directory (typically /tmp) for Oracle RAC installations. If you place /tmp
on a shared file system, then the installation fails.
a. Use the df -h command to identify a suitable file system with sufficient free
space.
b. If necessary, enter commands similar to the following to create a temporary
directory on the file system that you identified, and set the appropriate
permissions on the directory:
$ sudo -s
# mkdir /mount_point/tmp
# chmod 775 /mount_point/tmp
# exit
c. Enter commands similar to the following to set the TMP and TMPDIR
environment variables:
Bourne, Bash, or Korn shell:
$ TMP=/mount_point/tmp
$ TMPDIR=/mount_point/tmp
$ export TMP TMPDIR
C shell:
% setenv TMP /mount_point/tmp
% setenv TMPDIR /mount_point/tmp
13. To verify that the environment has been set correctly, enter the following
commands:
$ umask
$ env | more
Verify that the umask command displays a value of 22, 022, or 0022 and that the
environment variables you set in this section have the correct values.
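The umask check can be scripted; this is a minimal sketch that accepts the shell-dependent ways the value may be displayed:

```shell
# Verify that the installation user's umask is 022 (accepting 22, 022, or 0022):
u=$(umask)
case "$u" in
  22|022|0022) echo "umask OK ($u)" ;;
  *)           echo "umask is $u; set 'umask 022' in the profile" ;;
esac
```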
$ ulimit -Sn
1024
$ ulimit -Hn
65536
3. Check the soft and hard limits for the number of processes available to a user.
Ensure that the result is in the recommended range. For example:
$ ulimit -Su
2047
$ ulimit -Hu
16384
4. Check the soft limit for the stack setting. Ensure that the result is in the
recommended range. For example:
$ ulimit -Ss
10240
$ ulimit -Hs
32768
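The limit checks above can be collected into one pass, as in this sketch (the values printed depend on your system; compare them against the recommended ranges shown in the steps):

```shell
# Print the soft and hard limits relevant to installation in one pass:
echo "open files : soft=$(ulimit -Sn) hard=$(ulimit -Hn)"
echo "processes  : soft=$(ulimit -Su) hard=$(ulimit -Hu)"
echo "stack (KB) : soft=$(ulimit -Ss) hard=$(ulimit -Hs)"
```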
Note:
If you make changes to an Oracle installation user account and that user
account is logged in, then changes to the limits.conf file do not take effect
until you log these users out and log them back in. You must do this before
you use these accounts for installation.
Remote Display
Bourne, Korn, and Bash shells
$ export DISPLAY=hostname:0
C shell
% setenv DISPLAY hostname:0
For example, if you are using the Bash shell and if your host name is local_host, then
enter the following command:
$ export DISPLAY=node1:0
X11 Forwarding
To ensure that X11 forwarding does not cause the installation to fail, use the following
procedure to create a user-level SSH client configuration file for Oracle installation
owner user accounts:
1. Using any text editor, edit or create the software installation owner's ~/.ssh/
config file.
2. Ensure that the ForwardX11 attribute in the ~/.ssh/config file is set to no. For
example:
Host *
ForwardX11 no
3. Ensure that the permissions on ~/.ssh are secured to the Oracle installation
owner user account. For example:
$ ls -al .ssh
total 28
drwx------ 2 grid oinstall 4096 Jun 21 2015
drwx------ 19 grid oinstall 4096 Jun 21 2015
-rw-r--r-- 1 grid oinstall 1202 Jun 21 2015 authorized_keys
-rwx------ 1 grid oinstall 668 Jun 21 2015 id_dsa
-rwx------ 1 grid oinstall 601 Jun 21 2015 id_dsa.pub
-rwx------ 1 grid oinstall 1610 Jun 21 2015 known_hosts
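The steps above can be sketched as a short script, run as the installation owner (the printf layout of the config entry is one reasonable form, not the only one):

```shell
# Create or update the installation owner's SSH client config to disable
# X11 forwarding, and lock down the permissions on ~/.ssh:
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
printf 'Host *\n    ForwardX11 no\n' >> "$HOME/.ssh/config"
chmod 600 "$HOME/.ssh/config"
grep ForwardX11 "$HOME/.ssh/config"
```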
Bourne, Bash, or Korn shell:
if [ -t 0 ]; then
stty intr ^C
fi
C shell:
test -t 0
if ($status == 0) then
stty intr ^C
endif
Note:
If the remote shell can load hidden files that contain stty commands, then
OUI indicates an error and stops the installation.
About Using Oracle Solaris Projects
Enabling Intelligent Platform Management Interface (IPMI)
• Some server platforms put their network interfaces into a power saving mode
when they are powered off. In this case, they may operate only at a lower link
speed (for example, 100 Mbps instead of 1 Gbps). For these platforms, the network
switch port to which the BMC is connected must be able to auto-negotiate down to
the lower speed, or IPMI does not function properly.
• Install and configure IPMI firmware patches.
Note:
IPMI operates on the physical hardware platform through the network
interface of the baseboard management controller (BMC). Depending on
your system configuration, an IPMI-initiated restart of a server can affect all
virtual environments hosted on the server. Contact your hardware and OS
vendor for more information.
Note:
If you configure IPMI, and you use Grid Naming Service (GNS), then you must still
configure separate addresses for the IPMI interfaces. Because the IPMI adapter is
not seen directly by the host, the IPMI adapter is not visible to GNS as an
address on the host.
Refer to the documentation for the configuration option you select for details about
configuring the BMC.
Note:
Problems in the initial revisions of Oracle Solaris software and firmware
prevented IPMI support from working properly. Ensure you have the latest
firmware for your platform and the following Oracle Solaris patches (or later
versions), available from the following URL:
http://www.oracle.com/technetwork/systems/patches/firmware/
index.html
7
Supported Storage Options for Oracle Database and Oracle Grid Infrastructure
Review supported storage options as part of your installation planning process.
• Supported Storage Options for Oracle Grid Infrastructure
The following table shows the storage options supported for Oracle Grid
Infrastructure binaries and files:
• Oracle ACFS and Oracle ADVM
Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
extends Oracle ASM technology to support all of your application data in both
single instance and cluster configurations.
• Storage Considerations for Oracle Grid Infrastructure and Oracle RAC
For all installations, you must choose the storage option to use for Oracle Grid
Infrastructure (Oracle Clusterware and Oracle ASM), and Oracle Real Application
Clusters (Oracle RAC) databases.
• Guidelines for Using Oracle ASM Disk Groups for Storage
Plan how you want to configure Oracle ASM disk groups for deployment.
• Guidelines for Using a Network File System with Oracle ASM
During Oracle Grid Infrastructure installation, you have the option of configuring
Oracle ASM on a Network File System.
• Using Logical Volume Managers with Oracle Grid Infrastructure and Oracle RAC
Oracle Grid Infrastructure and Oracle RAC only support cluster-aware volume
managers.
• About NFS Storage for Data Files
Review this section for NFS storage configuration guidelines.
• About Direct NFS Client Mounts to NFS Storage Devices
Direct NFS Client integrates the NFS client functionality directly in the Oracle
software to optimize the I/O path between Oracle and the NFS server. This
integration can provide significant performance improvements.
Supported Storage Options for Oracle Grid Infrastructure
Oracle ACFS and Oracle ADVM
Table 7-1 (Cont.) Supported Storage Options for Oracle Grid Infrastructure
Table 7-2 Platforms That Support Oracle ACFS and Oracle ADVM
Storage Considerations for Oracle Grid Infrastructure and Oracle RAC
• Starting with Oracle Grid Infrastructure 12c Release 1 (12.1) for a cluster, creating
Oracle data files on an Oracle ACFS file system is supported.
• You can store Oracle Database binaries, data files, and administrative files (for
example, trace files) on Oracle ACFS.
Guidelines for Using Oracle ASM Disk Groups for Storage
You can use NFS, with or without Direct NFS, to store Oracle Database data files. You
cannot use NFS as storage for Oracle Clusterware files.
Note:
If you install Oracle Database or Oracle RAC after you install Oracle Grid
Infrastructure, then you can either use the same disk group for database files, OCR,
and voting files, or you can use different disk groups. If you create multiple disk groups
before installing Oracle RAC or before creating a database, then you can do one of the
following:
• Place the data files in the same disk group as the Oracle Clusterware files.
• Use the same Oracle ASM disk group for data files and recovery files.
• Use different disk groups for each file type.
If you create only one disk group for storage, then the OCR and voting files, database
files, and recovery files are contained in the one disk group. If you create multiple disk
groups for storage, then you can place files in different disk groups.
With Oracle Database 11g Release 2 (11.2) and later releases, Oracle Database
Configuration Assistant (DBCA) does not have the functionality to create disk groups
for Oracle ASM.
Guidelines for Using a Network File System with Oracle ASM
See Also:
Oracle Automatic Storage Management Administrator's Guide for information
about creating disk groups
Note:
All storage products must be supported by both your server and storage
vendors.
About NFS Storage for Data Files
Note:
The performance of Oracle software and databases stored on NAS devices
depends on the performance of the network connection between the servers
and the network-attached storage devices. For better performance, Oracle
recommends that you connect servers to NAS devices using private
dedicated network connections. NFS network connections should use
Gigabit Ethernet or better.
About Direct NFS Client Mounts to NFS Storage Devices
Note:
You can have only one active Direct NFS Client implementation for each
instance. Using Direct NFS Client on an instance prevents another Direct
NFS Client implementation.
8
Configuring Storage for Oracle Grid Infrastructure
Complete these procedures to configure Oracle Automatic Storage Management
(Oracle ASM) for Oracle Grid Infrastructure for a cluster.
Oracle Grid Infrastructure for a cluster provides system support for Oracle Database.
Oracle ASM is a volume manager and a file system for Oracle database files that
supports single-instance Oracle Database and Oracle Real Application Clusters
(Oracle RAC) configurations. Oracle Automatic Storage Management also supports a
general purpose file system for your application needs, including Oracle Database
binaries. Oracle Automatic Storage Management is Oracle's recommended storage
management solution. It provides an alternative to conventional volume managers and
file systems.
Note:
Oracle ASM is the supported storage management solution for Oracle
Cluster Registry (OCR) and Oracle Clusterware voting files. The OCR is a
file that contains the configuration information and status of the cluster. The
installer automatically initializes the OCR during the Oracle Clusterware
installation. Database Configuration Assistant uses the OCR for storing the
configurations for the cluster databases that it creates.
Configuring Storage for Oracle Automatic Storage Management
1. Plan your Oracle ASM disk group requirements, based on the cluster configuration
you want to deploy. Oracle Domain Services Clusters store Oracle Clusterware
files and the Grid Infrastructure Management Repository (GIMR) on separate
Oracle ASM disk groups and hence require configuration of two separate Oracle
ASM disk groups, one for OCR and voting files and the other for the GIMR.
2. Determine whether you want to use Oracle ASM for Oracle Database files,
recovery files, and Oracle Database binaries. Oracle Database files include data
files, control files, redo log files, the server parameter file, and the password file.
Note:
• You do not have to use the same storage mechanism for Oracle
Database files and recovery files. You can use a shared file system
for one file type and Oracle ASM for the other.
• There are two types of Oracle Clusterware files: OCR files and
voting files. You must use Oracle ASM to store OCR and voting files.
• If your database files are stored on a shared file system, then you
can continue to use the same for database files, instead of moving
them to Oracle ASM storage.
3. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
Except when using external redundancy, Oracle ASM mirrors all Oracle
Clusterware files in separate failure groups within a disk group. A quorum failure
group, a special type of failure group, contains mirror copies of voting files when
voting files are stored in normal or high redundancy disk groups. The disk groups
that contain Oracle Clusterware files (OCR and voting files) have a higher
minimum number of failure groups than other disk groups because the voting files
are stored in quorum failure groups in the Oracle ASM disk group.
A quorum failure group is a special type of failure group that is used to store the
Oracle Clusterware voting files. The quorum failure group is used to ensure that a
quorum of the specified failure groups are available. When Oracle ASM mounts a
disk group that contains Oracle Clusterware files, the quorum failure group is used
to determine if the disk group can be mounted in the event of the loss of one or
more failure groups. Disks in the quorum failure group do not contain user data,
therefore a quorum failure group is not considered when determining redundancy
requirements with respect to storing user data.
The redundancy levels are as follows:
• High redundancy
In a high redundancy disk group, Oracle ASM uses three-way mirroring to
increase performance and provide the highest level of reliability. A high
redundancy disk group requires a minimum of three disk devices (or three
failure groups). The effective disk space in a high redundancy disk group is
one-third the sum of the disk space in all of its devices.
For Oracle Clusterware files, a high redundancy disk group requires a
minimum of five disk devices and provides five voting files and one OCR (one
primary and two secondary copies). For example, your deployment may
consist of three regular failure groups and two quorum failure groups. Note
that not all failure groups can be quorum failure groups, even though voting
files need all five disks. With high redundancy, the cluster can survive the loss
of two failure groups.
While high redundancy disk groups do provide a high level of data protection,
you should consider the greater cost of additional storage devices before
deciding to select high redundancy disk groups.
• Normal redundancy
In a normal redundancy disk group, to increase performance and reliability,
Oracle ASM by default uses two-way mirroring. A normal redundancy disk
group requires a minimum of two disk devices (or two failure groups). The
effective disk space in a normal redundancy disk group is half the sum of the
disk space in all of its devices.
For Oracle Clusterware files, a normal redundancy disk group requires a
minimum of three disk devices and provides three voting files and one OCR
(one primary and one secondary copy). For example, your deployment may
consist of two regular failure groups and one quorum failure group. With
normal redundancy, the cluster can survive the loss of one failure group.
If you are not using a storage array providing independent protection against
data loss for storage, then Oracle recommends that you select normal
redundancy.
• External redundancy
An external redundancy disk group requires a minimum of one disk device.
The effective disk space in an external redundancy disk group is the sum of
the disk space in all of its devices.
Because Oracle ASM does not mirror data in an external redundancy disk
group, Oracle recommends that you use external redundancy with storage
devices such as RAID, or other similar devices that provide their own data
protection mechanisms.
• Flex redundancy
A flex redundancy disk group is a type of redundancy disk group with features
such as flexible file redundancy, mirror splitting, and redundancy change. A
flex disk group can consolidate files with different redundancy requirements
into a single disk group. It also provides the capability for databases to change
the redundancy of its files. A disk group is a collection of file groups, each
associated with one database. A quota group defines the maximum storage
space or quota limit of a group of databases within a disk group.
In a flex redundancy disk group, Oracle ASM uses three-way mirroring of
Oracle ASM metadata to increase performance and provide reliability. For
database data, you can choose no mirroring (unprotected), two-way mirroring
(mirrored), or three-way mirroring (high). A flex redundancy disk group
requires a minimum of three disk devices (or three failure groups).
• Extended redundancy
An extended redundancy disk group has features similar to those of a flex
redundancy disk group. Extended redundancy is available when you configure an Oracle
Extended Cluster. Extended redundancy extends Oracle ASM data protection
to cover failure of sites by placing enough copies of data in different failure
groups of each site. A site is a collection of failure groups. For extended
redundancy with three sites, for example, two data sites, and one quorum
failure group, the minimum number of disks is seven (three disks each for two
data sites and one quorum failure group outside the two data sites). The
See Also:
Oracle Automatic Storage Management Administrator's Guide for more
information about file groups and quota groups for flex disk groups
Note:
You can alter the redundancy level of the disk group after a disk group is
created. For example, you can convert a normal or high redundancy disk
group to a flex redundancy disk group. Within a flex redundancy disk
group, file redundancy can change among three possible values:
unprotected, mirrored, or high.
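The mirroring factors described for the redundancy levels above translate into a quick capacity estimate. This sketch assumes a disk group of six equally sized 100 GB disks (both values illustrative):

```shell
# Effective usable space for one disk group under each redundancy level:
disks=6
size_gb=100
total=$((disks * size_gb))
echo "external: ${total} GB"       # no ASM mirroring
echo "normal: $((total / 2)) GB"   # two-way mirroring
echo "high: $((total / 3)) GB"     # three-way mirroring
```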
4. Determine the total amount of disk space that you require for Oracle Clusterware
files, and for the database files and recovery files.
If an Oracle ASM instance is running on the system, then you can use an existing
disk group to meet these storage requirements. If necessary, you can add disks to
an existing disk group during the database installation.
See Oracle Clusterware Storage Space Requirements to determine the minimum
number of disks and the minimum disk space requirements for installing Oracle
Clusterware files, and installing the starter database, where you have voting files
in a separate disk group.
5. Determine an allocation unit size.
Every Oracle ASM disk is divided into allocation units (AU). An allocation unit is
the fundamental unit of allocation within a disk group. You can select the AU Size
value from 1, 2, 4, 8, 16, 32, or 64 MB, depending on the specific disk group
compatibility level. For flex disk groups, the default value for AU size is set to 4
MB. For external, normal, and high redundancies, the default AU size is 1 MB.
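As an illustrative sketch only (the disk group name, redundancy level, and Solaris device path below are hypothetical), a non-default AU size is set as a disk group attribute at creation time, for example from SQL*Plus connected as SYSASM:

```
SQL> CREATE DISKGROUP data1 EXTERNAL REDUNDANCY
       DISK '/dev/rdsk/c3t1d0s4'
       ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '18.0';
```

The AU size cannot be changed after the disk group is created, so choose it before you create the group.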
6. For Oracle Clusterware installations, you must also add additional disk space for
the Oracle ASM metadata. You can use the following formula to calculate the disk
space requirements (in MB) for OCR and voting files, and the Oracle ASM
metadata:
Note:
You can define custom failure groups during installation of Oracle Grid
Infrastructure. You can also define failure groups after installation using
the GUI tool ASMCA, the command line tool asmcmd, or SQL
commands. If you define custom failure groups, then you must specify a
minimum of two failure groups for normal redundancy disk groups and
three failure groups for high redundancy disk groups.
8. If you are sure that a suitable disk group does not exist on the system, then install
or identify appropriate disk devices to add to a new disk group. Use the following
guidelines when identifying appropriate disk devices:
• The disk devices must be owned by the user performing Oracle Grid
Infrastructure installation.
• All the devices in an Oracle ASM disk group must be the same size and have
the same performance characteristics.
• Do not specify multiple partitions on a single physical disk as a disk group
device. Oracle ASM expects each disk group device to be on a separate
physical disk.
• Although you can specify a logical volume as a device in an Oracle ASM disk
group, Oracle does not recommend its use because it adds a layer of
complexity that is unnecessary with Oracle ASM. Oracle recommends that if
you choose to use a logical volume manager, then use the logical volume
manager to represent a single logical unit number (LUN) without striping or
mirroring, so that you can minimize the effect on storage performance of the
additional storage layer.
9. If you use Oracle ASM disk groups created on Network File System (NFS) for
storage, then ensure that you follow recommendations described in Guidelines for
Configuring Oracle ASM Disk Groups on NFS.
Related Topics
• Storage Checklist for Oracle Grid Infrastructure
Review the checklist for storage hardware and configuration requirements for
Oracle Grid Infrastructure installation.
• Oracle Clusterware Storage Space Requirements
Use this information to determine the minimum number of disks and the minimum
disk space requirements based on the redundancy type, for installing Oracle
Clusterware files, and installing the starter database, for various Oracle Cluster
deployments.
Table 8-1 Oracle ASM Disk Space Minimum Requirements for Oracle Database

Redundancy Level     Minimum Number of Disks   Data Files   Recovery Files   Total Storage
External             1                         4.5 GB       12.9 GB          17.4 GB
Normal               2                         8.6 GB       25.8 GB          34.4 GB
High/Flex/Extended   3                         12.9 GB      38.7 GB          51.6 GB

Table 8-2 Oracle ASM Disk Space Minimum Requirements for Oracle Database (non-CDB)

Redundancy Level     Minimum Number of Disks   Data Files   Recovery Files   Total Storage
External             1                         2.7 GB       7.8 GB           10.5 GB
Normal               2                         5.2 GB       15.6 GB          20.8 GB
High/Flex/Extended   3                         7.8 GB       23.4 GB          31.2 GB
Note:
The DATA disk group stores OCR and voting files, and the MGMT disk group
stores GIMR and Oracle Clusterware backup files.
Redundancy Level     DATA Disk Group   MGMT Disk Group                       Rapid Home Provisioning   Total Storage
External             1 GB              28 GB (each node beyond four: 5 GB)   1 GB                      30 GB
Normal               2 GB              56 GB (each node beyond four: 5 GB)   2 GB                      60 GB
High/Flex/Extended   3 GB              84 GB (each node beyond four: 5 GB)   3 GB                      90 GB
• Oracle recommends that you use separate disk groups for Oracle Clusterware
files and for GIMR and Oracle Clusterware backup files.
• The initial GIMR sizing for the Oracle Standalone Cluster is for up to four
nodes. You must add additional storage space to the disk group containing the
GIMR and Oracle Clusterware backup files for each new node added to the
cluster.
• By default, all new Oracle Standalone Cluster deployments are configured with
RHP for patching that cluster only. This deployment requires a minimal ACFS file
system that is automatically configured in the same disk group as the GIMR.
Table 8-4 Minimum Available Space Requirements for Oracle Member Cluster
with Local ASM
• For an Oracle Member Cluster, the storage space for the GIMR is pre-allocated in
the centralized GIMR on the Oracle Domain Services Cluster, as described in
Table 8-5.
• Oracle recommends that you use separate disk groups for Oracle Clusterware
files and for Oracle Clusterware backup files.
Table 8-5 Minimum Available Space Requirements for Oracle Domain Services
Cluster
Redundancy Level: External
• DATA Disk Group: 1 GB, and 1 GB for each Oracle Member Cluster
• MGMT Disk Group: 140 GB (excluding RHP)
• Trace File Analyzer: 200 GB
• Total Storage: 345 GB
• Additional Oracle Member Cluster: GIMR for each Oracle Member Cluster
  beyond four: 28 GB; RHP: 100 GB

Redundancy Level: Normal
• DATA Disk Group: 2 GB, and 2 GB for each Oracle Member Cluster
• MGMT Disk Group: 280 GB (excluding RHP)
• Trace File Analyzer: 400 GB
• Total Storage: 690 GB
• Additional Oracle Member Cluster: GIMR for each Oracle Member Cluster
  beyond four: 56 GB; RHP: 200 GB

Redundancy Level: High/Flex/Extended
• DATA Disk Group: 3 GB, and 3 GB for each Oracle Member Cluster
• MGMT Disk Group: 420 GB (excluding RHP)
• Trace File Analyzer: 600 GB
• Total Storage: 1035 GB
• Additional Oracle Member Cluster: GIMR for each Oracle Member Cluster
  beyond four: 84 GB; RHP: 300 GB
• By default, the initial space allocation for the GIMR is for the Oracle Domain
Services Cluster and four or fewer Oracle Member Clusters. You must add
additional storage space for each Oracle Member Cluster beyond four.
• At the time of installation, TFA storage space requirements are evaluated to
ensure that growth to the maximum size is possible. Only the minimum space is
allocated for the ACFS file system, which extends automatically up to the
maximum value, as required.
• Oracle recommends that you pre-allocate storage space for the largest
foreseeable configuration of the Oracle Domain Services Cluster according to the
guidelines in the above table.
Related Topics
• About Oracle Standalone Clusters
An Oracle Standalone Cluster hosts all Oracle Grid Infrastructure services and
Oracle ASM locally and requires direct access to shared storage.
• About Oracle Cluster Domain and Oracle Domain Services Cluster
An Oracle Cluster Domain is a choice of deployment architecture for new clusters,
introduced in Oracle Clusterware 12c Release 2.
• About Oracle Member Clusters
Oracle Member Clusters use centralized services from the Oracle Domain
Services Cluster and can host databases or applications. Oracle Member Clusters
can be of two types - Oracle Member Clusters for Oracle Databases or Oracle
Member Clusters for applications.
$ $ORACLE_HOME/bin/asmcmd
ASMCMD> startup
2. Enter one of the following commands to view the existing disk groups, their
redundancy level, and the amount of free disk space in each one:
ASMCMD> lsdg
or
$ORACLE_HOME/bin/asmcmd -p lsdg
The lsdg command lists information about mounted disk groups only.
3. From the output, identify a disk group with the appropriate redundancy level and
note the free space that it contains.
4. If necessary, install or identify the additional disk devices required to meet the
storage requirements for your installation.
Note:
If you are adding devices to an existing disk group, then Oracle recommends
that you use devices that have the same size and performance
characteristics as the existing devices in that disk group.
• Although you can specify a logical volume as a device in an Oracle ASM disk
group, Oracle does not recommend its use because it adds a layer of complexity
that is unnecessary with Oracle ASM. In addition, Oracle RAC requires a cluster
logical volume manager in case you decide to use a logical volume with Oracle
ASM and Oracle RAC.
1. If necessary, create an exported directory for the disk group files on the NAS
device.
2. Switch user to root.
3. Create a mount point directory on the local system.
For example:
# mkdir -p /mnt/oracleasm
4. To ensure that the NFS file system is mounted when the system restarts, add an
entry for the file system in the Oracle Solaris mount file /etc/vfstab.
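An example entry, as a sketch in the Solaris /etc/vfstab field layout (the server name, export path, and mount options shown are illustrative; verify the current recommended options against My Oracle Support note 359515.1, referenced below):

```
nfs-server:/vol/oracleasm  -  /mnt/oracleasm  nfs  -  yes  rw,hard,bg,nointr,rsize=32768,wsize=32768,proto=tcp,noac,forcedirectio,vers=3
```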
See Also:
My Oracle Support Note 359515.1 for updated NAS mount option
information:
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=359515.1
5. Enter a command similar to the following to mount the NFS on the local system:
# mount /mnt/oracleasm
6. Choose a name for the disk group to create, and place it under the mount point.
For example, if you want to set up a disk group for a sales database:
# mkdir /mnt/oracleasm/sales1
7. Use commands similar to the following to create the required number of zero-
padded files in this directory:
# dd if=/dev/zero of=/mnt/oracleasm/sales1/disk1 bs=1024k count=1000
This example creates 1 GB files on the NFS file system. You must create one,
two, or three files respectively to create an external, normal, or high redundancy
disk group.
Note:
Creating multiple zero-padded files on the same NAS device does not
guard against NAS failure. Instead, create one file for each NAS device
and mirror them using the Oracle ASM technology.
8. Enter commands similar to the following to change the owner, group, and
permissions on the directory and files that you created:
In this example, the installation owner is grid and the OSASM group is asmadmin.
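The commands themselves are missing from this step; a sketch of what they would look like, assuming the owner grid and group asmadmin stated above and the directory and files created in the earlier steps:

```shell
# On the real system, run as root against the disk group directory:
#   chown -R grid:asmadmin /mnt/oracleasm/sales1
#   chmod 660 /mnt/oracleasm/sales1/disk*
# The demo below applies the same mode to throwaway files so it can run
# anywhere without root or the grid user; it illustrates only the chmod step.
dir=$(mktemp -d)
touch "$dir/disk1" "$dir/disk2" "$dir/disk3"
chmod 660 "$dir"/disk*
ls -l "$dir"
rm -rf "$dir"
```

Mode 660 gives the owner and group read-write access while denying access to other users, which is the usual protection for Oracle ASM disk files.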
9. During Oracle Database installations, edit the Oracle ASM disk discovery string to
specify a regular expression that matches the file names you created.
For example:
/mnt/oracleasm/sales1/
Configuring Storage Device Path Persistence Using Oracle ASMFD
WARNING:
When you configure Oracle ASM, including Oracle ASMFD, do not modify or
erase the contents of the Oracle ASM disks, or modify any files, including the
configuration files.
Note:
Oracle ASMFD is supported on Linux x86-64 and Oracle Solaris operating
systems.
Using Disk Groups with Oracle Database Files on Oracle ASM
Related Topics
• Oracle Automatic Storage Management Administrator's Guide
• Identifying and Using Existing Oracle Database Disk Groups on Oracle ASM
Identify existing disk groups and determine the free disk space that they contain.
Optionally, identify failure groups for the Oracle ASM disk group devices.
• Creating Disk Groups for Oracle Database Data Files
If you are sure that a suitable disk group does not exist on the system, then install
or identify appropriate disk devices to add to a new disk group.
• Creating Directories for Oracle Database Files
You can store Oracle Database and recovery files on a separate file system from
the configuration files.
Note:
If you define custom failure groups, then you must specify a minimum of two
failure groups for normal redundancy and three failure groups for high
redundancy.
See Also:
Oracle Automatic Storage Management Administrator's Guide for information
about Oracle ASM disk discovery
1. Use the following command to determine the free disk space on each mounted file
system:
# df -h
Configuring File System Storage for Oracle Database
Option           Description
Database Files   Select one of the following:
                 • A single file system with at least 1.5 GB of free disk space
                 • Two or more file systems with at least 3.5 GB of free disk
                   space in total
Recovery Files   Choose a file system with at least 2 GB of free disk space
If you are using the same file system for multiple file types, then add the disk
space requirements for each type to determine the total disk space requirement.
3. Note the names of the mount point directories for the file systems that you
identified.
4. If the user performing installation has permissions to create directories on the
disks where you plan to install Oracle Database, then DBCA creates the Oracle
Database file directory, and the Recovery file directory. If the user performing
installation does not have write access, then you must create these directories
manually.
For example, given the user oracle and Oracle Inventory Group oinstall, and
using the paths /u03/oradata/wrk_area for Oracle Database files,
and /u01/oradata/rcv_area for the recovery area, these commands create
the recommended subdirectories in each of the mount point directories and set the
appropriate owner, group, and permissions on them:
• Database file directory:
# mkdir -p /u03/oradata/wrk_area
# chown oracle:oinstall /u03/oradata/wrk_area
# chmod 775 /u03/oradata/wrk_area
• Recovery file directory:
# mkdir -p /u01/oradata/rcv_area
# chown oracle:oinstall /u01/oradata/rcv_area
# chmod 775 /u01/oradata/rcv_area
If you plan to place storage on Network File System (NFS) protocol devices, then
Oracle recommends that you use Oracle Direct NFS (dNFS) to take advantage of
performance optimizations built into the Oracle Direct NFS client.
• Configuring NFS Buffer Size Parameters for Oracle Database
Set the values for the NFS buffer size parameters rsize and wsize to at least
32768.
• Checking TCP Network Protocol Buffer for Direct NFS Client
Check your TCP network buffer size to ensure that it is adequate for the speed of
your servers.
• Creating an oranfstab File for Direct NFS Client
Direct NFS uses a configuration file, oranfstab, to determine the available
mount points.
• Enabling and Disabling Direct NFS Client Control of NFS
Use these commands to enable or disable Direct NFS Client Oracle Disk Manager
Control of NFS.
• Enabling Hybrid Columnar Compression on Direct NFS Client
Perform these steps to enable Hybrid Columnar Compression (HCC) on Direct
NFS Client:
Related Topics
• My Oracle Support note 1496040.1
Direct NFS Client issues writes at wtmax granularity to the NFS server.
Related Topics
• My Oracle Support note 359515.1
Oracle recommends that you set the value based on the link speed of your servers.
For example:
On Oracle Solaris 10:
Additionally, check your TCP send window size and TCP receive window size to
ensure that they are adequate for the speed of your servers.
To check the current TCP send window size and TCP receive window size on Oracle
Solaris 10:
To check the current TCP send window size and TCP receive window size on Oracle
Solaris 11:
Oracle recommends that you set the value based on the link speed of your servers.
For example:
On Oracle Solaris 10:
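The commands for these checks are missing here; a sketch of the standard Solaris tooling (ndd on Oracle Solaris 10, ipadm on Oracle Solaris 11 — the value 4194304 is an example, not a recommendation):

```
# Oracle Solaris 10: check and set the maximum TCP buffer size
ndd -get /dev/tcp tcp_max_buf
ndd -set /dev/tcp tcp_max_buf 4194304

# Oracle Solaris 10: check the TCP send and receive window sizes
ndd -get /dev/tcp tcp_xmit_hiwat
ndd -get /dev/tcp tcp_recv_hiwat

# Oracle Solaris 11: the equivalent ipadm properties
ipadm show-prop -p max_buf tcp
ipadm show-prop -p send_buf tcp
ipadm show-prop -p recv_buf tcp
```

Run these commands as root on each cluster node, and confirm the values against the link speed guidance above before changing them.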
krb5i: Direct NFS runs with Kerberos authentication and NFS integrity. Server
is authenticated and each of the message transfers is checked for integrity.
krb5p: Direct NFS runs with Kerberos authentication and NFS privacy. Server
is authenticated, and all data is completely encrypted.
The security parameter, if specified, takes precedence over the
security_default parameter. If neither of these parameters is specified, then
sys is the default authentication.
For NFS server Kerberos security setup, review the relevant NFS server
documentation. For Kerberos client setup, review the relevant operating system
documentation.
• dontroute
Specifies that outgoing messages should not be routed by the operating system,
but instead sent using the IP address to which they are bound.
Note:
The dontroute option is a POSIX option, which sometimes does not
work on Linux systems with multiple paths in the same subnet.
• management
Enables Direct NFS Client to use the management interface for SNMP queries.
You can use this parameter if SNMP is running on separate management
interfaces on the NFS server. The default value is the server parameter value.
• community
Specifies the community string for use in SNMP queries. Default value is public.
The following examples show three possible NFS server entries in oranfstab. A single
oranfstab can have multiple NFS server entries.
server: MyDataServer1
local: 192.0.2.0
path: 192.0.2.1
local: 192.0.100.0
path: 192.0.100.1
export: /vol/oradata1 mount: /mnt/oradata1
Example 8-2 Using Local and Path in the Same Subnet, with dontroute
Local and path in the same subnet, where dontroute is specified:
server: MyDataServer2
local: 192.0.2.0
path: 192.0.2.128
local: 192.0.2.1
path: 192.0.2.129
dontroute
export: /vol/oradata2 mount: /mnt/oradata2
server: MyDataServer3
local: LocalPath1
path: NfsPath1
local: LocalPath2
path: NfsPath2
local: LocalPath3
path: NfsPath3
local: LocalPath4
path: NfsPath4
dontroute
export: /vol/oradata3 mount: /mnt/oradata3
export: /vol/oradata4 mount: /mnt/oradata4
export: /vol/oradata5 mount: /mnt/oradata5
export: /vol/oradata6 mount: /mnt/oradata6
management: MgmtPath1
community: private
server: nfsserver
local: 192.0.2.0
path: 192.0.2.2
local: 192.0.2.3
path: 192.0.2.4
export: /private/oracle1/logs mount: /logs security: krb5
export: /private/oracle1/data mount: /data security: krb5p
export: /private/oracle1/archive mount: /archive security: sys
export: /private/oracle1/data1 mount: /data1
security_default: krb5i
Creating Member Cluster Manifest File for Oracle Member Clusters
Note:
If you remove an NFS path that an Oracle Database is using, then you must
restart the database for the change to take effect.
2. If SNMP is enabled on an interface other than the NFS server, then configure
oranfstab using the management parameter.
3. If SNMP is configured using a community string other than public, then configure
oranfstab file using the community parameter.
4. Ensure that libnetsnmp.so is installed by checking if snmpget is available.
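The step above can be sketched as a quick shell check (the assumption here is that an snmpget binary on the PATH is a reasonable proxy for the net-snmp client libraries, including libnetsnmp.so, being installed):

```shell
# Check whether the net-snmp client tools are available.
if command -v snmpget >/dev/null 2>&1; then
    echo "snmpget found: net-snmp client tools are installed"
else
    echo "snmpget not found: install the net-snmp client packages"
fi
```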
2. From the Grid home on the Oracle Domain Services Cluster, create the member
cluster manifest file:
cd Grid_home/bin
./crsctl create member_cluster_configuration member_cluster_name
  -file cluster_manifest_file_name -member_type database|application
  [-version member_cluster_version]
  [-domain_services [asm_storage local|direct|indirect] [rhp] [acfs]]
$ cd /u01/app/18.0.0/grid
Configuring Oracle Automatic Storage Management Cluster File System
3. Ensure that the Oracle Grid Infrastructure installation owner has read and write
permissions on the storage mountpoint you want to use. For example, if you want
to use the mountpoint /u02/acfsmounts/:
$ ls -l /u02/acfsmounts
4. Start Oracle ASM Configuration Assistant as the grid installation owner. For
example:
./asmca
5. The Configure ASM: ASM Disk Groups page shows you the Oracle ASM disk
group you created during installation. Click the ASM Cluster File Systems tab.
6. On the ASM Cluster File Systems page, right-click the Data disk, then select
Create ACFS for Database Use.
7. In the Create ACFS for Database window, enter the following information:
• Volume Name: Enter the name of the database home. The name must be
unique in your enterprise. For example: dbase_01
• Mount Point: Enter the directory path for the mount point. For
example: /u02/acfsmounts/dbase_01
Make a note of this mount point for future reference.
• Size (GB): Enter the size you want the database home to be, in gigabytes. The
default, 12 GB, is also the minimum recommended size.
• Owner Name: Enter the name of the Oracle Database installation owner you
plan to use to install the database. For example: oracle1
• Owner Group: Enter the OSDBA group whose members you plan to provide
when you install the database. Members of this group are given operating
system authentication for the SYSDBA privileges on the database. For
example: dba1
Select Automatically run configuration commands to run ASMCA configuration
commands automatically. To use this option, you must provide the root credentials
on the ASMCA Settings page.
Click OK when you have completed your entries.
8. If you did not select to run configuration commands automatically, then run the
script generated by Oracle ASM Configuration Assistant as a privileged user
(root). On an Oracle Clusterware environment, the script registers the ACFS as a
resource managed by Oracle Clusterware. Registering ACFS as a resource helps
Oracle Clusterware to mount ACFS automatically in proper order when ACFS is
used for an Oracle RAC Oracle Database home.
9. During Oracle RAC installation, ensure that you or the DBA who installs Oracle
RAC selects for the Oracle home the mount point you provided in the Mount Point
field (in the preceding example, /u02/acfsmounts/dbase_01).
Related Topics
• Oracle Automatic Storage Management Administrator's Guide
9
Installing Oracle Grid Infrastructure
Review this information for installation and deployment options for Oracle Grid
Infrastructure.
Oracle Database and Oracle Grid Infrastructure installation software is available in
multiple media, and can be installed using several options. The Oracle Grid
Infrastructure software is available as an image, available for download from the
Oracle Technology Network website, or the Oracle Software Delivery Cloud portal. In
most cases, you use the graphical user interface (GUI) provided by Oracle Universal
Installer to install the software. You can also use Oracle Universal Installer to complete
silent mode installations, without using the GUI. You can also use Rapid Home
Provisioning for subsequent Oracle Grid Infrastructure and Oracle Database
deployments.
• About Image-Based Oracle Grid Infrastructure Installation
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), installation and
configuration of Oracle Grid Infrastructure software is simplified with image-based
installation.
• Setup Wizard Installation Options for Creating Images
Before you start the setup wizards for your Oracle Database or Oracle Grid
Infrastructure installation, decide if you want to use any of the available image-
creation options.
• Understanding Cluster Configuration Options
Review these topics to understand the cluster configuration options available in
Oracle Grid Infrastructure 18c.
• Installing Oracle Grid Infrastructure for a New Cluster
Review these procedures to install the cluster configuration options available in
this release of Oracle Grid Infrastructure.
• Installing Oracle Grid Infrastructure Using a Cluster Configuration File
During installation of Oracle Grid Infrastructure, you have the option of either
providing cluster configuration information manually, or using a cluster
configuration file.
• Installing Only the Oracle Grid Infrastructure Software
This installation option requires manual postinstallation steps to enable the Oracle
Grid Infrastructure software.
• About Deploying Oracle Grid Infrastructure Using Rapid Home Provisioning and
Maintenance
Rapid Home Provisioning and Maintenance (RHP) is a software lifecycle
management method for provisioning and maintaining Oracle homes. RHP
enables mass deployment and maintenance of standard operating environments
for databases, clusters, and user-defined software types.
• Confirming Oracle Clusterware Function
After Oracle Grid Infrastructure installation, confirm that your Oracle Clusterware
installation is installed and running correctly.
About Image-Based Oracle Grid Infrastructure Installation
Note:
You must extract the image software into the directory where you want your
Grid home to be located, and then run the gridSetup.sh script from that
directory to start the Oracle Grid Infrastructure Setup Wizard. Ensure that the
Grid home directory path you create is in compliance with the Oracle Optimal
Flexible Architecture recommendations.
Related Topics
• Installing Oracle Grid Infrastructure for a New Cluster
Review these procedures to install the cluster configuration options available in
this release of Oracle Grid Infrastructure.
Setup Wizard Installation Options for Creating Images
Option                Description
-createGoldImage      Creates a gold image from the current Oracle home.
-destinationLocation  Specify the complete path, or location, where the gold
                      image will be created.
-exclFiles            Specify the complete paths to the files to be excluded
                      from the newly created gold image.
-help                 Displays help for all the available options.
For example:
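The example command is missing here; a hypothetical invocation reconstructed from the option table above and the note that follows (the -silent flag is an assumption):

```
./gridSetup.sh -createGoldImage -destinationLocation /tmp/my_db_images -silent
```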
Where:
/tmp/my_db_images is a temporary file location where the image zip file is created.
Understanding Cluster Configuration Options
[Figure: Oracle Member Cluster configuration, showing the private network, the SAN, and Oracle ASM network storage.]
Related Topics
• About Oracle Member Clusters
Oracle Member Clusters use centralized services from the Oracle Domain
Services Cluster and can host databases or applications. Oracle Member Clusters
can be of two types - Oracle Member Clusters for Oracle Databases or Oracle
Member Clusters for applications.
Oracle Member Clusters do not need direct connectivity to shared disks. Using the
shared Oracle ASM service, they can leverage network connectivity to the IO Service
or the ACFS Remote Service to access a centrally managed pool of storage. To use
shared Oracle ASM services from the Oracle Domain Services Cluster, the member
cluster needs connectivity to the Oracle ASM networks of the Oracle Domain Services
Cluster.
Oracle Member Clusters cannot provide services to other clusters. For example, you
cannot configure and use a member cluster as a GNS server or Rapid Home
Provisioning Server.
Note:
Before running Oracle Universal Installer, you must specify the Oracle
Domain Services Cluster configuration details for the Oracle Member Cluster
by creating the Member Cluster Manifest file.
Oracle Member Cluster for Oracle Database does not support Oracle
Database 12.1 or earlier, where Oracle Member Cluster is configured with
Oracle ASM storage as direct or indirect.
Related Topics
• Creating Member Cluster Manifest File for Oracle Member Clusters
Create a Member Cluster Manifest file to specify the Oracle Member Cluster
configuration for the Grid Infrastructure Management Repository (GIMR), Grid
Naming Service, Oracle ASM storage server, and Rapid Home Provisioning
configuration.
Table 9-2 Oracle ASM Disk Group Redundancy Levels for Oracle Extended
Clusters with 2 Data Sites
Related Topics
• Converting to Oracle Extended Cluster After Upgrading Oracle Grid Infrastructure
Review this information to convert to an Oracle Extended Cluster after upgrading
Oracle Grid Infrastructure. Oracle Extended Cluster enables you to deploy Oracle
RAC databases on a cluster, in which some of the nodes are located in different
sites.
• Oracle Clusterware Administration and Deployment Guide
Installing Oracle Grid Infrastructure for a New Cluster
Note:
These installation instructions assume you do not already have any Oracle
software installed on your system. If you have already installed Oracle
ASMLIB, then you cannot install Oracle ASM Filter Driver (Oracle ASMFD)
until you uninstall Oracle ASMLIB. You can use Oracle ASMLIB instead of
Oracle ASMFD for managing the disks used by Oracle ASM.
Install Oracle Grid Infrastructure software for Oracle Standalone Cluster using the
following procedure:
1. As the grid user, download the Oracle Grid Infrastructure image files and extract
the files into the Grid home. For example:
mkdir -p /u01/app/18.0.0/grid
chown grid:oinstall /u01/app/18.0.0/grid
cd /u01/app/18.0.0/grid
unzip -q download_location/grid.zip
grid.zip is the name of the Oracle Grid Infrastructure image zip file.
Note:
• You must extract the zip image software into the directory where you
want your Grid home to be located.
• Download and copy the Oracle Grid Infrastructure image files to the
local node only. During installation, the software is copied and
installed on all other nodes in the cluster.
2. Configure the shared disks for use with Oracle ASM Filter Driver:
a. Log in as the root user and set the environment variable ORACLE_HOME to the
location of the Grid home.
For C shell:
su root
setenv ORACLE_HOME /u01/app/18.0.0/grid
For bash shell:
su root
export ORACLE_HOME=/u01/app/18.0.0/grid
b. Use Oracle ASM command line tool (ASMCMD) to provision the disk devices
for use with Oracle ASM Filter Driver.
# cd /u01/app/18.0.0/grid/bin
# ./asmcmd afd_label DATA1 /dev/rdsk/cXtYdZsA --init
# ./asmcmd afd_label DATA2 /dev/rdsk/cXtYdZsB --init
# ./asmcmd afd_label DATA3 /dev/rdsk/cXtYdZsC --init
c. Verify the device has been marked for use with Oracle ASMFD.
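For step 2c, a sketch of one way to perform the verification (afd_lslbl lists the Oracle ASMFD label on a device; the device path placeholder matches the labels created above):

```
# cd /u01/app/18.0.0/grid/bin
# ./asmcmd afd_lslbl /dev/rdsk/cXtYdZsA
```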
3. Log in as the grid user, and start the Oracle Grid Infrastructure installer by
running the following command:
/u01/app/18.0.0/grid/gridSetup.sh
The installer starts and the Select Configuration Option window appears.
4. Choose the option Configure Grid Infrastructure for a New Cluster, then click
Next.
The Select Cluster Configuration window appears.
5. Choose the option Configure an Oracle Standalone Cluster, then click Next.
Select the Configure as Extended Cluster option to extend an Oracle RAC
cluster across two or more separate sites, each equipped with its own storage.
The Grid Plug and Play Information window appears.
6. In the Cluster Name and SCAN Name fields, enter the names for your cluster and
cluster SCAN that are unique throughout your entire enterprise network.
You can select Configure GNS if you have configured your domain name server
(DNS) to send to the GNS virtual IP address name resolution requests for the
subdomain GNS serves, as explained in this guide.
For cluster member node public and VIP network addresses, provide the
information required depending on the kind of cluster you are configuring:
• If you plan to use automatic cluster configuration with DHCP addresses
configured and resolved through GNS, then you only need to provide the GNS
VIP names as configured on your DNS.
• If you plan to use manual cluster configuration, with fixed IP addresses
configured and resolved on your DNS, then provide the SCAN names for the
cluster, and the public names, and VIP names for each cluster member node.
For example, you can choose a name that is based on the node names'
common prefix. The cluster name can be mycluster and the cluster SCAN
name can be mycluster-scan.
Click Next.
The Cluster Node Information window appears.
7. In the Public Hostname column of the table of cluster nodes, you should see your
local node, for example node1.example.com.
The following is a list of additional information about node IP addresses:
• For the local node only, OUI automatically fills in public and VIP fields. If your
system uses vendor clusterware, then OUI may fill additional fields.
• Host names and virtual host names are not domain-qualified. If you provide a
domain in the address field during installation, then OUI removes the domain
from the address.
• Interfaces identified as private for private IP addresses should not be
accessible as public interfaces. Using public interfaces for Cache Fusion can
cause performance problems.
• When you enter the public node name, use the primary host name of each
node. In other words, use the name displayed by the /bin/hostname
command.
a. Click Add to add another node to the cluster.
b. Enter the second node's public name (node2), and virtual IP name (node2-
vip), then click OK. Provide the virtual IP (VIP) host name for all cluster
nodes, or none.
Chapter 9
Installing Oracle Grid Infrastructure for a New Cluster
You are returned to the Cluster Node Information window. You should now
see all nodes listed in the table of cluster nodes. Make sure the Role column is
set to HUB for both nodes. To add Leaf Nodes, you must configure GNS.
c. Make sure all nodes are selected, then click the SSH Connectivity button at
the bottom of the window.
The bottom panel of the window displays the SSH Connectivity information.
d. Enter the operating system user name and password for the Oracle software
owner (grid). If you have configured SSH connectivity between the nodes,
then select the Reuse private and public keys existing in user home
option. Click Setup.
A message window appears, indicating that it might take several minutes to
configure SSH connectivity between the nodes. After a short period, another
message window appears indicating that passwordless SSH connectivity has
been established between the cluster nodes. Click OK to continue.
e. When returned to the Cluster Node Information window, click Next to continue.
The Specify Network Interface Usage window appears.
8. Select the usage type for each network interface displayed.
Verify that each interface has the correct interface type associated with it. If you
have network interfaces that should not be used by Oracle Clusterware, then set
the network interface type to Do Not Use. For example, if you have only two
network interfaces, then set the public interface to have a Use for value of Public
and set the private network interface to have a Use for value of ASM & Private.
Click Next. The Storage Option Information window appears.
9. Select the Oracle ASM storage configuration option:
a. If you select Configure ASM on NAS, then specify the NFS
mount points for the Oracle ASM disk groups, and optionally, the Grid
Infrastructure Management Repository (GIMR) disk group in the Specify NFS
Locations for ASM Disk Groups window.
b. If you select Configure ASM using block devices, then click Next. The Grid
Infrastructure Management Repository Option window appears.
10. Choose whether you want to store the Grid Infrastructure Management Repository
in a separate Oracle ASM disk group, then click Next.
The Create ASM Disk Group window appears.
11. Provide the name and specifications for the Oracle ASM disk group.
a. In the Disk Group Name field, enter a name for the disk group, for example
DATA.
b. Choose the Redundancy level for this disk group. Normal is the recommended
option.
c. In the Add Disks section, choose the disks to add to this disk group.
In the Add Disks section you should see the disks that you labeled in Step 2. If
you do not see the disks, click the Change Discovery Path button and
provide a path and pattern match for the disk, for example, /dev/sd*
During installation, disks labelled as Oracle ASMFD disks or Oracle ASMLIB
disks are listed as candidate disks when using the default discovery string.
However, if the disk has a header status of MEMBER, then it is not a
candidate disk.
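The discovery path is a shell-style glob pattern rather than a single literal path. A quick illustration of the matching behavior, using fake device names in a scratch directory:

```shell
# Illustration only: a discovery string such as /dev/sd* is a glob pattern.
# We mimic it against fake "device" names in a scratch directory.
d=$(mktemp -d)
touch "$d/sda" "$d/sdb" "$d/nvme0n1"
ls "$d"/sd* | wc -l    # only sda and sdb match the sd* pattern
rm -rf "$d"
```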
d. If you want to use Oracle ASM Filter Driver (Oracle ASMFD) to manage your
Oracle ASM disk devices, then select the option Configure Oracle ASM Filter
Driver.
If you are installing on Linux systems, and you want to use Oracle ASM Filter
Driver (Oracle ASMFD) to manage your Oracle ASM disk devices, then you
must deinstall Oracle ASM library driver (Oracle ASMLIB) before starting
Oracle Grid Infrastructure installation.
When you have finished providing the information for the disk group, click Next.
12. If you selected to use a different disk group for the GIMR, then the Grid
Infrastructure Management Repository Option window appears. Provide the name
and specifications for the GIMR disk group.
a. In the Disk Group Name field, enter a name for the disk group, for example
DATA.
b. Choose the Redundancy level for this disk group. Normal is the recommended
option.
c. In the Add Disks section, choose the disks to add to this disk group.
When you have finished providing the information for the disk group, click Next.
The Specify ASM Password window appears.
13. Choose the same password for the Oracle ASM SYS and ASMSNMP accounts, or
specify different passwords for each account, then click Next.
The Failure Isolation Support window appears.
14. Select the option Do not use Intelligent Platform Management Interface (IPMI),
then click Next.
The Specify Management Options window appears.
15. If you have Enterprise Manager Cloud Control installed in your enterprise, then
choose the option Register with Enterprise Manager (EM) Cloud Control and
provide the EM configuration information. If you do not have Enterprise Manager
Cloud Control installed in your enterprise, then click Next to continue.
The Privileged Operating System Groups window appears.
16. Accept the default operating system group names for Oracle ASM administration
and click Next.
The Specify Install Location window appears.
17. Specify the directory to use for the Oracle base for the Oracle Grid Infrastructure
installation, then click Next. The Oracle base directory must be different from the
Oracle home directory.
If you copied the Oracle Grid Infrastructure installation files into the Oracle Grid
home directory as directed in Step 1, then the default location for the Oracle base
directory should display as /u01/app/grid.
If you have not installed Oracle software previously on this computer, then the
Create Inventory window appears.
18. Change the path for the inventory directory, if required. Then, click Next.
If you are using the same directory names as the examples in this book, then it
should show a value of /u01/app/oraInventory. The group name for the
oraInventory directory should show oinstall.
The installer displays a progress indicator enabling you to monitor the installation
process.
22. If you did not configure automation of the root scripts, then you are required to run
certain scripts as the root user, as specified in the Execute Configuration Scripts
window. Do not click OK until you have run all the scripts. Run the scripts on all
nodes as directed, in the order shown.
For example, on Oracle Linux you perform the following steps (note that for clarity,
the examples show the current user, node and directory in the prompt):
a. As the grid user on node1, open a terminal window, and enter the following
commands:
b. Enter the password for the root user, and then enter the following command
to run the first script on node1:
d. Enter the password for the root user, and then enter the following command
to run the first script on node2:
[root@node2 oraInventory]#./orainstRoot.sh
Note:
You must run the root.sh script on the first node and wait for it to
finish. You can run root.sh scripts concurrently on all other nodes
except for the last node on which you run the script. Like the first
node, the root.sh script on the last node must be run separately.
f. After the root.sh script finishes on node1, go to the terminal window you
opened in part c of this step. As the root user on node2, enter the following
commands:
After the root.sh script completes, return to the Oracle Universal Installer
window where the Installer prompted you to run the orainstRoot.sh and
root.sh scripts. Click OK.
The software installation monitoring window reappears.
23. Continue monitoring the installation until the Finish window appears. Then click
Close to complete the installation process and exit the installer.
Caution:
After installation is complete, do not manually remove, or run cron jobs
that remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files
while Oracle software is running on the server. If you remove these files,
then the Oracle software can encounter intermittent hangs. Oracle
Clusterware installations can fail with the error:
CRS-0184: Cannot communicate with the CRS daemon.
After your Oracle Grid Infrastructure installation is complete, you can install Oracle
Database on a cluster node for high availability, or install Oracle RAC.
See Also:
Oracle Real Application Clusters Installation Guide or Oracle Database
Installation Guide for your platform for information on installing Oracle
Database
Note:
These installation instructions assume you do not already have any Oracle
software installed on your system. If you have already installed Oracle
ASMLIB, then you cannot install Oracle ASM Filter Driver (Oracle ASMFD)
until you uninstall Oracle ASMLIB. You can use Oracle ASMLIB instead of
Oracle ASMFD for managing the disks used by Oracle ASM.
mkdir -p /u01/app/18.0.0/grid
chown grid:oinstall /u01/app/18.0.0/grid
cd /u01/app/18.0.0/grid
unzip -q download_location/grid.zip
grid.zip is the name of the Oracle Grid Infrastructure image zip file.
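Before extracting, it can help to confirm that the target Grid home exists and is still empty, since the zip image must be extracted into the directory that becomes the Grid home. A sketch of such a pre-extraction check (demonstrated on a scratch directory; in practice the argument would be the Grid home path, such as /u01/app/18.0.0/grid):

```shell
# Sketch: sanity-check that a Grid home directory exists and is empty
# before unzipping the image into it. Demonstrated on a scratch directory.
check_grid_home() {
  dir=$1
  [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
  if [ -n "$(ls -A "$dir")" ]; then
    echo "not empty: $dir"
    return 1
  fi
  echo "ok: $dir"
}

g=$(mktemp -d)
check_grid_home "$g"          # empty directory: reports ok
touch "$g/leftover"
check_grid_home "$g" || true  # fails once any file is present
rm -rf "$g"
```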
Note:
• You must extract the zip image software into the directory where you
want your Grid home to be located.
• Download and copy the Oracle Grid Infrastructure image files to the
local node only. During installation, the software is copied and
installed on all other nodes in the cluster.
2. Configure the shared disks for use with Oracle ASM Filter Driver:
a. Log in as the root user and set the environment variable ORACLE_HOME to the
location of the Grid home.
For C shell:
su root
setenv ORACLE_HOME /u01/app/18.0.0/grid
For bash shell:
su root
export ORACLE_HOME=/u01/app/18.0.0/grid
b. Use Oracle ASM command line tool (ASMCMD) to provision the disk devices
for use with Oracle ASM Filter Driver.
# cd /u01/app/18.0.0/grid/bin
# ./asmcmd afd_label DATA1 /dev/rdsk/cXtYdZsA --init
# ./asmcmd afd_label DATA2 /dev/rdsk/cXtYdZsB --init
# ./asmcmd afd_label DATA3 /dev/rdsk/cXtYdZsC --init
c. Verify that the devices have been marked for use with Oracle ASMFD, for
example by running the asmcmd afd_lsdsk command.
3. Log in as the grid user, and start the Oracle Grid Infrastructure installer by
running the following command:
/u01/app/18.0.0/grid/gridSetup.sh
The installer starts and the Select Configuration Option window appears.
4. Choose the option Configure Grid Infrastructure for a New Cluster, then click
Next.
The Select Cluster Configuration window appears.
5. Choose the option Configure an Oracle Domain Services Cluster, then click
Next.
The Grid Plug and Play Information window appears.
6. In the Cluster Name and SCAN Name fields, enter the names for your cluster and
cluster scan that are unique throughout your entire enterprise network.
You can select Configure GNS if you have configured your domain name server
(DNS) to forward name resolution requests for the subdomain that GNS serves to
the GNS virtual IP address, as explained in this guide.
For cluster member node public and VIP network addresses, provide the
information required depending on the kind of cluster you are configuring:
• If you plan to use automatic cluster configuration with DHCP addresses
configured and resolved through GNS, then you only need to provide the GNS
VIP names as configured on your DNS.
• If you plan to use manual cluster configuration, with fixed IP addresses
configured and resolved on your DNS, then provide the SCAN name for the
cluster, and the public and VIP names for each cluster member node.
For example, you can choose a name that is based on the node names'
common prefix. This example uses the cluster name mycluster and the
cluster SCAN name of mycluster-scan.
Click Next.
The Cluster Node Information window appears.
7. In the Public Hostname column of the table of cluster nodes, you should see your
local node, for example node1.example.com.
The following is a list of additional information about node IP addresses:
• For the local node only, OUI automatically fills in public and VIP fields. If your
system uses vendor clusterware, then OUI may fill additional fields.
• Host names and virtual host names are not domain-qualified. If you provide a
domain in the address field during installation, then OUI removes the domain
from the address.
• Interfaces identified as private for private IP addresses should not be
accessible as public interfaces. Using public interfaces for Cache Fusion can
cause performance problems.
• When you enter the public node name, use the primary host name of each
node. In other words, use the name displayed by the /bin/hostname
command.
a. Click Add to add another node to the cluster.
b. Enter the second node's public name (node2), and virtual IP name (node2-
vip), then click OK. Provide the virtual IP (VIP) host name for all cluster
nodes, or none.
You are returned to the Cluster Node Information window. You should now
see all nodes listed in the table of cluster nodes. Make sure the Role column is
set to HUB for both nodes. To add Leaf Nodes, you must configure GNS.
c. Make sure all nodes are selected, then click the SSH Connectivity button at
the bottom of the window.
The bottom panel of the window displays the SSH Connectivity information.
d. Enter the operating system user name and password for the Oracle software
owner (grid). If you have configured SSH connectivity between the nodes,
then select the Reuse private and public keys existing in user home
option. Click Setup.
If you are using the same directory names as the examples in this book, then it
should show a value of /u01/app/oraInventory. The group name for the
oraInventory directory should show oinstall.
The Root Script Execution Configuration window appears.
17. Select the option to Automatically run configuration scripts. Enter the
credentials for the root user or a sudo account, then click Next.
Alternatively, you can Run the scripts manually as the root user at the end of the
installation process when prompted by the installer.
The Perform Prerequisite Checks window appears.
18. If any of the checks have a status of Failed and are not Fixable, then you must
manually correct these issues. After you have fixed the issue, you can click the
Check Again button to have the installer recheck the requirement and update the
status. Repeat as needed until all the checks have a status of Succeeded. Click
Next.
The Summary window appears.
19. Review the contents of the Summary window and then click Install.
The installer displays a progress indicator enabling you to monitor the installation
process.
20. If you did not configure automation of the root scripts, then you are required to run
certain scripts as the root user, as specified in the Execute Configuration Scripts
window. Do not click OK until you have run all the scripts. Run the scripts on all
nodes as directed, in the order shown.
For example, on Oracle Linux you perform the following steps (note that for clarity,
the examples show the current user, node and directory in the prompt):
a. As the grid user on node1, open a terminal window, and enter the following
commands:
b. Enter the password for the root user, and then enter the following command
to run the first script on node1:
d. Enter the password for the root user, and then enter the following command
to run the first script on node2:
Note:
You must run the root.sh script on the first node and wait for it to
finish. If your cluster has three or more nodes, then root.sh can be
run concurrently on all nodes but the first. Node numbers are
assigned according to the order of running root.sh. If you want to
create a particular node number assignment, then run the root
scripts in the order of the node assignments you want to make, and
wait for the script to finish running on each node before proceeding
to run the script on the next node. However, the Oracle system
identifiers (SIDs) for your Oracle RAC databases do not follow the
node numbers.
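The ordering rule above can be sketched as a simple driver loop. This is a dry run only: the run_root function below merely echoes, whereas on a real cluster each step would invoke root.sh on that node (and the remaining nodes could be run concurrently rather than in sequence):

```shell
# Dry-run sketch of the root.sh ordering rule: the first node runs alone
# and must finish before the rest; the remaining nodes may run
# concurrently, but running them in order fixes node number assignment.
run_root() { echo "root.sh on $1"; }   # placeholder: echoes instead of ssh

nodes="node1 node2 node3 node4"
set -- $nodes
first=$1
shift

run_root "$first"       # wait for the first node to finish
for n in "$@"; do       # remaining nodes; shown sequentially here to
  run_root "$n"         # make the node-number ordering deterministic
done
```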
f. After the root.sh script finishes on node1, go to the terminal window you
opened in part c of this step. As the root user on node2, enter the following
commands:
After the root.sh script completes, return to the OUI window where the
Installer prompted you to run the orainstRoot.sh and root.sh scripts. Click
OK.
The software installation monitoring window reappears.
When you run root.sh during Oracle Grid Infrastructure installation, the Trace File
Analyzer (TFA) Collector is also installed, in the directory grid_home/tfa.
21. After root.sh runs on all the nodes, OUI runs Net Configuration Assistant (netca)
and Cluster Verification Utility. These programs run without user intervention.
22. During the installation, Oracle Automatic Storage Management Configuration
Assistant (asmca) configures Oracle ASM for storage.
23. Continue monitoring the installation until the Finish window appears. Then click
Close to complete the installation process and exit the installer.
Caution:
After installation is complete, do not manually remove, or run cron jobs
that remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files
while Oracle software is running on the server. If you remove these files,
then the Oracle software can encounter intermittent hangs. Oracle
Clusterware installations can fail with the error:
CRS-0184: Cannot communicate with the CRS daemon.
After your Oracle Domain Services Cluster installation is complete, you can install
Oracle Member Clusters for Oracle Databases and Oracle Member Clusters for
Applications.
Note:
These installation instructions assume you do not already have any Oracle
software installed on your system. If you have already installed Oracle
ASMLIB, then you cannot install Oracle ASM Filter Driver (Oracle ASMFD)
until you uninstall Oracle ASMLIB. You can use Oracle ASMLIB instead of
Oracle ASMFD for managing the disks used by Oracle ASM.
To install the software for Oracle Member Cluster for Oracle Databases or
Applications
You must create a Member Cluster Manifest File as explained in this guide before
performing the installation.
Use this procedure to install an Oracle Member Cluster for Oracle Databases or
Oracle Member Cluster for Applications.
1. As the grid user, download the Oracle Grid Infrastructure image files and extract
the files into the Grid home. For example:
mkdir -p /u01/app/18.0.0/grid
chown grid:oinstall /u01/app/18.0.0/grid
cd /u01/app/18.0.0/grid
unzip -q download_location/grid.zip
grid.zip is the name of the Oracle Grid Infrastructure image zip file.
Note:
• You must extract the zip image software into the directory where you
want your Grid home to be located.
• Download and copy the Oracle Grid Infrastructure image files to the
local node only. During installation, the software is copied and
installed on all other nodes in the cluster.
2. Log in as the grid user, and start the Oracle Grid Infrastructure installer by
running the following command:
/u01/app/18.0.0/grid/gridSetup.sh
The installer starts and the Select Configuration Option window appears.
3. Choose the option Configure Grid Infrastructure for a New Cluster, then click
Next.
The Select Cluster Configuration window appears.
4. Choose either the Configure an Oracle Member Cluster for Oracle Databases
or Configure an Oracle Member Cluster for Applications option, then click
Next.
The Cluster Domain Services window appears.
5. Select the Manifest file that contains the configuration details about the
management repository and other services for the Oracle Member Cluster.
For Oracle Member Cluster for Oracle Databases, you can also specify the Grid
Naming Service and Oracle ASM Storage server details using a Member Cluster
Manifest file.
Click Next.
6. If you selected to configure an Oracle Member Cluster for applications, then the
Configure Virtual Access window appears. Provide a Cluster Name and optional
Virtual Host Name.
The virtual host name serves as a connection address for the Oracle Member
Cluster, and provides service access to the software applications that you want
the Oracle Member Cluster to install and run.
Click Next.
The Cluster Node Information window appears.
7. In the Public Hostname column of the table of cluster nodes, you should see your
local node, for example node1.example.com.
The following is a list of additional information about node IP addresses:
• For the local node only, Oracle Universal Installer (OUI) automatically fills in
public and VIP fields. If your system uses vendor clusterware, then OUI may
fill additional fields.
• Host names and virtual host names are not domain-qualified. If you provide a
domain in the address field during installation, then OUI removes the domain
from the address.
11. Specify the directory to use for the Oracle base for the Oracle Grid Infrastructure
installation, then click Next. The Oracle base directory must be different from the
Oracle home directory.
If you copied the Oracle Grid Infrastructure installation files into the Oracle Grid
home directory as directed in Step 1, then the default location for the Oracle base
directory should display as /u01/app/grid.
If you have not installed Oracle software previously on this computer, then the
Create Inventory window appears.
12. Change the path for the inventory directory, if required. Then, click Next.
If you are using the same directory names as the examples in this book, then it
should show a value of /u01/app/oraInventory. The group name for the
oraInventory directory should show oinstall.
The Root Script Execution Configuration window appears.
13. Select the option to Automatically run configuration scripts. Enter the
credentials for the root user or a sudo account, then click Next.
Alternatively, you can Run the scripts manually as the root user at the end of the
installation process when prompted by the installer.
The Perform Prerequisite Checks window appears.
14. If any of the checks have a status of Failed and are not Fixable, then you must
manually correct these issues. After you have fixed the issue, you can click the
Check Again button to have the installer recheck the requirement and update the
status. Repeat as needed until all the checks have a status of Succeeded. Click
Next.
The Summary window appears.
15. Review the contents of the Summary window and then click Install.
The installer displays a progress indicator enabling you to monitor the installation
process.
16. If you did not configure automation of the root scripts, then you are required to run
certain scripts as the root user, as specified in the Execute Configuration Scripts
window. Do not click OK until you have run the scripts. Run the scripts on
all nodes as directed, in the order shown.
For example, on Oracle Linux you perform the following steps (note that for clarity,
the examples show the current user, node and directory in the prompt):
a. As the grid user on node1, open a terminal window, and enter the following
commands:
b. Enter the password for the root user, and then enter the following command
to run the first script on node1:
d. Enter the password for the root user, and then enter the following command
to run the first script on node2:
Note:
You must run the root.sh script on the first node and wait for it to
finish. If your cluster has three or more nodes, then root.sh can be
run concurrently on all nodes but the first. Node numbers are
assigned according to the order of running root.sh. If you want to
create a particular node number assignment, then run the root
scripts in the order of the node assignments you want to make, and
wait for the script to finish running on each node before proceeding
to run the script on the next node. However, the Oracle system
identifiers (SIDs) for your Oracle RAC databases do not follow the
node numbers.
f. After the root.sh script finishes on node1, go to the terminal window you
opened in part c of this step. As the root user on node2, enter the following
commands:
After the root.sh script completes, return to the OUI window where the
Installer prompted you to run the orainstRoot.sh and root.sh scripts. Click
OK.
The software installation monitoring window reappears.
When you run root.sh during Oracle Grid Infrastructure installation, the Trace File
Analyzer (TFA) Collector is also installed, in the directory grid_home/tfa.
17. After root.sh runs on all the nodes, OUI runs Net Configuration Assistant (netca)
and Cluster Verification Utility. These programs run without user intervention.
Installing Oracle Grid Infrastructure Using a Cluster Configuration File
18. During installation of Oracle Member Cluster for Oracle Databases, if the Member
Cluster Manifest file does not include configuration details for Oracle ASM, then
Oracle Automatic Storage Management Configuration Assistant (asmca) configures
Oracle ASM for storage.
19. Continue monitoring the installation until the Finish window appears. Then click
Close to complete the installation process and exit the installer.
Caution:
After installation is complete, do not manually remove, or run cron jobs
that remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files
while Oracle software is running on the server. If you remove these files,
then the Oracle software can encounter intermittent hangs. Oracle
Clusterware installations can fail with the error:
CRS-0184: Cannot communicate with the CRS daemon.
After your Oracle Grid Infrastructure installation is complete, you can install Oracle
Database on a cluster node for high availability, install other applications, or install
Oracle RAC.
See Also:
Oracle Real Application Clusters Installation Guide or Oracle Database
Installation Guide for your platform for information on installing Oracle
Database
To create a cluster configuration file manually, start a text editor, and create a file that
provides the name of the public and virtual IP addresses for each cluster member
node, in the following format:
.
.
node-role can have either HUB or LEAF as its value. Specify one node per line,
separating the fields with either spaces or colons (:).
For example:
mynode1:mynode1-vip:/HUB
mynode2:mynode2-vip:/LEAF
#
# Cluster nodes configuration specification file
#
# Format:
# node [vip] [role-identifier] [site-name]
#
# node - Node's public host name
# vip - Node's virtual host name
# role-identifier - Node's role with "/" prefix - should be "/HUB" or "/LEAF"
# site-name - Node's assigned site
#
# Specify details of one node per line.
# Lines starting with '#' will be skipped.
#
# (1) vip and role are not required for Oracle Grid Infrastructure software only
#     installs and Oracle Member cluster for Applications
# (2) vip should be specified as AUTO if Node Virtual host names are
#     dynamically assigned
# (3) role-identifier can be specified as "/LEAF" only for "Oracle
#     Standalone Cluster"
# (4) site-name should be specified only when configuring Oracle Grid
#     Infrastructure with "Extended Cluster" option
#
# Examples:
# --------
# For installing GI software only on a cluster:
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# node1
# node2
#
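Putting the format together, the following sketch writes a two-node configuration file and checks that every non-comment line has a public name, a VIP, and a /HUB or /LEAF role. The node names are made up for illustration:

```shell
# Sketch: create a cluster configuration file in the documented
# "node vip /role" format and validate it with awk. Names are hypothetical.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# two-node example
mynode1 mynode1-vip /HUB
mynode2 mynode2-vip /LEAF
EOF

# Every non-comment, non-blank line must have 3 fields and a valid role.
awk '!/^#/ && NF { if (NF != 3 || $3 !~ /^\/(HUB|LEAF)$/) exit 1 }' "$cfg" \
  && echo "format ok"
rm -f "$cfg"
```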
Installing Only the Oracle Grid Infrastructure Software
See Also:
Oracle Clusterware Administration and Deployment Guide for information
about cloning an Oracle Grid Infrastructure installation to other nodes that
were not included in the initial installation of Oracle Grid Infrastructure, and
then adding them to the cluster
8. Configure the cluster using the Oracle Universal Installer (OUI) configuration
wizard or response files.
Related Topics
• Configuring Software Binaries for Oracle Grid Infrastructure for a Cluster
Configure the software binaries by starting Oracle Grid Infrastructure configuration
wizard in GUI mode.
• Configuring the Software Binaries Using a Response File
When you install or copy Oracle Grid Infrastructure software on any node, you can
defer configuration for a later time. Review this procedure for completing
configuration after the software is installed or copied on nodes, using the
configuration wizard (gridSetup.sh).
• Installing Oracle Grid Infrastructure for a New Cluster
Review these procedures to install the cluster configuration options available in
this release of Oracle Grid Infrastructure.
$ ./gridSetup.sh
3. Provide information as needed for configuration. OUI validates the information and
configures the installation on all cluster nodes.
4. When you complete providing information, OUI shows you the Summary page,
listing the information you have provided for the cluster. Verify that the summary
has the correct information for your cluster, and click Install to start configuration
of the local node.
When configuration of the local node is complete, OUI copies the Oracle Grid
Infrastructure configuration file to other cluster member nodes.
5. When prompted, run root scripts.
6. When you confirm that all root scripts are run, OUI checks the cluster
configuration status, and starts other configuration tools as needed.
To configure the Oracle Grid Infrastructure software binaries using a response file:
1. As the Oracle Grid Infrastructure installation owner (grid), start Oracle Universal
Installer in Oracle Grid Infrastructure configuration wizard mode from the Oracle
Grid Infrastructure software-only home using the following syntax, where filename
is the response file name:
For example:
$ cd /u01/app/18.0.0/grid
$ ./gridSetup.sh -responseFile /u01/app/grid/response/response_file.rsp
About Deploying Oracle Grid Infrastructure Using Rapid Home Provisioning and Maintenance
The configuration wizard mode validates inputs and configures the
installation on all cluster nodes.
2. When you complete configuring values, OUI shows you the Summary page, listing
all information you have provided for the cluster. Verify that the summary has the
correct information for your cluster, and click Install to start configuration of the
local node.
When configuration of the local node is complete, OUI copies the Oracle Grid
Infrastructure configuration file to other cluster member nodes.
3. When prompted, run root scripts.
4. When you confirm that all root scripts are run, OUI checks the cluster configuration
status, and starts other configuration tools as needed.
./gridSetup.sh oracle_install_crs_Ping_Targets=Host1|IP1,Host2|IP2
The ping utility contacts the comma-separated list of host names or IP addresses
Host1|IP1,Host2|IP2 to determine whether the public network is available. If none of
the hosts respond, then the network is considered to be offline. Use addresses
outside the cluster, such as the address of a switch or router.
For example:
./gridSetup.sh oracle_install_crs_Ping_Targets=192.0.2.1,192.0.2.2
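The shape of the target list can be sketched as follows: the value is a comma-separated list of entries, where the "|" in Host1|IP1 denotes a host-or-IP alternative. This parsing sketch only prints each resolved target; taking the part before "|" is an illustrative choice, and no actual ping is sent:

```shell
# Sketch: split an oracle_install_crs_Ping_Targets value into individual
# targets. Each comma-separated entry may be "host", "ip", or "host|ip";
# here we illustratively take the part before any "|". No ping is sent.
targets="192.0.2.1,gw.example.com|192.0.2.2"
IFS=','
for entry in $targets; do
  host=${entry%%|*}          # part before any "|" separator
  echo "would ping: $host"
done
unset IFS
```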
Confirming Oracle Clusterware Function
See Also:
Oracle Clusterware Administration and Deployment Guide for information
about setting up the Rapid Home Provisioning Server and Client, and for
creating and using gold images for provisioning and patching Oracle Grid
Infrastructure and Oracle Database homes.
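The per-node status listing in the following example is the kind of output produced by
checking the Clusterware stack; a sketch, assuming you check every node:

```shell
$ crsctl check cluster -all
```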
For example:
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Confirming Oracle ASM Function for Oracle Clusterware Files
Note:
After installation is complete, do not manually remove, or run cron jobs that
remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files while Oracle
Clusterware is up. If you remove these files, then Oracle Clusterware could
encounter intermittent hangs, and you will encounter error CRS-0184: Cannot
communicate with the CRS daemon.
Note:
To manage Oracle ASM or Oracle Net 11g Release 2 (11.2) or later
installations, use the srvctl binary in the Oracle Grid Infrastructure home for
a cluster (Grid home). If you have Oracle Real Application Clusters or Oracle
Database installed, then you cannot use the srvctl binary in the database
home to manage Oracle ASM or Oracle Net.
Understanding Offline Processes in Oracle Grid Infrastructure
Some Oracle Grid Infrastructure resources may remain OFFLINE after the installation
of Oracle Grid Infrastructure. Run the following command to view the status of any
resource:
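A sketch of such a status check; the resource name ora.DATA.dg is illustrative:

```shell
$ crsctl status resource ora.DATA.dg
```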
10
Oracle Grid Infrastructure Postinstallation Tasks
Complete configuration tasks after you install Oracle Grid Infrastructure.
You are required to complete some configuration tasks after Oracle Grid Infrastructure
is installed. In addition, Oracle recommends that you complete additional tasks
immediately after installation. You must also complete product-specific configuration
tasks before you use those products.
Note:
This chapter describes basic configuration only. Refer to product-specific
administration and tuning guides for more detailed configuration and tuning
information.
Recommended Postinstallation Tasks
Note:
If you are not a My Oracle Support registered user, then click Register
for My Oracle Support and register.
Note:
Do not delete or move the .patch_storage directory even after you
have successfully installed the latest release update or release update
revision patch as OPatch stores the patch information in
the $ORACLE_HOME/.patch_storage directory.
Related Topics
• My Oracle Support note 2285040.1
2. Use the BMC management utility to obtain the BMC’s IP address and then use the
cluster control utility crsctl to store the BMC’s IP address in the Oracle Local
Registry (OLR) by issuing the crsctl set css ipmiaddr address command. For
example:
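A sketch of the command; the IP address shown is illustrative:

```shell
$ crsctl set css ipmiaddr 192.168.10.45
```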
3. Enter the following crsctl command to store the user ID and password for the
resident BMC in the OLR, where youradminacct is the IPMI administrator user
account, and provide the password when prompted:
This command attempts to validate the credentials you enter by sending them to
another cluster node. The command fails if that cluster node is unable to access
the local BMC using the credentials.
When you store the IPMI credentials in the OLR, you must have the anonymous
user specified explicitly, or a parsing error will be reported.
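A sketch of the command, where youradminacct is a placeholder for your IPMI
administrator account and the password is supplied at the prompt:

```shell
$ crsctl set css ipmiadmin youradminacct
```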
You can also download and run the latest standalone version of Oracle ORAchk from
My Oracle Support. For information about downloading, configuring and running
Oracle ORAchk utility, refer to My Oracle Support note 1268927.2:
https://support.oracle.com/epmos/faces/DocContentDisplay?
id=1268927.2&parent=DOCUMENTATION&sourceId=USERGUIDE
Related Topics
• Oracle ORAchk and EXAchk User’s Guide
About the Fast Recovery Area and the Fast Recovery Area Disk Group
The fast recovery area is a unified storage location for all Oracle Database files related
to recovery. Enabling rapid backups for recent data can reduce requests to system
administrators to retrieve backup tapes for recovery operations.
Database administrators can set the DB_RECOVERY_FILE_DEST parameter to the
path for the fast recovery area to enable on-disk backups and rapid recovery of data.
When you enable fast recovery in the init.ora file, Oracle Database writes all RMAN
backups, archive logs, control file automatic backups, and database copies to the fast
recovery area. RMAN automatically manages files in the fast recovery area by deleting
obsolete backups and archiving files no longer required for recovery.
Oracle recommends that you create a fast recovery area disk group. Oracle
Clusterware files and Oracle Database files can be placed on the same disk group,
and you can also place fast recovery files in the same disk group. However, Oracle
recommends that you create a separate fast recovery disk group to reduce storage
device contention.
The fast recovery area is enabled by setting the DB_RECOVERY_FILE_DEST
parameter. The size of the fast recovery area is set with
DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the fast recovery
area, the more useful it becomes. For ease of use, Oracle recommends that you
create a fast recovery area disk group on storage devices that can contain at least
three days of recovery information. Ideally, the fast recovery area is large enough to
hold a copy of all of your data files and control files, the online redo logs, and the
archived redo log files needed to recover your database using the data file backups
kept under your retention policy.
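For instance, the two initialization parameters described above can be set from
SQL*Plus; the disk group name +FRA and the 100G size are illustrative, and the size
must be set before the destination:

```sql
ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';
```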
Multiple databases can use the same fast recovery area. For example, assume you
have created a fast recovery area disk group on disks with 150 GB of storage, shared
by 3 different databases. You can set the size of the fast recovery area for each
database so that the combined size does not exceed the capacity of the disk group.
$ cd /u01/app/18.0.0/grid/bin
$ ./asmca
After installation, when a client sends a request to the cluster, the Oracle Clusterware
SCAN listeners redirect client requests to servers in the cluster.
See Also:
Oracle Clusterware Administration and Deployment Guide for more
information about system checks and configurations
The resource limits apply to all Oracle Clusterware processes and Oracle databases
managed by Oracle Clusterware. For example, to raise the limit on the number of
processes, edit the file and set the CRS_LIMIT_NPROC parameter to a higher value.
#Do not modify this file except as documented above or under the
#direction of Oracle Support Services.
#########################################################################
TZ=PST8PDT
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
CRS_LIMIT_STACK=2048
CRS_LIMIT_OPENFILE=65536
CRS_LIMIT_NPROC=65536
TNS_ADMIN=
About Changes in Default SGA Permissions for Oracle Database
Related Topics
• Oracle Database Reference
Using Earlier Oracle Database Releases with Oracle Grid Infrastructure
Note:
If you are installing Oracle Database 11g release 2 with Oracle Grid
Infrastructure 18c, then before running Oracle Universal Installer (OUI) for
Oracle Database, run the following command on the local node only:
./asmca
Follow the steps in the configuration wizard to create Oracle ACFS storage for the
earlier release Oracle Database home.
3. Install Oracle Database 11g release 2 (11.2) software-only on the Oracle ACFS
file system you configured.
4. From the 11.2 Oracle Database home, run Oracle Database Configuration
Assistant (DBCA) and create the Oracle RAC Database, using Oracle ASM as
storage for the database data files.
./dbca
DBCA discovers the server pool that you created with the Oracle Grid
Infrastructure 18c srvctl command. Configure the server pool as required for your
services.
See Also:
Oracle Clusterware Administration and Deployment Guide for more
information about managing resources using policies
This setting updates the oratab file for Oracle ASM entries.
You can check the pinned nodes using the following command:
$ ./olsnodes -t -n
Note:
Restart Oracle ASM to load the updated oratab file.
See Also:
Oracle Automatic Storage Management Administrator's Guide for details
about configuring disk group compatibility for databases using Oracle
Database 11g or earlier software with Oracle Grid Infrastructure 18c
Modifying Oracle Clusterware Binaries After Installation
Caution:
Before relinking executables, you must shut down all executables that run in
the Oracle home directory that you are relinking. In addition, shut down
applications linked with Oracle shared libraries.
# cd /u01/app/18.0.0/grid/crs/install
# ./rootcrs.sh -unlock
2. Change user to the Oracle Grid Infrastructure software owner, and relink binaries
using the command syntax make -f /u01/app/18.0.0/grid/rdbms/lib/
ins_rdbms.mk target, where target is the binaries that you want to relink. For
example, where you are updating the interconnect protocol from UDP to RDS, enter
the following command:
# su grid
$ make -f /u01/app/18.0.0/grid/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
Note:
To relink binaries, you can also change to the grid installation owner and
run the command /u01/app/18.0.0/grid/bin/relink.
# ./rootcrs.sh -lock
# crsctl start crs
Note:
Do not delete directories in the Grid home. For example, do not delete the
directory Grid_home/OPatch. If you delete the directory, then the Grid
infrastructure installation owner cannot use OPatch to patch the Grid home,
and OPatch displays the error message "checkdir error: cannot create
Grid_home/OPatch".
11
Upgrading Oracle Grid Infrastructure
Oracle Grid Infrastructure upgrade consists of upgrade of Oracle Clusterware and
Oracle Automatic Storage Management (Oracle ASM).
Oracle Grid Infrastructure upgrades can be rolling upgrades, in which a subset of
nodes are brought down and upgraded while other nodes remain active. Starting with
Oracle ASM 11g Release 2 (11.2), Oracle ASM upgrades can be rolling upgrades.
You can also use Fleet Patching and Provisioning to upgrade Oracle Grid
Infrastructure for a cluster.
• Understanding Out-of-Place Upgrade
With an out-of-place upgrade, the installer installs the newer version in a separate
Oracle Clusterware home.
• About Oracle Grid Infrastructure Upgrade and Downgrade
You have the ability to upgrade or downgrade Oracle Grid Infrastructure to a
supported release.
• Options for Oracle Grid Infrastructure Upgrades
When you upgrade to Oracle Grid Infrastructure 18c, you upgrade to an Oracle
Flex Cluster configuration.
• Restrictions for Oracle Grid Infrastructure Upgrades
Review the following information for restrictions and changes for upgrades to
Oracle Grid Infrastructure installations, which consists of Oracle Clusterware and
Oracle Automatic Storage Management (Oracle ASM).
• Preparing to Upgrade an Existing Oracle Clusterware Installation
If you have an existing Oracle Clusterware installation, then you upgrade your
existing cluster by performing an out-of-place upgrade. You cannot perform an in-
place upgrade.
• Understanding Rolling Upgrades Using Batches
You can perform rolling upgrades of Oracle Grid Infrastructure in batches.
• Performing Rolling Upgrade of Oracle Grid Infrastructure
Review this information to perform rolling upgrade of Oracle Grid Infrastructure.
• About Upgrading Oracle Grid Infrastructure Using Rapid Home Provisioning
Rapid Home Provisioning is a software lifecycle management method for
provisioning and patching Oracle homes.
• Applying Patches to Oracle Grid Infrastructure
After you have upgraded Oracle Grid Infrastructure 18c, you can install individual
software patches by downloading them from My Oracle Support.
• Updating Oracle Enterprise Manager Cloud Control Target Parameters
After upgrading Oracle Grid Infrastructure, upgrade the Enterprise Manager Cloud
Control target.
Options for Oracle Grid Infrastructure Upgrades
• Non-rolling Upgrade, which involves bringing down all the nodes except one. A
complete cluster outage occurs while the root script stops the old Oracle
Clusterware stack and starts the new Oracle Clusterware stack on the node where
you initiate the upgrade. After upgrade is completed, the new Oracle Clusterware
is started on all the nodes.
Note that some services are disabled when one or more nodes are in the process of
being upgraded. All upgrades are out-of-place upgrades, meaning that the software
binaries are placed in a different Grid home from the Grid home used for the prior
release.
You can downgrade from Oracle Grid Infrastructure 18c to Oracle Grid Infrastructure
12c Release 2 (12.2), Oracle Grid Infrastructure 12c Release 1 (12.1), and Oracle Grid
Infrastructure 11g Release 2 (11.2). Be aware that if you downgrade to a prior release,
then your cluster must conform with the configuration requirements for that prior
release, and the features available for the cluster consist only of the features available
for that prior release of Oracle Clusterware and Oracle ASM.
You can perform out-of-place upgrades to an Oracle ASM instance using Oracle ASM
Configuration Assistant (ASMCA). In addition to running ASMCA using the graphical
user interface, you can run ASMCA in non-interactive (silent) mode.
Note:
You must complete an upgrade before attempting to use cluster backup files.
You cannot use backups for a cluster that has not completed upgrade.
See Also:
Oracle Database Upgrade Guide and Oracle Automatic Storage
Management Administrator's Guide for additional information about
upgrading existing Oracle ASM installations
Restrictions for Oracle Grid Infrastructure Upgrades
Upgrade options from Oracle Grid Infrastructure 11g, Oracle Grid Infrastructure 12c
Release 1 (12.1), and Oracle Grid Infrastructure 12c Release 2 (12.2) to Oracle Grid
Infrastructure 18c include the following:
• Oracle Grid Infrastructure rolling upgrade which involves upgrading individual
nodes without stopping Oracle Grid Infrastructure on other nodes in the cluster
• Oracle Grid Infrastructure non-rolling upgrade by bringing the cluster down and
upgrading the complete cluster
See Also:
Oracle Database Upgrade Guide for additional information about preparing
for upgrades
Preparing to Upgrade an Existing Oracle Clusterware Installation
• Review Upgrade Guide: Review Oracle Database Upgrade Guide for deprecation
and desupport information that may affect upgrade planning.
Table 11-1 (Cont.) Upgrade Checklist for Oracle Grid Infrastructure Installation
• Patch set (recommended): Install the latest patch set release for your existing
installation. Review My Oracle Support note 2180188.1 for the list of latest patches
before upgrading Oracle Grid Infrastructure.
• Install user account: Confirm that the installation owner you plan to use is the
same as the installation owner that owns the installation you want to upgrade.
• Create a Grid home: Create a new Oracle Grid Infrastructure Oracle home (Grid
home) where you can extract the image files. All Oracle Grid Infrastructure upgrades
(upgrades of existing Oracle Clusterware and Oracle ASM installations) are
out-of-place upgrades.
• Instance names for Oracle ASM: Oracle Automatic Storage Management (Oracle
ASM) instances must use standard Oracle ASM instance names. The default ASM SID
for a single-instance database is +ASM.
• Cluster names and site names: Cluster names must be at least one character but no
more than 15 characters in length, use only hyphens (-) and single-byte alphanumeric
characters (a to z, A to Z, and 0 to 9), not begin with a numeric character, and not
begin or end with the hyphen (-) character.
• Operating system: Confirm that you are using a supported operating system, kernel
release, and all required operating system packages for the new Oracle Grid
Infrastructure installation.
• Network addresses for standard Oracle Grid Infrastructure: For standard Oracle
Grid Infrastructure installations, confirm the following network configuration: the
private and public IP addresses are in unrelated, separate subnets, and the private
subnet should be in a dedicated private subnet; the public and virtual IP addresses,
including the SCAN addresses, are in the same subnet (the range of addresses
permitted by the subnet mask for the subnet network); and neither private nor public
IP addresses use a link local subnet (169.254.*.*).
• OCR on raw or block devices: Migrate OCR files from RAW or Block devices to
Oracle ASM or a supported file system. Direct use of RAW and Block devices is not
supported. Run the ocrcheck command to confirm Oracle Cluster Registry (OCR) file
integrity. If this check fails, then repair the OCR before proceeding.
• Check space for GIMR: When upgrading from Oracle Grid Infrastructure 12c
Release 1 (12.1) or an earlier release, a new GIMR is created. Allocate additional
storage space as described in Oracle Clusterware Storage Space Requirements. When
upgrading from Oracle Grid Infrastructure 12c Release 2 (12.2), the GIMR is
preserved with its contents.
• Oracle ASM password file: When upgrading from Oracle Grid Infrastructure 12c
Release 1 (12.1) or Oracle Grid Infrastructure 12c Release 2 (12.2) to Oracle Grid
Infrastructure 18c, move the Oracle ASM password file from the file system to Oracle
ASM before proceeding with the upgrade. When upgrading from Oracle Grid
Infrastructure 11g Release 2 (11.2) to Oracle Grid Infrastructure 18c, move the Oracle
ASM password file from the file system to Oracle ASM after the upgrade.
• CVU Upgrade Validation: Use Cluster Verification Utility (CVU) to assist you with
system checks in preparation for starting an upgrade.
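The cluster name rules in the checklist can be expressed as a small shell check;
valid_cluster_name is a hypothetical helper for illustration, not an Oracle tool:

```shell
# Hypothetical validator for the cluster name rules: 1 to 15 characters,
# hyphens and single-byte alphanumerics only, must not begin with a digit,
# and must not begin or end with a hyphen.
valid_cluster_name() {
  # First character must be a letter; total length 1 to 15.
  printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9-]{0,14}$' || return 1
  case $1 in
    *-) return 1 ;;  # must not end with a hyphen
  esac
  return 0
}

valid_cluster_name prod-cluster1 && echo "prod-cluster1 is valid"
valid_cluster_name 1cluster || echo "1cluster is invalid"
```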
Table 11-1 (Cont.) Upgrade Checklist for Oracle Grid Infrastructure Installation
• Unset environment variables: As the user performing the upgrade, unset the
environment variables $ORACLE_HOME and $ORACLE_SID. Check that the
ORA_CRS_HOME environment variable is not set. Do not use ORA_CRS_HOME as an
environment variable, except under explicit direction from Oracle Support. Refer to
Checks to Complete Before Upgrading Oracle Grid Infrastructure for a complete list
of environment variables to unset.
• RACcheck Upgrade Readiness Assessment: Download and run the RACcheck
Upgrade Readiness Assessment to obtain an automated upgrade-specific health check
for upgrades to Oracle Grid Infrastructure. See My Oracle Support note 1457357.1,
which is available at the following URL:
https://support.oracle.com/rs?type=doc&id=1457357.1
• Back up the Oracle software before upgrades: Before you make any changes to the
Oracle software, Oracle recommends that you create a backup of the Oracle software
and databases.
• HugePages memory allocation: Allocate memory to HugePages large enough for
the System Global Areas (SGA) of all databases planned to run on the cluster, and to
accommodate the System Global Area for the Grid Infrastructure Management
Repository.
• Remove encryption of Oracle ACFS file systems before upgrade: To avoid data
corruption, ensure that encryption of Oracle ACFS file systems is removed before
upgrade.
Related Topics
• Checks to Complete Before Upgrading Oracle Grid Infrastructure
Complete the following tasks before upgrading Oracle Grid Infrastructure.
Related Topics
• Moving Oracle Clusterware Files from NFS to Oracle ASM
If Oracle Cluster Registry (OCR) and voting files are stored on Network File
System (NFS), then move these files to Oracle ASM disk groups before upgrading
Oracle Grid Infrastructure.
• Checks to Complete Before Upgrading Oracle Grid Infrastructure
Complete the following tasks before upgrading Oracle Grid Infrastructure.
• My Oracle Support Note 2180188.1
$ unset ORACLE_BASE
$ unset ORACLE_HOME
$ unset ORACLE_SID
For C shell:
$ unsetenv ORACLE_BASE
$ unsetenv ORACLE_HOME
$ unsetenv ORACLE_SID
1. As Oracle Grid Infrastructure installation owner (grid), create the Oracle ASM disk
group using ASMCA.
./asmca
Follow the steps in the ASMCA wizard to create the Oracle ASM disk group, for
example, DATA.
2. As grid user, move the voting files to the Oracle ASM disk group you created:
3. As root user, check the integrity of the OCR files:
./ocrcheck
4. As root user, move the OCR files to the Oracle ASM disk group you created:
5. As root user, delete the Oracle Clusterware files from the NFS location:
The Oracle ORAchk Upgrade Readiness Assessment provides an automated
upgrade-readiness check for upgrades to Oracle Grid Infrastructure 11.2.0.3, 11.2.0.4,
12.1.0.1, 12.1.0.2, 12.2, and 18c. You can run the Oracle ORAchk Upgrade Readiness
Assessment tool and automate many of the manual pre-upgrade and post-upgrade checks.
Oracle recommends that you download and run the latest version of Oracle ORAchk
from My Oracle Support. For information about downloading, configuring, and running
Oracle ORAchk, refer to My Oracle Support note 1457357.1.
Related Topics
• Oracle ORAchk and EXAchk User’s Guide
• https://support.oracle.com/rs?type=doc&id=1457357.1
The command uses the following syntax, where variable content is indicated by italics:
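A sketch of such a CVU pre-upgrade check, under the assumption that the standard
stage -pre crsinst -upgrade form is meant; the paths and version are illustrative:

```shell
$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
    -src_crshome /u01/app/12.2.0/grid \
    -dest_crshome /u01/app/18.0.0/grid \
    -dest_version 18.0.0.0.0 -fixup -verbose
```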
Understanding Rolling Upgrades Using Batches
Related Topics
• Oracle Database Upgrade Guide
When you upgrade Oracle Grid Infrastructure without using root user automation, you
upgrade the entire cluster. You cannot select or de-select individual nodes for
upgrade. Oracle does not support attempting to add additional nodes to a cluster
during a rolling upgrade. Oracle recommends that you leave Oracle RAC instances
running when upgrading Oracle Clusterware. When you start the root script on each
node, the database instances on that node are shut down and then the
rootupgrade.sh script starts the instances again.
Performing Rolling Upgrade of Oracle Grid Infrastructure
mkdir -p /u01/app/18.0.0/grid
chown grid:oinstall /u01/app/18.0.0/grid
cd /u01/app/18.0.0/grid
unzip -q download_location/grid_home.zip
Note:
• You must extract the image software into the directory where you
want your Grid home to be located.
• Download and copy the Oracle Grid Infrastructure image files to the
local node only. During upgrade, the software is copied and installed
on all other nodes in the cluster.
2. Start the Oracle Grid Infrastructure wizard by running the following command:
/u01/app/18.0.0/grid/gridSetup.sh
Note:
Oracle Clusterware must always be the later release, so you cannot
upgrade Oracle ASM to a release that is more recent than Oracle
Clusterware.
Related Topics
• Oracle Clusterware Administration and Deployment Guide
To resolve this problem, run the rootupgrade.sh command with the -force flag using
the following syntax:
Grid_home/rootupgrade.sh -force
For example:
# /u01/app/18.0.0/grid/rootupgrade.sh -force
This command forces the upgrade to complete. Verify that the upgrade has completed
by using the command crsctl query crs activeversion. The active release should
be the upgrade release.
The force cluster upgrade has the following limitations:
About Upgrading Oracle Grid Infrastructure Using Rapid Home Provisioning
$ cd /u01/app/18.0.0/grid/
3. Run the following command, where upgraded_node is one of the cluster nodes
that is upgraded successfully:
For upgrade:
Applying Patches to Oracle Grid Infrastructure
The supported versions are 11.2, 12.1, 12.2, and 18c. You can also provision
applications and middleware using Rapid Home Provisioning. A single cluster, known
as the Rapid Home Provisioning Server, stores and manages standardized images,
called gold images, which can be provisioned to any number of nodes. You can install
Oracle Grid Infrastructure cluster configurations such as Oracle Standalone Clusters,
Oracle Member Clusters, and Oracle Member Cluster for Applications. After
deployment, you can expand and contract clusters and Oracle RAC Databases.
You can provision Oracle Grid Infrastructure on a remote set of nodes in a cloud
computing environment from a single cluster where you store templates of Oracle
homes as images (called gold images) of Oracle software, such as databases,
middleware, and applications.
See Also:
Oracle Clusterware Administration and Deployment Guide for information
about setting up the Rapid Home Provisioning Server and Client, creating
and using gold images for provisioning and patching Oracle Grid
Infrastructure, Oracle Database, and Oracle Restart homes.
See Also:
Oracle Clusterware Administration and Deployment Guide for more
information about patching Oracle Grid Infrastructure using Rapid Home
Provisioning.
Updating Oracle Enterprise Manager Cloud Control Target Parameters
Oracle recommends that you select Recommended Patch Advisor, and enter
the product group, release, and platform for your software. My Oracle Support
provides you with a list of the most recent patch set updates (PSUs) and critical
patch updates (CPUs).
Place the patches in an accessible directory, such as /tmp.
2. Change directory to the /OPatch directory in the Grid home. For example:
$ cd /u01/app/18.0.0/grid/OPatch
3. Review the patch documentation for the patch you want to apply, and complete all
required steps before starting the patch upgrade.
4. Follow the instructions in the patch documentation to apply the patch. For
example:
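For instance, a Grid Infrastructure patch is commonly applied with opatchauto as
root; a sketch, with a placeholder patch directory:

```shell
# /u01/app/18.0.0/grid/OPatch/opatchauto apply /tmp/patch_directory
```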
4. Click Cluster, then Target Setup, and then Monitoring Configuration from the
menu.
5. Update the value for Oracle Home with the new Grid home path.
6. Save the updates.
Unlocking and Deinstalling the Previous Release Grid Home
7. In the Target Discovery: Results page, select the discovered Oracle ASM Listener
target, and click Configure.
8. In the Configure Listener dialog box, specify the listener properties and click OK.
9. Click Next and complete the discovery process.
The listener target is discovered in Oracle Enterprise Manager with the status as
Down.
10. From the Targets menu, select the type of target.
11. Click the target name to navigate to the target home page.
12. From the host, database, middleware target, or application menu displayed on the
target home page, select Target Setup, then select Monitoring Configuration.
13. In the Monitoring Configuration page for the listener, specify the host name in the
Machine Name field and the password for the ASMSNMP user in the Password
field.
14. Click OK.
For example:
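A sketch of such permission and ownership changes, run as root, with an assumed
previous release Grid home path:

```shell
# chmod -R 755 /u01/app/12.2.0/grid
# chown -R grid /u01/app/12.2.0/grid
# chown grid /u01/app/12.2.0
```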
2. After you change the permissions and ownership of the previous release Grid
home, log in as the Oracle Grid Infrastructure installation owner (grid, in the
preceding example), and run the deinstall command from the previous release
Grid home $ORACLE_HOME/deinstall directory.
Checking Cluster Health Monitor Repository Size After Upgrading
Caution:
You must use the deinstall command from the same release to remove
Oracle software. Do not run the deinstall command from a later release
to remove Oracle software from an earlier release. For example, do not run
the deinstall command from the 18.0.0.0.0 Oracle home to remove
Oracle software from an existing 12.2.0.1 Oracle home.
Note:
Your previous IPD/OS repository is deleted when you install Oracle Grid
Infrastructure.
By default, the CHM repository size is a minimum of either 1GB or 3600 seconds
(1 hour), regardless of the size of the cluster.
2. To enlarge the CHM repository, use the following command syntax, where
RETENTION_TIME is the size of CHM repository in number of seconds:
The value for RETENTION_TIME must be more than 3600 (one hour) and less
than 259200 (three days). If you enlarge the CHM repository size, then you must
ensure that there is local space available for the repository size you select on each
node of the cluster. If you do not have sufficient space available, then you can
move the repository to shared storage.
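A sketch using the oclumon utility; 86400 seconds (one day) is an illustrative value
within the allowed range:

```shell
$ oclumon manage -repos changeretentiontime 86400
```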
Downgrading Oracle Clusterware to an Earlier Release
Configuration changes you performed during or after the Oracle Grid Infrastructure
18c upgrade are removed and cannot be recovered.
To restore Oracle Clusterware to the previous release, use the downgrade procedure
for the release to which you want to downgrade.
Note:
• Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), you can
downgrade the cluster nodes in any sequence. You can downgrade all
cluster nodes except one, in parallel. You must downgrade the last node
after you downgrade all other nodes.
• When downgrading after a failed upgrade, if the rootcrs.sh or
rootcrs.bat file does not exist on a node, then instead of executing the
script, use the command perl rootcrs.pl. Use the Perl interpreter
located in the Oracle home directory.
Note:
When you downgrade Oracle Grid Infrastructure to an earlier release, for
example from Oracle Grid Infrastructure 18c to Oracle Grid Infrastructure 12c
Release 2 (12.2), the later release RAC databases already registered with
Oracle Grid Infrastructure will not start after the downgrade.
Related Topics
• My Oracle Support Note 2180188.1
2. As root user, use the command syntax rootcrs.sh -downgrade from 18c Grid
home to downgrade Oracle Grid Infrastructure on all nodes, in any sequence. For
example:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all
cluster nodes but one.
3. As root user, downgrade the last node after you downgrade all other nodes:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
4. As grid user, remove Oracle Grid Infrastructure 18c Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/18.0.0/
grid is the location of the new (upgraded) Grid home:
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=true
ORACLE_HOME=/u01/app/12.2.0/grid
"CLUSTER_NODES=node1,node2,node3"
6. As root user, start the 12c Release 2 (12.2) Oracle Clusterware stack on all
nodes.
7. As grid user, from any Oracle Grid Infrastructure 12c Release 2 (12.2) node,
remove the MGMTDB resource as follows:
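A sketch of such a removal, assuming srvctl from the 12.2 Grid home; the -f flag
forces removal:

```shell
$ /u01/app/12.2.0/grid/bin/srvctl remove mgmtdb -f
```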
8. As grid user, run DBCA in the silent mode from the 12.2.0.1 Grid home and
create the Management Database container database (CDB) as follows:
Where:
SC is the cluster type, in this example Oracle Standalone Cluster. The value for -clusterType can be SC for Oracle Standalone Cluster, DSC for Oracle Domain
Services Cluster, or MC for Oracle Member Cluster.
/u01/app/18.0.0/grid is the Oracle home for Oracle Grid Infrastructure 18c.
12.2.0.1.0 is the version of Oracle Grid Infrastructure to which you are
downgrading.
/u01/app/grid2 is the Oracle base for Oracle Grid Infrastructure 18c.
a. Copy the most recent time zone files from the 18c Grid home to the 12.2 Grid home, where timezlrg_number is the name of the most recent timezlrg file and timezone_number is the name of the most recent timezone file:
$ cp $ORACLE_HOME/oracore/zoneinfo/timezlrg_number.dat /u01/app/12.2.0/grid/oracore/zoneinfo/timezlrg_number.dat
$ cp $ORACLE_HOME/oracore/zoneinfo/timezone_number.dat /u01/app/12.2.0/grid/oracore/zoneinfo/timezone_number.dat
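To identify the most recent timezlrg_number.dat and timezone_number.dat files, you can compare the numeric suffixes. The helper below is a hypothetical sketch (the function name and the version-sort approach are this example's assumptions), demonstrated against a scratch directory rather than a real zoneinfo directory:

```shell
#!/bin/sh
# latest_tz_file: print the highest-numbered data file for a given prefix.
# sort -V (version sort) orders timezlrg_5.dat before timezlrg_12.dat,
# which a plain lexicographic sort would get wrong.
latest_tz_file() {
  # $1 = zoneinfo directory, $2 = file prefix (timezlrg or timezone)
  ls "$1"/"$2"_*.dat 2>/dev/null | sort -V | tail -n 1
}

# Demonstration against a scratch directory standing in for
# $ORACLE_HOME/oracore/zoneinfo:
dir=$(mktemp -d)
touch "$dir/timezlrg_5.dat" "$dir/timezlrg_12.dat" "$dir/timezone_12.dat"
latest_tz_file "$dir" timezlrg   # prints the timezlrg_12.dat path
```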
b. Downgrade application schema using the following command syntax from 18c
Grid home:
$ cd $ORACLE_HOME/bin
$ ./srvctl disable mgmtdb
$ ./srvctl stop mgmtdb
$ export ORACLE_SID=-MGMTDB
$ cd $ORACLE_HOME/bin
$ ./sqlplus / as sysdba
SQL> startup downgrade
SQL> alter pluggable database all open downgrade;
SQL> exit
$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -d /u01/app/grid2 -e -l /u01/app/grid2/cfgtoollogs/mgmtua -b mgmtdowngrade -r $ORACLE_HOME/rdbms/admin/catdwgrd.sql
iv. Set ORACLE_HOME and ORACLE_SID environment variables for 12c Release
2 (12.2) Grid home:
$ export ORACLE_HOME=/u01/app/12.2.0/grid/
$ export ORACLE_SID=-MGMTDB
$ cd $ORACLE_HOME/bin
$ ./sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup upgrade
SQL> alter pluggable database all open upgrade;
SQL> exit
vi. Run catrelod script for 12c Release 2 (12.2) Management Database
using the following command syntax, where /u01/app/grid is the Oracle
base for Oracle Grid Infrastructure 12c Release 2 (12.2):
$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -d /u01/app/grid -e -l /u01/app/grid/cfgtoollogs/mgmtua -b mgmtdowngrade $ORACLE_HOME/rdbms/admin/catrelod.sql
vii. Recompile all invalid objects after downgrade using the following
command syntax from 12c Release 2 (12.2) Grid home:
$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -d /u01/app/grid -e -l /u01/app/grid/cfgtoollogs/mgmtua -b mgmtdowngrade $ORACLE_HOME/rdbms/admin/utlrp.sql
$ ./sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
2. As root user, use the command syntax rootcrs.sh -downgrade from the 18c Grid
home to downgrade Oracle Grid Infrastructure on all nodes, in any
sequence. For example:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all
cluster nodes except the last one.
3. As root user, downgrade the last node after you downgrade all other nodes:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
4. As grid user, remove Oracle Grid Infrastructure 18c Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/18.0.0/grid is the location of the new (upgraded) Grid home:
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/18.0.0/grid "CLUSTER_NODES=node1,node2,node3" -doNotUpdateNodeList
6. As grid user, set Oracle Grid Infrastructure 12c Release 2 (12.2) Grid home as
the active Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where the path you provide
for ORACLE_HOME is the location of the home directory from the earlier Oracle
Clusterware installation.
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES=node1,node2,node3"
7. As grid user, downgrade the CHA models from any node where the Grid
Infrastructure stack is running from the 12c Release 2 (12.2) Grid home and the
Management Database and ochad are up:
In the example above, DEFAULT_CLUSTER and DEFAULT_DB are function names that
you must pass as values.
Related Topics
• Downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1)
1. As grid user, use the command syntax mgmtua downgrade from the 18c Grid home to
downgrade the Oracle Member Cluster, where oldOracleHome is the 12c Release 2 (12.2)
Grid home and version is the five-digit release number:
2. As root user, use the command syntax rootcrs.sh -downgrade from the 18c Grid
home to downgrade Oracle Grid Infrastructure on all nodes, in any sequence. For
example:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all
cluster nodes except the last one.
3. As root user, downgrade the last node after you downgrade all other nodes:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
4. As grid user, remove Oracle Grid Infrastructure 18c Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/18.0.0/
grid is the location of the new (upgraded) Grid home:
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/18.0.0/grid "CLUSTER_NODES=node1,node2,node3" -doNotUpdateNodeList
Note:
You must start Oracle Clusterware on the last downgraded node first, and
then on the other nodes.
6. As grid user, set Oracle Grid Infrastructure 12c Release 2 (12.2) Grid home as
the active Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where the path you provide
for ORACLE_HOME is the location of the home directory from the earlier Oracle
Clusterware installation.
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES=node1,node2,node3"
7. As grid user, downgrade the CHA models from any node where the Grid
Infrastructure stack is running from the 12c Release 2 (12.2) Grid home and the
Management Database and ochad are up:
In the example above, DEFAULT_CLUSTER and DEFAULT_DB are function names that
you must pass as values.
Related Topics
• Downgrading Oracle Domain Services Cluster to 12c Release 2 (12.2)
Use this procedure to downgrade Oracle Domain Services Cluster to Oracle Grid
Infrastructure 12c Release 2 (12.2) after a successful upgrade.
• Downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1)
Use this procedure to downgrade to Oracle Grid Infrastructure 12c Release 1
(12.1).
Note:
You can downgrade the cluster nodes in any sequence.
Related Topics
• Downgrading Oracle Domain Services Cluster to 12c Release 2 (12.2)
# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all
cluster nodes except the last one.
3. Downgrade the last node after you downgrade all other nodes:
# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -downgrade
4. Remove Oracle Grid Infrastructure 12c Release 2 (12.2) Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/12.2.0/
grid is the location of the new (upgraded) Grid home:
cd /u01/app/12.2.0/grid/oui/bin
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES=node1,node2,node3" -doNotUpdateNodeList
5. Set Oracle Grid Infrastructure 12c Release 1 (12.1) Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where the path you provide
for ORACLE_HOME is the location of the home directory from the earlier Oracle
Clusterware installation.
$ cd /u01/app/12.1.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES=node1,node2,node3"
8. If you are downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1.0.2), run
the following commands to configure the Grid Infrastructure Management
Database:
a. Run DBCA in the silent mode from the 12.1.0.2 Oracle home and create the
Management Database container database (CDB) as follows:
b. Run DBCA in the silent mode from the 12.1.0.2 Oracle home and create the
Management Database pluggable database (PDB) as follows:
9. If you are downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1.0.1), run
DBCA in the silent mode from the 12.1.0.1 Oracle home and create the
Management Database as follows:
10. Configure the Management Database by running the Configuration Assistant from
the location 121_Grid_home/bin/mgmtca.
# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -downgrade
4. Follow these steps to remove Oracle Grid Infrastructure 12c Release 2 (12.2) Grid
home as the active Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/12.2.0/
grid is the location of the new (upgraded) Grid home:
$ cd /u01/app/12.2.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES=node1,node2,node3" -doNotUpdateNodeList
$ cd /u01/app/11.2.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/11.2.0/grid
2. From any node where the Grid Infrastructure stack from the earlier release is
running, unset the Oracle ASM rolling migration mode as follows:
a. Log in as grid user, and run the following command as SYSASM user on the
Oracle ASM instance:
3. If you are upgrading from 11.2.0.4 or 12.1.0.1, then apply the latest available
patches on all nodes in the cluster. If the pre-upgrade version is 12.1.0.2 or later,
then no patch is required.
a. On all other nodes except the first node, where the earlier release Grid
Infrastructure stack is running, apply the latest patch using the opatchauto
procedure.
b. On the first node where the earlier release Grid Infrastructure stack is stopped,
apply the latest patch using the opatch apply procedure.
For the list of latest available patches, see My Oracle Support at the following
link:
https://support.oracle.com/
Completing Failed or Interrupted Installations and Upgrades
rootcrs.pl -lock
c. From any other node where the Grid Infrastructure stack from the earlier
release is running, unset the Oracle ASM rolling migration mode as explained
in step 2.
4. On any node running Oracle Grid Infrastructure other than the first node, from the
Grid home of the earlier release, run the command:
Note:
You can downgrade the cluster nodes in any sequence.
Related Topics
• Downgrading Oracle Grid Infrastructure to 12c Release 2 (12.2) when Upgrade
Fails
Grid_home/deinstall/deinstall -local
2. As root user, from a node where Oracle Clusterware is installed, delete the failed
nodes using the delete node command:
Grid_home/gridSetup.sh
Alternatively, you can also add the nodes by running the addnode script:
Grid_home/addnode/addnode.sh
[root@node1]# cd /u01/app/18.0.0/grid
[root@node1]# ./rootupgrade.sh
[root@node2]# ./rootupgrade.sh
[root@node6]# cd /u01/app/18.0.0/grid
[root@node6]# ./rootupgrade.sh
1. If the root script failure indicated a need to reboot, through the message
CLSRSC-400, then reboot the first node (the node where the installation was
started). Otherwise, manually fix or clear the error condition, as reported in the
error output.
2. If necessary, log in as root to the first node. Run the orainstRoot.sh script on
that node again. For example:
$ sudo -s
[root@node1]# cd /u01/app/oraInventory
[root@node1]# ./orainstRoot.sh
3. Change directory to the Grid home on the first node, and run the root script on
that node again. For example:
[root@node1]# cd /u01/app/18.0.0/grid
[root@node1]# ./root.sh
4. Complete the installation on all other nodes.
5. Configure a response file, and provide passwords for the installation.
6. To complete the installation, log in as the Grid installation owner, and run
gridSetup.sh, located in the Oracle Grid Infrastructure home, specifying the
response file that you created. For example, where the response file is
gridinstall.rsp:
Converting to Oracle Extended Cluster After Upgrading Oracle Grid Infrastructure
3. Delete the default site after the associated nodes and storage are migrated.
For example:
12
Removing Oracle Database Software
These topics describe how to remove Oracle software and configuration files.
Use the deinstall command that is included in Oracle homes to remove Oracle
software. Oracle does not support the removal of individual products or components.
Caution:
If you have a standalone database on a node in a cluster, and if you have
multiple databases with the same global database name (GDN), then you
cannot use the deinstall command to remove one database only.
About Oracle Deinstallation Options
• Oracle Database
• Oracle Grid Infrastructure, which includes Oracle Clusterware and Oracle
Automatic Storage Management (Oracle ASM)
• Oracle Real Application Clusters (Oracle RAC)
• Oracle Database Client
The deinstall command is available in Oracle home directories after installation. It
is located in the $ORACLE_HOME/deinstall directory.
deinstall creates a response file by using information in the Oracle home and
the information that you provide. You can use a response file that you generated
previously by running the deinstall command with the -checkonly option. You
can also edit the response file template.
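A common two-phase flow, discover first, then deinstall silently, can be sketched as follows. The paths and the generated response file name are placeholders for this example, and the commands are composed and echoed rather than run:

```shell
#!/bin/sh
# Compose (but do not run) a -checkonly discovery pass followed by a
# -silent deinstall that reuses the generated response file. All paths
# and the response file name below are hypothetical.
ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
RSP_DIR=/tmp/deinstall_rsp

PHASE1="$ORACLE_HOME/deinstall/deinstall -checkonly -o $RSP_DIR"
PHASE2="$ORACLE_HOME/deinstall/deinstall -silent -paramfile $RSP_DIR/deinstall_db.rsp"

echo "$PHASE1"
echo "$PHASE2"
```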
If you run deinstall to remove an Oracle Grid Infrastructure installation, then the
deinstaller prompts you to run the deinstall command as the root user. For Oracle
Grid Infrastructure for a cluster, the script is rootcrs.sh, and for Oracle Grid
Infrastructure for a standalone server (Oracle Restart), the script is roothas.sh.
Note:
• You must run the deinstall command from the same release to
remove Oracle software. Do not run the deinstall command from a
later release to remove Oracle software from an earlier release. For
example, do not run the deinstall command from the 18c Oracle
home to remove Oracle software from an existing 11.2.0.4 Oracle home.
• Starting with Oracle Database 12c Release 1 (12.1.0.2), the
roothas.sh script replaces the roothas.pl script in the Oracle Grid
Infrastructure home for Oracle Restart, and the rootcrs.sh script
replaces the rootcrs.pl script in the Grid home for Oracle Grid
Infrastructure for a cluster.
If the software in the Oracle home is not running (for example, after an unsuccessful
installation), then deinstall cannot determine the configuration, and you must
provide all the configuration details either interactively or in a response file.
In addition, before you run deinstall for Oracle Grid Infrastructure installations:
• If Grid Naming Service (GNS) is in use, then notify your DNS administrator to
delete the subdomain entry from the DNS.
Caution:
deinstall deletes Oracle Database configuration files, user data, and fast
recovery area (FRA) files even if they are located outside of the Oracle base
directory path.
Purpose
deinstall stops Oracle software, and removes Oracle software and configuration
files on the operating system for a specific Oracle home.
Syntax
The deinstall command uses the following syntax:
Parameters

-silent
Use this flag to run deinstall in noninteractive mode. This option requires one of the following:
• A working system that it can access to determine the installation and configuration information. The -silent flag does not work with failed installations.
• A response file that contains the configuration values for the Oracle home that is being deinstalled or deconfigured.
You can generate a response file to use or modify by running deinstall with the -checkonly flag. deinstall then discovers information from the Oracle home to deinstall and deconfigure. It generates the response file that you can then use with the -silent option.
You can also modify the template file deinstall.rsp.tmpl, located in the $ORACLE_HOME/deinstall/response directory.

-checkonly
Use this flag to check the status of the Oracle software home configuration. Running deinstall with the -checkonly flag does not remove the Oracle configuration. The -checkonly flag generates a response file that you can then use with the deinstall command and -silent option.

-paramfile complete path of input response file
Use this flag to run deinstall with a response file in a location other than the default. When you use this flag, provide the complete path where the response file is located. The default location of the response file is $ORACLE_HOME/deinstall/response.

-params [name1=value name2=value name3=value ...]
Use this flag with a response file to override one or more values in a response file you have created.

-o complete path of directory for saving response files
Use this flag to provide a path other than the default location where the response file template (deinstall.rsp.tmpl) is saved. The default location of the response file is $ORACLE_HOME/deinstall/response.

-tmpdir complete path of temporary directory to use
Use this flag to specify a non-default location where deinstall writes the temporary files for the deinstallation.
-logdir complete path of log directory to use
Use this flag to specify a non-default location where deinstall writes the log files for the deinstallation.

-local
Use this flag on a multinode environment to deinstall Oracle software in a cluster. When you run deinstall with this flag, it deconfigures and deinstalls the Oracle software on the local node (the node where deinstall is run). On remote nodes, it deconfigures Oracle software, but does not deinstall the Oracle software.

-skipLocalHomeDeletion
Use this flag in Oracle Grid Infrastructure installations on a multinode environment to deconfigure a local Grid home without deleting the Grid home.

-skipRemoteHomeDeletion
Use this flag in Oracle Grid Infrastructure installations on a multinode environment to deconfigure a remote Grid home without deleting the Grid home.

-help
Use this option to obtain additional information about the command option flags.
$ ./deinstall
You can generate a deinstallation response file by running deinstall with the -
checkonly flag. Alternatively, you can use the response file template located
at $ORACLE_HOME/deinstall/response/deinstall.rsp.tmpl. If you have a
response file, then use the optional flag -paramfile to provide a path to the
response file.
In the following example, the deinstall command is in the path /u01/app/oracle/product/18.0.0/dbhome_1/deinstall. It uses a response file named
my_db_paramfile.tmpl in the software owner location /home/usr/oracle:
$ cd /u01/app/oracle/product/18.0.0/dbhome_1/deinstall
$ ./deinstall -paramfile /home/usr/oracle/my_db_paramfile.tmpl
To remove the Oracle Grid Infrastructure home, use the deinstall command in the
Oracle Grid Infrastructure home.
$ cd /u01/app/18.0.0/grid/deinstall
$ ./deinstall -paramfile /home/usr/oracle/my_grid_paramfile.tmpl
ASM_DISKSTRING=/dev/rdsk/*,AFD:*
CDATA_QUORUM_GROUPS=
CRS_HOME=true
ODA_CONFIG=
JLIBDIR=/u01/app/jlib
CRFHOME="/u01/app/"
USER_IGNORED_PREREQ=true
MGMTDB_ORACLE_BASE=/u01/app/grid/
DROP_MGMTDB=true
RHP_CONF=false
OCRLOC=
GNS_TYPE=local
CRS_STORAGE_OPTION=1
CDATA_SITES=
GIMR_CONFIG=local
CDATA_BACKUP_SIZE=0
GPNPGCONFIGDIR=$ORACLE_HOME
MGMTDB_IN_HOME=true
CDATA_DISK_GROUP=+DATA2
LANGUAGE_ID=AMERICAN_AMERICA.AL32UTF8
CDATA_BACKUP_FAILURE_GROUPS=
CRS_NODEVIPS='AUTO/255.255.254.0/net0,AUTO/255.255.254.0/net0'
ORACLE_OWNER=cuser
GNS_ALLOW_NET_LIST=
silent=true
INSTALL_NODE=node1.example.com
ORACLE_HOME_VERSION_VALID=true
inst_group=oinstall
LOGDIR=/tmp/deinstall2016-10-06_09-36-04AM/logs/
EXTENDED_CLUSTER_SITES=
CDATA_REDUNDANCY=EXTERNAL
CDATA_BACKUP_DISK_GROUP=+DATA2
APPLICATION_VIP=
HUB_NODE_LIST=node1,node2
NODE_NAME_LIST=node1,node2
GNS_DENY_ITF_LIST=
ORA_CRS_HOME=/u01/app/12.2.0/grid/
JREDIR=/u01/app/12.2.0/grid/jdk/jre/
ASM_LOCAL_SID=+ASM1
ORACLE_BASE=/u01/app/
GNS_CONF=true
CLUSTER_CLASS=DOMAINSERVICES
ORACLE_BINARY_OK=true
CDATA_BACKUP_REDUNDANCY=EXTERNAL
CDATA_FAILURE_GROUPS=
ASM_CONFIG=near
OCR_LOCATIONS=
ASM_ORACLE_BASE=/u01/app/12.2.0/
OLRLOC=
GIMR_CREDENTIALS=
GPNPCONFIGDIR=$ORACLE_HOME
ORA_ASM_GROUP=asmadmin
GNS_CREDENTIALS=
CDATA_BACKUP_AUSIZE=4
GNS_DENY_NET_LIST=
OLD_CRS_HOME=
NEW_NODE_NAME_LIST=
GNS_DOMAIN_LIST=node1.example.com
ASM_UPGRADE=false
NETCA_LISTENERS_REGISTERED_WITH_CRS=LISTENER
CDATA_BACKUP_DISKS=/dev/rdsk/
ASMCA_ARGS=
CLUSTER_GUID=
CLUSTER_NODES=node1,node2
MGMTDB_NODE=node2
ASM_DIAGNOSTIC_DEST=/u01/app/
NEW_PRIVATE_NAME_LIST=
AFD_LABELS_NO_DG=
AFD_CONFIGURED=true
CLSCFG_MISSCOUNT=
MGMT_DB=true
SCAN_PORT=1521
ASM_DROP_DISKGROUPS=true
OPC_NAT_ADDRESS=
CLUSTER_TYPE=DB
NETWORKS="net0"/IP_Address:public,"net1"/IP_Address:asm,"net1"/
IP_Address:cluster_interconnect
OCR_VOTINGDISK_IN_ASM=true
HUB_SIZE=32
CDATA_BACKUP_SITES=
CDATA_SIZE=0
REUSEDG=false
MGMTDB_DATAFILE=
ASM_IN_HOME=true
HOME_TYPE=CRS
MGMTDB_SID="-MGMTDB"
GNS_ADDR_LIST=mycluster-gns.example.com
CLUSTER_NAME=node1-cluster
AFD_CONF=true
MGMTDB_PWDFILE=
OPC_CLUSTER_TYPE=
VOTING_DISKS=
SILENT=false
VNDR_CLUSTER=false
TZ=localtime
GPNP_PA=
DC_HOME=/tmp/deinstall2016-10-06_09-36-04AM/logs/
CSS_LEASEDURATION=400
REMOTE_NODES=node2
ASM_SPFILE=
NEW_NODEVIPS='n1-vip/255.255.252.0/eth0,n2-vip/255.255.252.0/eth0'
SCAN_NAME=node1-cluster-scan.node1-cluster.com
RIM_NODE_LIST=
INVENTORY_LOCATION=/u01/app/oraInventory
Note:
Do not use quotation marks with variables except in the following cases:
• Around addresses in CRS_NODEVIPS:
CRS_NODEVIPS='n1-vip/255.255.252.0/eth0,n2-vip/255.255.252.0/eth0'
• Around interface names in NETWORKS:
NETWORKS="eth0"/192.0.2.1\:public,"eth1"/10.0.0.1\:cluster_interconnect
VIP1_IP=192.0.2.2
# cd /u01/app/18.0.0/grid/crs/install
b. Proceed to step 7.
Installing in a Different Location than Oracle Restart
a. Set up Oracle Grid Infrastructure software in the new Grid home software
location as described in Installing Only the Oracle Grid Infrastructure Software.
b. Proceed to step 7.
9. Set the environment variables as follows:
13. Mount the Oracle ASM disk group used by Oracle Restart.
16. Add the Oracle Database for support by Oracle Grid Infrastructure for a cluster,
using the configuration information you recorded in step 1. Use the following
command syntax, where db_unique_name is the unique name of the database on
the node, and nodename is the name of the node:
Relinking Oracle Grid Infrastructure for a Cluster Binaries
c. Add each service to the database, using the command srvctl add service.
For example, add myservice as follows:
17. Add nodes to your cluster, as required, using the Oracle Grid Infrastructure
installer.
See Also:
Oracle Clusterware Administration and Deployment Guide for information
about adding nodes to your cluster.
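Steps 16 and 17 above re-register the database and its services with the cluster. As a rough sketch only (the database name, home, node, and service name are placeholders, and srvctl options vary by release, so verify them against your srvctl documentation), the commands can be composed like this:

```shell
#!/bin/sh
# Compose (but do not run) hypothetical srvctl registration commands.
# Every value below is a placeholder for illustration.
DB_UNIQUE_NAME=mydb
DB_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
NODE=node1
SERVICE=myservice

ADD_DB="srvctl add database -db $DB_UNIQUE_NAME -oraclehome $DB_HOME -node $NODE"
ADD_SVC="srvctl add service -db $DB_UNIQUE_NAME -service $SERVICE"

echo "$ADD_DB"
echo "$ADD_SVC"
```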
Caution:
Before relinking executables, you must shut down all executables that run in
the Oracle home directory that you are relinking. In addition, shut down
applications linked with Oracle shared libraries. If present, unmount all
Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
filesystems.
As root user:
# cd Grid_home/crs/install
# rootcrs.sh -unlock
# cd Grid_home/rdbms/install/
# ./rootadd_rdbms.sh
# cd Grid_home/crs/install
# rootcrs.sh -lock
You must relink the Oracle Clusterware and Oracle ASM binaries every time you apply
an operating system patch or after you perform an operating system upgrade that
does not replace the root file system. For an operating system upgrade that results in
a new root file system, you must remove the node from the cluster and add it back into
the cluster.
For upgrades from previous releases, if you want to deinstall the prior release Grid
home, then you must first unlock the prior release Grid home. Unlock the previous
release Grid home by running the command rootcrs.sh -unlock from the previous
release home. After the script has completed, you can run the deinstall command.
Note:
Before changing the Grid home, you must shut down all executables that run
in the Grid home directory that you are relinking. In addition, shut down
applications linked with Oracle shared libraries.
$ cd /u01/app/18.0.0/grid/bin
$ ./crsctl stop crs
3. As grid user, detach the existing Grid home by running the following command,
where /u01/app/18.0.0/grid is the existing Grid home location:
4. As root, move the Grid binaries from the old Grid home location to the new Grid
home location. For example, where the old Grid home is /u01/app/18.0.0/grid
and the new Grid home is /u01/app/18c:
# mkdir /u01/app/18c
# cp -pR /u01/app/18.0.0/grid /u01/app/18c
# cd /u01/app/18c/grid/crs/install
# ./rootcrs.sh -unlock -dstcrshome /u01/app/18c/grid
6. Clone the Oracle Grid Infrastructure installation, using the instructions provided in
Oracle Clusterware Administration and Deployment Guide.
When you navigate to the Grid_home/clone/bin directory and run the clone.pl
script, provide values for the input parameters that provide the path information for
the new Grid home.
The Oracle Clusterware and Oracle ASM binaries are relinked when you clone the
Oracle Grid Infrastructure installation.
7. As root again, enter the following command to start up in the new home location:
# cd /u01/app/18c/grid/crs/install
# ./rootcrs.sh -move -dstcrshome /u01/app/18c/grid
Note:
While cloning, ensure that you do not change the Oracle home base,
otherwise the move operation fails.
Note:
Stop any databases, services, and listeners that may be installed and
running before deconfiguring Oracle Clusterware. In addition, dismount
Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
and disable Oracle Automatic Storage Management Dynamic Volume
Manager (Oracle ADVM) volumes.
Caution:
Commands used in this section remove the Oracle Grid infrastructure
installation for the entire cluster. If you want to remove the installation from
an individual node, then see Oracle Clusterware Administration and
Deployment Guide.
3. Run rootcrs.sh with the -deconfig and -force flags. For example:
# ./rootcrs.sh -deconfig -force
The -lastnode flag completes deconfiguration of the cluster, including the OCR
and voting files.
Note:
Run the rootcrs.sh -deconfig -force -lastnode command on a Hub
Node. Deconfigure all Leaf Nodes before you run the command with the
-lastnode flag.
Grid_home/deinstall/deinstall.sh
2. Complete the deinstallation by running the root script on all the nodes when
prompted.
# rootcrs.sh -deconfig
3. Delete the Member Cluster Manifest File created for the Oracle Member Cluster
and stored on the Oracle Domain Services Cluster:
Related Topics
• Oracle Clusterware Administration and Deployment Guide
A
Installing and Configuring Oracle Database
Using Response Files
Review the following topics to install and configure Oracle products using response
files.
• How Response Files Work
Response files can assist you with installing an Oracle product multiple times on
multiple computers.
• Reasons for Using Silent Mode or Response File Mode
Review this section for use cases for running the installer in silent mode or
response file mode.
• Using Response Files
Review this information to use response files.
• Preparing Response Files
Review this information to prepare response files for use during silent mode or
response file mode installations.
• Running Oracle Universal Installer Using a Response File
After creating the response file, run Oracle Universal Installer at the command line,
specifying the response file you created, to perform the installation.
• Running Configuration Assistants Using Response Files
You can run configuration assistants in response file or silent mode to configure
and start Oracle software after it is installed on the system. To run configuration
assistants in response file or silent mode, you must copy and edit a response file
template.
• Postinstallation Configuration Using Response File Created During Installation
Use response files to configure Oracle software after installation. You can use the
same response file created during installation to also complete postinstallation
configuration.
• Postinstallation Configuration Using the ConfigToolAllCommands Script
You can create and run a response file configuration after installing Oracle
software. The configToolAllCommands script requires users to create a
second response file, of a different format than the one used for installing the
product.
Reasons for Using Silent Mode or Response File Mode
Typically, the installer runs in interactive mode, which means that it prompts you to
provide information in graphical user interface (GUI) screens. When you use response
files to provide this information, you run the installer from a command prompt using
either of the following modes:
• Silent mode
If you include responses for all of the prompts in the response file and specify the
-silent option when starting the installer, then it runs in silent mode. During a
silent mode installation, the installer does not display any screens. Instead, it
displays progress information in the terminal that you used to start it.
• Response file mode
If you include responses for some or all of the prompts in the response file and
omit the -silent option, then the installer runs in response file mode. During a
response file mode installation, the installer displays all the screens: screens for which you
specify information in the response file, and also screens for which you did
not specify the required information in the response file.
You define the settings for a silent or response file installation by entering values for
the variables listed in the response file. For example, to specify the Oracle home
name, provide the Oracle home path for the ORACLE_HOME variable in the response file:
ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
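A response file is a plain-text list of name=value assignments, one per line, with comment lines beginning with #. The following minimal sketch reuses variable names that appear elsewhere in this appendix; the values are placeholders, not recommendations:

```
# Illustrative response file fragment (variable names from this appendix)
ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
oracle.install.asm.SYSASMPassword=password
oracle.install.config.emAdminPassword=password
```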
Silent mode: Use silent mode for the following installations:
• Complete an unattended installation, which you schedule using operating
system utilities such as at.
• Complete several similar installations on multiple systems without user
interaction.
• Install the software on a system that does not have X Window System
software installed on it.
The installer displays progress information on the terminal that you used to
start it, but it does not display any of the installer screens.

Response file mode: Use response file mode to complete similar Oracle software installations on
more than one system, providing default answers to some, but not all, of the
installer prompts.
Preparing Response Files
Note:
You must complete all required preinstallation tasks on a system before
running the installer in silent or response file mode.
Note:
If you copied the software to a hard disk, then the response files are located
in the /response directory.
All response file templates contain comment entries, sample formats, examples, and
other useful instructions. Read the response file instructions to understand how to
specify values for the response file variables, so that you can customize your
installation.
The following table lists the response files provided with this software:
Table A-1 Response Files for Oracle Database and Oracle Grid Infrastructure
Caution:
When you modify a response file template and save a file for use, the
response file may contain plain text passwords. Ownership of the response
file should be given to the Oracle software installation owner only, and
permissions on the response file should be changed to 600. Oracle strongly
recommends that database administrators or other administrators delete or
secure response files when they are not in use.
$ cp $ORACLE_HOME/install/response/db_install.rsp local_directory
Note:
The installer or configuration assistant fails if you do not correctly
configure the response file. Also, ensure that your response file name
has the .rsp suffix.
4. Secure the response file by changing the permissions on the file to 600:
$ chmod 600 /local_dir/db_install.rsp
Ensure that only the Oracle software owner user can view or modify response files
or consider deleting them after the installation succeeds.
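The copy-and-secure steps above can be sketched end to end. The paths below are illustrative stand-ins; in a real run you would copy the template from $ORACLE_HOME/install/response instead of creating a dummy file:

```shell
# Create a stand-in response file (a real run copies the template:
#   cp $ORACLE_HOME/install/response/db_install.rsp /tmp/rsp_demo/)
mkdir -p /tmp/rsp_demo
printf 'ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1\n' > /tmp/rsp_demo/db_install.rsp

# Restrict access to the installation owner only
chmod 600 /tmp/rsp_demo/db_install.rsp

# Verify: permissions should read -rw-------
ls -l /tmp/rsp_demo/db_install.rsp
```

Because response files can contain plain-text passwords, treat this 600 permission step as mandatory, not optional.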
Note:
A fully-specified response file for an Oracle Database installation
contains the passwords for database administrative accounts and for a
user who is a member of the OSDBA group (required for automated
backups).
Related Topics
• Oracle Universal Installer User's Guide
Note:
OUI does not save passwords while recording the response file.
Running Oracle Universal Installer Using a Response File
Click Cancel if you do not want to continue with the installation. The
installation stops, but the recorded response file is retained.
Note:
Ensure that your response file name has the .rsp suffix.
5. Before you use the saved response file on another system, edit the file and make
any required changes. Use the instructions in the file as a guide when editing it.
$ $ORACLE_HOME/runInstaller -help
$ /u01/app/18.0.0/grid/gridSetup.sh -help
Note:
You do not have to set the DISPLAY environment variable if you are
completing a silent mode installation.
4. To start the installer in silent or response file mode, enter a command similar to
the following:
• For Oracle Database:
$ $ORACLE_HOME/runInstaller [-silent] \
-responseFile responsefilename
• For Oracle Grid Infrastructure:
$ /u01/app/18.0.0/grid/gridSetup.sh [-silent] \
-responseFile responsefilename
Note:
Do not specify a relative path to the response file. If you specify a
relative path, then the installer fails.
In this example:
• -silent runs the installer in silent mode.
• responsefilename is the full path and file name of the installation response file
that you configured.
5. If this is the first time you are installing Oracle software on your system, then
Oracle Universal Installer prompts you to run the orainstRoot.sh script.
Log in as the root user and run the orainstRoot.sh script:
$ su root
password:
# /u01/app/oraInventory/orainstRoot.sh
Note:
You do not have to manually create the oraInst.loc file. Running the
orainstRoot.sh script is sufficient as it specifies the location of the
Oracle Inventory directory.
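For reference, the oraInst.loc file that orainstRoot.sh maintains typically contains entries of the following form; the values shown are illustrative, not mandated:

```
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
```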
6. When the installation completes, log in as the root user and run the root.sh
script. For example:
$ su root
password:
# $ORACLE_HOME/root.sh
Note:
If you copied the software to a hard disk, then the response file template is
located in the /response directory.
Running Configuration Assistants Using Response Files
$ cp /directory_path/assistants/dbca/dbca.rsp local_directory
In this example, directory_path is the path of the directory where you have
copied the installation binaries.
As an alternative to editing the response file template, you can also create a
database by specifying all required information as command line options when you
run Oracle DBCA. For information about the list of options supported, enter the
following command:
$ $ORACLE_HOME/bin/dbca -help
$ vi /local_dir/dbca.rsp
Note:
Oracle DBCA fails if you do not correctly configure the response file.
4. Log in as the Oracle software owner user, and set the ORACLE_HOME environment
variable to specify the correct Oracle home directory.
5. To run Oracle DBCA in response file mode, set the DISPLAY environment variable.
6. Use the following command syntax to run Oracle DBCA in silent or response file
mode using a response file:
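A typical invocation, reconstructed from the option descriptions below, has the following form; run dbca -help, as shown earlier, to confirm the exact option spelling in your release:

```
$ $ORACLE_HOME/bin/dbca [-silent] -responseFile /local_dir/dbca.rsp
```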
In this example:
• -silent option indicates that Oracle DBCA runs in silent mode.
• local_dir is the full path of the directory where you copied the dbca.rsp
response file template.
During configuration, Oracle DBCA displays a window that contains the status
messages and a progress bar.
$ cp /directory_path/assistants/netca/netca.rsp local_directory
In this example, directory_path is the path of the directory where you have copied
the installation binaries.
2. Open the response file in a text editor:
$ vi /local_dir/netca.rsp
Note:
Net Configuration Assistant fails if you do not correctly configure the
response file.
4. Log in as the Oracle software owner user, and set the ORACLE_HOME environment
variable to specify the correct Oracle home directory.
5. Enter a command similar to the following to run Net Configuration Assistant in
silent mode:
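A typical invocation, consistent with the option descriptions that follow, is sketched below; verify the exact option spelling against the netca help output for your release:

```
$ $ORACLE_HOME/bin/netca -silent -responsefile /local_dir/netca.rsp
```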
In this command:
• The -silent option indicates to run Net Configuration Assistant in silent mode.
• local_dir is the full path of the directory where you copied the netca.rsp
response file template.

Postinstallation Configuration Using Response File Created During Installation
Oracle strongly recommends that you maintain security with a password response file:
• Permissions on the response file should be set to 600.
• The owner of the response file should be the installation owner user, with the
group set to the central inventory (oraInventory) group.
Example A-1 Response File Passwords for Oracle Grid Infrastructure (grid
user)
grid.install.crs.config.ipmi.bmcPassword=password
grid.install.asm.SYSASMPassword=password
grid.install.asm.monitorPassword=password
grid.install.config.emAdminPassword=password
If you do not have a BMC card, or you do not want to enable IPMI, then leave the
ipmi.bmcPassword input field blank.
If you do not want to enable Oracle Enterprise Manager for management, then leave
the emAdminPassword password field blank.
Example A-2 Response File Passwords for Oracle Grid Infrastructure for a
Standalone Server (oracle user)
oracle.install.asm.SYSASMPassword=password
oracle.install.asm.monitorPassword=password
oracle.install.config.emAdminPassword=password
If you do not want to enable Oracle Enterprise Manager for management, then leave
the emAdminPassword password field blank.
Example A-3 Response File Passwords for Oracle Database (oracle user)
This example illustrates the passwords to specify for use with the database
configuration assistants.
oracle.install.db.config.starterdb.password.SYS=password
oracle.install.db.config.starterdb.password.SYSTEM=password
oracle.install.db.config.starterdb.password.DBSNMP=password
oracle.install.db.config.starterdb.password.PDBADMIN=password
oracle.install.db.config.starterdb.emAdminPassword=password
oracle.install.db.config.asm.ASMSNMPPassword=password
1. Edit the response file and specify the required passwords for your configuration.
You can use the response file created during installation, located
at $ORACLE_HOME/install/response/product_timestamp.rsp. For example:
oracle.install.asm.SYSASMPassword=password
oracle.install.config.emAdminPassword=password
grid.install.asm.SYSASMPassword=password
grid.install.config.emAdminPassword=password
2. Change directory to the Oracle home containing the installation software. For
example:
For Oracle Grid Infrastructure:

cd Grid_home

For Oracle Database:

cd $ORACLE_HOME
For Oracle Database, you can also run the response file located in the
directory $ORACLE_HOME/inventory/response/:
The postinstallation configuration tool runs the installer in the graphical user
interface mode, displaying the progress of the postinstallation configuration.
Specify the [-silent] option to run the postinstallation configuration in the
silent mode.
For example, for Oracle Grid Infrastructure:
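In recent releases, the postinstallation configuration tool described above is invoked through the setup script with the -executeConfigTools option. The following is a sketch of that form, using the same responsefilename placeholder as the installer examples in this appendix; verify the option against the documentation for your release:

```
$ Grid_home/gridSetup.sh -executeConfigTools -responseFile responsefilename [-silent]
```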
Postinstallation Configuration Using the ConfigToolAllCommands Script
The configToolAllCommands password response file has the following syntax options:
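Inferred from the examples in this section, the entries follow a generic pattern of an internal component name, a vertical bar, a variable name, and a value:

```
internal_component_name|variable_name=value
```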
For example:
oracle.crs|S_ASMPASSWORD=PassWord
The database configuration assistants require the SYS, SYSTEM, and DBSNMP
passwords for use with Oracle DBCA. You may need to specify the following additional
passwords, depending on your system configuration:
• If the database is using Oracle Automatic Storage Management (Oracle ASM) for
storage, then you must specify a password for the S_ASMSNMPPASSWORD variable. If
you are not using Oracle ASM, then leave the value for this password variable
blank.
• If you create a multitenant container database (CDB) with one or more pluggable
databases (PDBs), then you must specify a password for the S_PDBADMINPASSWORD
variable. If you are not creating a CDB, then leave the value for this password
variable blank.
Oracle strongly recommends that you maintain security with a password response file:
• Permissions on the response file should be set to 600.
• The owner of the response file should be the installation owner user, with the
group set to the central inventory (oraInventory) group.
1. Create the password response file. For example:

$ touch pwdrsp.properties
2. Open the file with a text editor, and cut and paste the sample password file
contents, as shown in the examples, modifying as needed.
3. Change permissions to secure the password response file. For example:

$ chmod 600 pwdrsp.properties
$ ls -al pwdrsp.properties
-rw------- 1 oracle oinstall 0 Apr 30 17:30 pwdrsp.properties
Example A-4 Password response file for Oracle Grid Infrastructure (grid user)
grid.crs|S_ASMPASSWORD=password
grid.crs|S_OMSPASSWORD=password
grid.crs|S_BMCPASSWORD=password
grid.crs|S_ASMMONITORPASSWORD=password
If you do not have a BMC card, or you do not want to enable IPMI, then leave the
S_BMCPASSWORD input field blank.
Example A-5 Password response file for Oracle Grid Infrastructure for a
Standalone Server (oracle user)
oracle.crs|S_ASMPASSWORD=password
oracle.crs|S_OMSPASSWORD=password
oracle.crs|S_ASMMONITORPASSWORD=password
Example A-6 Password response file for Oracle Database (oracle user)
This example provides a template for a password response file to use with the
database configuration assistants.
oracle.server|S_SYSPASSWORD=password
oracle.server|S_SYSTEMPASSWORD=password
oracle.server|S_EMADMINPASSWORD=password
oracle.server|S_DBSNMPPASSWORD=password
oracle.server|S_ASMSNMPPASSWORD=password
oracle.server|S_PDBADMINPASSWORD=password
If you do not want to enable Oracle Enterprise Manager for management, then leave
those password fields blank.
configToolAllCommands RESPONSE_FILE=/path/name.properties
For example:
$ ./configToolAllCommands RESPONSE_FILE=/home/oracle/pwdrsp.properties
B
Completing Preinstallation Tasks Manually
You can complete the preinstallation configuration tasks manually.
Oracle recommends that you use Oracle Universal Installer and Cluster Verification
Utility fixup scripts to complete minimal configuration settings. If you cannot use fixup
scripts, then complete minimum system settings manually.
• Configuring SSH Manually on All Cluster Nodes
Passwordless SSH configuration is a mandatory installation requirement. SSH is
used during installation to configure cluster member nodes, and SSH is used after
installation by configuration assistants, Oracle Enterprise Manager, Opatch, and
other features.
• Configuring Kernel Parameters on Oracle Solaris
These topics explain how to configure kernel parameters manually for Oracle
Solaris if you cannot complete them using the fixup scripts.
• Configuring Shell Limits for Oracle Solaris
For each installation software owner user account, check the shell limits for
installation.
To determine whether SSH is running, enter the following command:

$ pgrep sshd
Configuring SSH Manually on All Cluster Nodes
If SSH is running, then the response to this command is one or more process ID
numbers. In the home directory of the installation software owner (grid, oracle), use
the command ls -al to ensure that the .ssh directory is owned and writable only by
the user.
You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH
1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you
can use either RSA or DSA. The instructions that follow are for SSH1. If you have an
SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution
documentation to configure SSH1 compatibility or to configure SSH2 with DSA.
$ id
$ id grid
Ensure that the group and user IDs of the Oracle user and of the terminal window
process that you are using are identical.
For example:
3. If necessary, create the .ssh directory in the grid user's home directory, and set
permissions on it to ensure that only the grid user has read and write
permissions:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
Note that the SSH configuration fails if the permissions are not set to 700.
4. Enter the following command:
$ /usr/bin/ssh-keygen -t dsa
At the prompts, accept the default location for the key file (press Enter).
Never distribute the private key to anyone not authorized to perform Oracle
software installations.
This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the
private key to the ~/.ssh/id_dsa file.
5. Repeat steps 1 through 4 on each node that you intend to make a member of the
cluster, using the DSA key.
In the .ssh directory, you should see the id_dsa.pub keys that you have created,
and the file authorized_keys.
2. On the local node, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the
authorized_keys file to the oracle user .ssh directory on a remote node. The
following example is with SCP, on a node called node2, with the Oracle Grid
Infrastructure owner grid, where the grid user path is /home/grid:
a. You are prompted to accept a DSA key. Enter yes, and you see that the node
you are copying to is added to the known_hosts file.
b. When prompted, provide the password for the grid user, which should be the
same on all nodes in the cluster. The authorized_keys file is copied to the
remote node.
Your output should be similar to the following, where xxx represents parts of a
valid IP address:
3. Using SSH, log in to the node where you copied the authorized_keys file. Then
change to the .ssh directory, and using the cat command, add the DSA keys for
the second node to the authorized_keys file, pressing Enter when you are
prompted for a password, so that passwordless SSH is set up:
4. Repeat steps 2 and 3 from each node to each other member node in the cluster.
5. When you have added keys from each cluster node member to the
authorized_keys file on the last node you want to have as a cluster node
member, then use scp to copy the authorized_keys file with the keys from all
nodes back to each cluster node member, overwriting the existing version on the
other nodes. To confirm that you have all nodes in the authorized_keys file, enter
the command more authorized_keys, and determine if there is a DSA key for
each member node. The file lists the type of key (ssh-dsa), followed by the key,
and then followed by the user and server. For example:
The grid user's ~/.ssh/authorized_keys file on every node must contain the
contents from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster
nodes.
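The distribution requirement above can be sketched with ordinary files standing in for the per-node public keys. The key text here is a placeholder, not real DSA key material:

```shell
# Simulate one id_dsa.pub per cluster node (placeholder key material)
mkdir -p /tmp/ssh_demo
echo 'ssh-dss AAAAB3...node1 grid@node1' > /tmp/ssh_demo/node1_id_dsa.pub
echo 'ssh-dss AAAAB3...node2 grid@node2' > /tmp/ssh_demo/node2_id_dsa.pub

# authorized_keys must hold the public key of every member node
cat /tmp/ssh_demo/node1_id_dsa.pub /tmp/ssh_demo/node2_id_dsa.pub \
    > /tmp/ssh_demo/authorized_keys
chmod 600 /tmp/ssh_demo/authorized_keys

# One line per member node confirms the file is complete
wc -l < /tmp/ssh_demo/authorized_keys
```

In a real cluster, the concatenation happens via scp and cat across nodes, as described in the steps above, but the invariant is the same: every node's public key appears in every node's authorized_keys.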
cluster to run SSH from the local node to each node, including from the local node
to itself, and from each node to each other node:
At the end of this process, the public host name for each member node should be
registered in the known_hosts file for all other cluster nodes. If you are using a
remote client to connect to the local node, and you see a message similar to
"Warning: No xauth data; using fake authentication data for X11
forwarding," then this means that your authorized keys file is configured correctly,
but your SSH configuration has X11 forwarding enabled. To correct this issue, see
Setting Remote Display and X11 Forwarding Configuration.
3. Repeat step 2 on each cluster node member.
If you have configured SSH correctly, then you can now use the ssh or scp commands
without being prompted for a password. For example:
If any node prompts for a password, then verify that the ~/.ssh/authorized_keys file
on that node contains the correct public keys, and that you have created an Oracle
software owner with identical group membership and IDs.
Configuring Kernel Parameters on Oracle Solaris
project.max-shm-memory
Related Topics
• Administering Oracle Solaris 11
2. To change the current values, use the prctl command. For example:
• To modify the value of max-shm-memory to 6 GB:
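A command of the following form performs that change. The project name group.dba matches the example later in this section; verify the exact syntax against the prctl man page for your Solaris release:

```
# prctl -n project.max-shm-memory -v 6gb -r -i project group.dba
```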
Note:
When you use the prctl command (Resource Control) to change
system parameters, you do not have to restart the system for these
parameter changes to take effect. However, the changed parameters do
not persist after a system restart.
1. By default, Oracle instances are run as the oracle user of the dba group. A project
with the name group.dba is created to serve as the default project for the oracle
user. Run the id command to verify the default project for the oracle user:
# su - oracle
$ id -p
uid=100(oracle) gid=100(dba) projid=100(group.dba)
$ exit
2. To set the maximum shared memory size to 2 GB, run the projmod command:
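Reconstructed from the /etc/project entry shown below, which records (privileged,2147483648,deny) for group.dba, a projmod command of the following form sets that resource control; verify the syntax against the projmod man page for your Solaris release:

```
# projmod -sK "project.max-shm-memory=(privileged,2G,deny)" group.dba
```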
# cat /etc/project
system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
group.dba:100:Oracle default project ::: project.max-shm-
memory=(privileged,2147483648,deny)
4. To verify that the resource control is active, check process ownership, and run the
commands id and prctl:
# su - oracle
$ id -p
uid=100(oracle) gid=100(dba) projid=100(group.dba)
$ prctl -n project.max-shm-memory -i process $$
process: 5754: -bash
NAME                      PRIVILEGE    VALUE    FLAG    ACTION    RECIPIENT
project.max-shm-memory    privileged   2.00GB   -       deny      -
Note:
The value for the maximum shared memory depends on the SGA
requirements and should be set to a value greater than the SGA size.
Related Topics
• Oracle Solaris Tunable Parameters Reference Manual
On Oracle Solaris 10, use the ndd command to check your current range for
ephemeral ports:

32768
65535

On Oracle Solaris 11, use the ipadm command to check your current range for
ephemeral ports:
In the preceding examples, the ephemeral ports are set to the default range
(32768-65535).
If necessary for your anticipated workload or number of servers, update the UDP and
TCP ephemeral port range to a broader range. For example:
On Oracle Solaris 10:
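The commands below are a sketch based on the standard Solaris anonymous-port tunables; confirm the property names and values in your Solaris tuning documentation before applying them:

```
# /usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
# /usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
# /usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
# /usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500

On Oracle Solaris 11:

# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 udp
```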
Oracle recommends that you make these settings permanent. Refer to your system
administration documentation for information about how to automate this ephemeral
port range alteration on system restarts.

Configuring Shell Limits for Oracle Solaris
Note:
The shell limit values in this section are minimum values only. For production
database systems, Oracle recommends that you tune these values to
optimize the performance of the system. See your operating system
documentation for more information about configuring shell limits.
The ulimit settings determine process memory-related resource limits. Verify that
the following shell limits are set to the values shown:
ulimit -s
ulimit -n
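The checks above can be scripted as follows. The required minimum values come from your platform documentation and vary by release, so the script only reports the current settings rather than enforcing specific numbers:

```shell
# Query the current shell limits for the installation owner's shell;
# compare the output against the minimums in your platform documentation.
stack_kb=$(ulimit -s)
open_files=$(ulimit -n)
echo "stack size (KB): $stack_kb"
echo "open file descriptors: $open_files"
```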
C
Deploying Oracle RAC on Oracle Solaris
Cluster Zone Clusters
Oracle Solaris Cluster provides the capability to create high-availability zone clusters.
Installing Oracle Real Application Clusters (Oracle RAC) in a zone cluster allows you
to have separate database versions or separate deployments of the same database
(for example, one for production and one for development).
This appendix lists use cases for Oracle RAC deployment in Oracle Solaris Cluster
zone clusters and also provides links to documentation resources for the deployment
tasks.
• About Oracle RAC Deployment in Oracle Solaris Cluster Zone Clusters
A zone cluster consists of several Oracle Solaris Zones, each of which resides on
its own separate server; the zones that comprise the cluster are linked together
into a single virtual cluster.
• Prerequisites for Oracle RAC Deployment in Oracle Solaris Cluster Zone Clusters
Review the prerequisites for deploying Oracle Real Application Clusters (Oracle
RAC) in Oracle Solaris Cluster zone clusters.
• Deploying Oracle RAC in the Global Zone
This deployment scenario consists of multiple servers, on which you install Oracle
Real Application Clusters (Oracle RAC) in the global zone.
• Deploying Oracle RAC in a Zone Cluster
In this deployment scenario, you can install Oracle Real Application Clusters
(Oracle RAC) in a zone cluster. A zone cluster is a cluster of Oracle Solaris non-
global zones.
Prerequisites for Oracle RAC Deployment in Oracle Solaris Cluster Zone Clusters
tiers and administrative domains from each other, while taking advantage of the
simplified administration provided by Oracle Solaris Cluster.
Deploying Oracle RAC in the Global Zone
Deploying Oracle RAC in a Zone Cluster
# clresource status
3. Prepare the environment and then install and configure Oracle Grid Infrastructure
and Oracle Database.
See Configuring Users, Groups and Environments for Oracle Grid Infrastructure
and Oracle Database for information about configuring users, groups, and
environments.
See Installing Oracle Grid Infrastructure for information about installing Oracle Grid
Infrastructure.
See Oracle Real Application Clusters Installation Guide for information about
installing Oracle RAC databases.
4. Create Oracle Solaris Cluster resources, link them, and bring them online as
described in Configuring Resources for Oracle RAC Database Instances.
D
Optimal Flexible Architecture
Oracle Optimal Flexible Architecture (OFA) rules are a set of configuration guidelines
created to ensure well-organized Oracle installations, simplifying administration,
support, and maintenance.
• About the Optimal Flexible Architecture Standard
Oracle Optimal Flexible Architecture (OFA) rules help you to organize database
software and configure databases to allow multiple databases, of different
versions, owned by different users to coexist.
• About Multiple Oracle Homes Support
Oracle Database supports multiple Oracle homes. You can install this release or
earlier releases of the software more than once on the same system, in different
Oracle home directories.
• About the Oracle Inventory Directory and Installation
The directory that you designate as the Oracle Inventory directory (oraInventory)
stores an inventory of all software installed on the system.
• Oracle Base Directory Naming Convention
The Oracle Base directory is the database home directory for Oracle Database
installation owners, and the log file location for Oracle Grid Infrastructure owners.
• Oracle Home Directory Naming Convention
By default, Oracle Universal Installer configures Oracle home directories using
these Oracle Optimal Flexible Architecture conventions.
• Optimal Flexible Architecture File Path Examples
Review examples of hierarchical file mappings of an Optimal Flexible Architecture-
compliant installation.
About Multiple Oracle Homes Support
Note:
OFA assists in identifying an ORACLE_BASE and its Automatic
Diagnostic Repository (ADR) diagnostic data, so that incidents are collected properly.
About the Oracle Inventory Directory and Installation

Oracle Base Directory Naming Convention
If you have neither set ORACLE_BASE, nor created an OFA-compliant path, then the
Oracle Inventory directory is placed in the home directory of the user that is performing
the installation, and the Oracle software is installed in the path /app/owner, where
owner is the Oracle software installation owner. For example:
/home/oracle/oraInventory
/home/oracle/app/oracle/product/18.0.0/dbhome_1
Example: /u01/app/oracle
Description: Oracle Database Oracle base, where the Oracle Database software
installation owner name is oracle. The Oracle Database binary home is
located underneath the Oracle base path.

Example: /u01/app/grid
Description: Oracle Grid Infrastructure Oracle base, where the Oracle Grid Infrastructure
software installation owner name is grid.
Caution:
The Oracle Grid Infrastructure Oracle base
should not contain the Oracle Grid
Infrastructure binaries for an Oracle Grid
Infrastructure for a cluster installation.
Permissions for the file path to the Oracle Grid
Infrastructure binary home are changed to root
during installation.
Oracle Home Directory Naming Convention
Variable    Description
pm          A mount point name.
s           A standard directory name.
u           The name of the owner of the directory.
v           The version of the software.
type        The type of installation. For example: Database (dbhome), Client (client),
            or Oracle Grid Infrastructure (grid).
n           An optional counter, which enables you to install the same product more
            than once in the same Oracle base directory. For example: Database 1 and
            Database 2 (dbhome_1, dbhome_2).
For example, the following path is typical for the first installation of Oracle Database on
this system:
/u01/app/oracle/product/18.0.0/dbhome_1
Note:
• The Grid homes are examples of Grid homes used for an Oracle Grid
Infrastructure for a standalone server deployment (Oracle Restart), or a
Grid home used for an Oracle Grid Infrastructure for a cluster
deployment (Oracle Clusterware). You can have either an Oracle Restart
deployment, or an Oracle Clusterware deployment. You cannot have
both options deployed at the same time.
• Oracle Automatic Storage Management (Oracle ASM) is included as part
of an Oracle Grid Infrastructure installation. Oracle recommends that you
use Oracle ASM to provide greater redundancy and throughput.
Optimal Flexible Architecture File Path Examples
Table D-2 Optimal Flexible Architecture Hierarchical File Path Examples

Directory: /
Description: Root directory

Directory: /u01/app/oracle/
Description: Oracle base directory for user oracle. There can be many Oracle
Database installations on a server, and many Oracle Database software
installation owners. Oracle software homes that an Oracle installation owner
owns should be located in the Oracle base directory for the Oracle software
installation owner, unless that Oracle software is Oracle Grid Infrastructure
deployed for a cluster.

Directory: /u01/app/grid
Description: Oracle base directory for user grid. The Oracle home (Grid home) for
Oracle Grid Infrastructure for a cluster installation is located outside of
the Grid user. There can be only one Grid home on a server, and only one Grid
software installation owner. The Grid home contains log files and other
administrative files.

Directory: /u01/app/oracle/admin/
Description: Subtree for database administration files
Directory: /u01/app/oracle/fast_recovery_area/
Description: Subtree for recovery files

Directory: /u01/app/oracle/product/
Description: Common path for Oracle software products other than Oracle Grid
Infrastructure for a cluster

Directory: /u01/app/oracle/product/18.0.0/grid
Description: Oracle home directory for Oracle Grid Infrastructure for a standalone
server, owned by Oracle Database and Oracle Grid Infrastructure installation
owner oracle.

Directory: /u01/app/18.0.0/grid
Description: Oracle home directory for Oracle Grid Infrastructure for a cluster (Grid
home), owned by user grid before installation, and owned by root after
installation.
Index
A

adding Oracle ASM listener, 11-21
ASM_DISKSTRING, 8-12
asmadmin groups
    creating, 6-12
ASMCA
    used to create disk groups for older Oracle Database releases on Oracle ASM, 10-11
asmdba groups
    creating, 6-12
asmoper group
    creating, 6-12
ASMSNMP, 1-4
Automatic Diagnostic Repository (ADR), D-1
Automatic Storage Management Cluster File System
    See Oracle ACFS.

B

backupdba group
    creating, 6-13
Bash shell
    default user startup file, 6-23
bash_profile file, 6-23
batch upgrade, 11-13
binaries
    relinking, 10-12
binary files
    supported storage options for, 7-1
BMC
    configuring, 6-30
BMC interface
    preinstallation tasks, 6-29
Bourne shell
    default user startup file, 6-23

C

C shell
    default user startup file, 6-23
central inventory, 1-7, D-5
    See also Oracle inventory directory
    See also OINSTALL directory
checkdir error, 10-12, 11-4
checklists, 1-1
client-server configurations, D-2
cluster configuration
    Oracle Domain Services Cluster, 9-4
    Oracle Extended Clusters, 9-7
    Oracle Member Clusters, 9-5
    Oracle Standalone Clusters, 9-4
cluster file system
    storage option for data files, 7-5
cluster name, 1-4
    requirements for, 1-4
cluster nodes
    private network node interfaces, 1-4
    public network node names and addresses, 1-4
    virtual node names, 1-4, 5-4
Cluster Time Synchronization Service, 4-19
CLUSTER_INTERCONNECTS parameter, 5-3
clusterware
    requirements for third party clusterware, 1-4
commands
    /usr/sbin/swap, 2-2
    asmca, 8-24
    asmcmd, 8-10
    crsctl, 11-4
    df -h, 2-2
    df -k, 2-2
    grep "Memory size", 2-2
    gridSetup.sh, 9-30
    ipadm, 8-18
    ndd, 8-18
    nscd, 4-18
    root.sh, 10-4
    rootcrs.pl
        and deconfig option, 12-13
    rootcrs.sh, 10-12
    rootupgrade.sh, 11-4
    runcluvfy.sh, 9-30
    srvctl, 11-4
    umask, 6-23
    unset, 11-8
    useradd, 6-15
cron jobs, 1-11
F

failed install, 11-38
failed upgrade, 11-38
failure group
    characteristics of Oracle ASM failure group, 8-2, 8-15
    examples of Oracle ASM failure groups, 8-2
    Oracle ASM, 8-2
fast recovery area, 10-6
    filepath, D-5
    Grid home
        filepath, D-5
fencing
    and IPMI, 6-29
file mode creation mask
    setting, 6-23
file system
    storage option for data files, 7-5
files
    bash_profile, 6-23
    dbca.rsp, A-3
    editing shell startup file, 6-23
    enterprise.rsp, A-3
    login, 6-23
    profile, 6-23
    response files, A-3
filesets, 4-5

G

GIMR, 8-9
global zones
    deploying Oracle RAC, C-3
globalization, 1-11
GNS
    about, 5-11
    configuration example, 5-22
    configuring, 5-10
GNS client clusters
    and GNS client data file, 5-13
    GNS client data file required for installation, 5-12
    name resolution for, 5-12
GNS client data file
    how to create, 5-13
GNS virtual IP address, 1-4
grid home
    unlocking, 10-12
grid infrastructure management repository, 9-4
Grid Infrastructure Management Repository, 8-9
    about, 7-7
    global, 8-9
    local, 8-9
Grid user
    creating, 6-15
gridSetup script, 9-8, 9-15, 9-22
groups
    creating an Oracle Inventory Group, 6-3
    creating the asmadmin group, 6-12
    creating the asmdba group, 6-12
    creating the asmoper group, 6-12
    creating the backupdba group, 6-13
    creating the dba group, 6-13
    creating the dgdba group, 6-14
    creating the kmdba group, 6-14
    creating the racdba group, 6-14
    OINSTALL group, 1-3
    OSBACKUPDBA (backupdba), 6-9
    OSDBA (dba), 6-9
    OSDBA group (dba), 6-9
    OSDGDBA (dgdba), 6-9
    OSKMDBA (kmdba), 6-9
    OSOPER (oper), 6-9
    OSOPER group (oper), 6-9

H

hardware requirements
    display, 1-1
    IPMI, 1-1
    local storage for Oracle homes, 1-1
    network, 1-1
    RAM, 1-1
    tmp, 1-1
Shared Memory Resource Controls
    checking, B-7
highly available IP addresses (HAIP), 5-6, 5-7
host names
    legal host names, 1-4
Hub Nodes, 5-18, 5-19
hugepages, 1-3

I

image
    install, 9-2
image-based installation of Oracle Grid Infrastructure, 9-8, 9-15, 9-22
inaccessible nodes
    upgrading, 11-17
incomplete installations, 11-40
init.ora
    and SGA permissions, 10-8
installation
    cloning a Grid infrastructure installation to other nodes, 9-31
    response files, A-3
        preparing, A-3, A-5
installation (continued)
    response files (continued)
        templates, A-3
    silent mode, A-6
installation planning, 1-1
installation types
    and Oracle ASM, 8-2
installer screens
    ASM Storage Option, 8-14
    Cluster Node Information, 5-22
    Grid Plug and Play Information, 5-11, 5-21, 5-22
    Network Interface Usage, 5-19
    Node Selection screen, 11-14
installing Oracle Member Cluster, 9-22
interconnect, 1-4
interconnects
    single interface, 5-7
interfaces, 1-4
    requirements for private interconnect, 5-3
IPMI
    addresses not configurable by GNS, 6-30
    preinstallation tasks, 6-29
IPv4 requirements, 5-2, 5-8
IPv6 requirements, 5-2, 5-8

J

JDK requirements, 4-5

K

kernel parameters
    changing, B-8
    checking, B-7
    displaying, B-8
    tcp and udp, B-10
kernel parameters configuration, B-5
kmdba group
    creating, 6-14
Korn shell
    default user startup file, 6-23

L

Leaf Nodes, 5-18, 5-19
legal host names, 1-4
licensing, 1-11
login file, 6-23
LVM
    recommendations for Oracle ASM, 8-2

M

Management Database, 8-9
management repository service, 9-4
manifest file, 8-23
mask
    setting default file mode creation mask, 6-23
max_buf, 8-18
mixed binaries, 4-5
mode
    setting default file mode creation mask, 6-23
Multiple Oracle Homes Support
    advantages, D-2
multiversioning, D-2

N

Name Service Cache Daemon
    enabling, 4-18
Net Configuration Assistant (NetCA)
    response files, A-9
        running at command prompt, A-9
netca.rsp file, A-3
network interface cards
    requirements, 5-6
network requirements, 1-4
network, minimum requirements, 1-1
networks
    configuring interfaces, 5-25
    for Oracle Flex Clusters, 5-18, 5-19
    hardware minimum requirements, 5-6
    IP protocol requirements for, 5-2, 5-8
    manual address configuration example, 5-24
    Oracle Flex ASM, 1-4
    required protocols, 5-6
NFS
    and data files, 7-8
    and Oracle Clusterware files, 7-7
    buffer size requirements, 8-18
    for data files, 7-8
NFS mounts
    Direct NFS Client
        requirements, 7-8
    mtab, 7-8
    oranfstab, 7-8
noninteractive mode
    See response file mode

O

OCR
    See Oracle Cluster Registry
OFA, D-1
    See also Optimal Flexible Architecture
oifcfg, 5-3