VCS Administrator's Guide - AIX
December 2016
Last updated: 2016-12-04
Legal Notice
Copyright © 2016 Veritas Technologies LLC. All rights reserved.
Veritas, the Veritas Logo, Veritas InfoScale, and NetBackup are trademarks or registered
trademarks of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other
names may be trademarks of their respective owners.
This product may contain third party software for which Veritas is required to provide attribution
to the third party (“Third Party Programs”). Some of the Third Party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the third party legal notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction, release, performance, display, or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan [email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update and the document version on page 2. The latest documentation
is available on the Veritas website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
[Figure: a basic service group - an Application resource together with an IP Address and Storage resources]
Start procedure The application must have a command to start it and all resources it
may require. VCS brings up the required resources in a specific order,
then brings up the application by using the defined start procedure.
For example, to start an Oracle database, VCS must know which Oracle
utility to call, such as sqlplus. VCS must also know the Oracle user,
instance ID, Oracle home directory, and the pfile.
For example, you cannot stop a Web server by killing all httpd processes, because
doing so also stops the other Web servers running on the system.
If VCS cannot stop an application cleanly, it may call for a more forceful
method, like a kill signal. After a forced stop, a clean-up procedure may
be required for various process-specific and application-specific items
that may be left behind. These items include shared memory segments
or semaphores.
Introducing Cluster Server 27
About cluster control guidelines
Monitor procedure The application must have a monitor procedure that determines if the
specified application instance is healthy. The application must allow
individual monitoring of unique instances.
For example, the monitor procedure for a Web server connects to the
specified server and verifies that it serves Web pages. In a database
environment, the monitoring application can connect to the database
server and perform SQL commands to verify read and write access to
the database.
following options: linking /usr/local to a file system that is mounted from the shared
storage device, or mounting a file system from the shared device on /usr/local.
The application must also store data to disk instead of maintaining it in memory.
The takeover system must be capable of accessing all required information. This
requirement precludes the use of anything inside a single system inaccessible by
the peer. NVRAM accelerator boards and other disk caching mechanisms for
performance are acceptable, but must be done on the external array and not on
the local host.
About networking
Networking in the cluster is used for the following purposes:
■ Communications between the cluster nodes and the customer systems.
■ Communications between the cluster nodes.
See “About cluster control, communications, and membership” on page 41.
[Figure: a sample resource dependency graph - an Application depends on Database, File System, and IP Address resources, which in turn depend on Disk Group and Network resources]
Resource dependencies determine the order in which resources are brought online
or taken offline. For example, you must import a disk group before volumes in the
disk group start, and volumes must start before you mount file systems. Conversely,
you must unmount file systems before volumes stop, and volumes must stop before
you deport disk groups.
A parent is brought online after each child is brought online, and this continues up
the tree until finally the application starts. Conversely, to take a managed application
offline, VCS stops resources by beginning at the top of the hierarchy. In this example,
the application stops first, followed by the database application. Next, the IP address
and the file systems stop concurrently, because these resources have no resource
dependency between them, and this continues down the tree.
Child resources must be brought online before parent resources are brought online.
Parent resources must be taken offline before child resources are taken offline. If
resources do not have parent-child interdependencies, they can be brought online
or taken offline concurrently.
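In main.cf, such an ordering is expressed with requires statements. The following is a minimal sketch; the resource names are illustrative, not taken from the guide's sample configurations:

```
DiskGroup shared_dg (
    DiskGroup = shared1
)

Volume shared_vol (
    Volume = vol1
    DiskGroup = shared1
)

Mount home_mount (
    MountPoint = "/home"
    BlockDevice = "/dev/vx/dsk/shared1/vol1"
    FSType = vxfs
)

// Children must be online before parents: import the disk group,
// then start the volume, then mount the file system.
shared_vol requires shared_dg
home_mount requires shared_vol
```

Taking the group offline reverses the order: the Mount resource is unmounted first, then the volume is stopped, then the disk group is deported.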
[Figure: an atleast resource dependency - Application (res1) requires a minimum of 2 online IP resources from among IP (res2) through IP (res6)]
If two or more of the IP resources come online, the application attempts to come online.
If the number of online resources falls below the minimum requirement (in this
case, 2), the resource fault is propagated up the resource dependency tree.
Note: Veritas InfoScale Operations Manager and the Java GUI do not support the
atleast resource dependency, so the dependency is shown as a normal resource
dependency.
Categories of resources
Different types of resources require different levels of control.
Table 1-1 describes the three categories of VCS resources.
On-Only VCS starts On-Only resources, but does not stop them.
A single node can host any number of service groups, each providing a discrete
service to networked clients. If the server crashes, all service groups on that node
must be failed over elsewhere.
Service groups can be dependent on each other. For example, a managed
application might be a finance application that is dependent on a database
application. Because the managed application consists of all components that are
required to provide the service, service group dependencies create more complex
managed applications. When you use service group dependencies, the managed
application is the entire dependency tree.
See “About service group dependencies” on page 381.
If you do not use the product installer to install VCS, you must run the uuidconfig.pl
utility to configure the UUID for the cluster.
See “Configuring and unconfiguring the cluster UUID value” on page 155.
■ During initial node startup, to probe and determine the status of all
resources on the system.
■ After every online and offline operation.
■ Periodically, to verify that the resource remains in its correct state.
Under normal circumstances, the monitor entry point is run every
60 seconds when a resource is online. The entry point is run every
300 seconds when a resource is expected to be offline.
■ When you probe a resource using the following command:
# hares -probe res_name -sys system_name.
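The polling cadence described above can be sketched as a small shell helper. The values 60 and 300 are the default intervals the text cites; the function name is illustrative and not a VCS utility:

```shell
# Return the next poll delay for a resource, per the defaults above:
# 60 seconds while online, 300 seconds while expected to be offline.
next_monitor_interval() {
    if [ "$1" = "online" ]; then
        echo 60
    else
        echo 300
    fi
}

next_monitor_interval online     # prints 60
next_monitor_interval offline    # prints 300
```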
imf_init Initializes the agent to interface with the IMF notification module. This
function runs when the agent starts up.
imf_getnotification Gets notification about resource state changes. This function runs after
the agent initializes with the IMF notification module. This function
continuously waits for notification and takes action on the resource
upon notification.
Action Performs actions that can be completed in a short time and which are
outside the scope of traditional activities such as online and offline.
Some agents have predefined action scripts that you can run by invoking
the action function.
To see the updated information, you can invoke the info agent function
explicitly from the command line interface by running the following
command:
agent framework. You can enable or disable the intelligent monitoring functionality
of the VCS agents that are IMF-aware. For a list of IMF-aware agents, see the
Cluster Server Bundled Agents Reference Guide.
See “How intelligent resource monitoring works” on page 39.
See “Enabling and disabling intelligent resource monitoring for agents manually”
on page 140.
See “Enabling and disabling IMF for agents by using script” on page 142.
Poll-based monitoring can consume a fairly large percentage of system resources,
such as CPU and memory, on systems with a large number of resources. This not
only affects the performance of running applications, but also limits how many
resources an agent can monitor efficiently.
However, with IMF-based monitoring you can either eliminate poll-based monitoring
completely or reduce its frequency. For example, with IMF-based monitoring enabled
for processes, you can completely avoid the need for poll-based process online and
offline monitoring. Similarly, you can eliminate poll-based monitoring for VxFS
mounts when IMF monitoring is enabled. Such a reduction in monitoring footprint
makes more system resources available for other applications to consume.
Note: Intelligent Monitoring Framework for mounts is supported only for the VxFS,
CFS, and NFS mount types.
With IMF-enabled agents, VCS can effectively monitor a larger number of resources.
Thus, intelligent monitoring has the following benefits over poll-based monitoring:
■ Provides faster notification of resource state changes
■ Reduces VCS system utilization due to reduced monitor function footprint
■ Enables VCS to effectively monitor a large number of resources
Consider enabling IMF for an agent in the following cases:
■ You have a large number of process resources or mount resources under VCS
control.
■ You have any of the agents that are IMF-aware.
For information about IMF-aware agents, see the following documentation:
■ See the Cluster Server Bundled Agents Reference Guide for details on whether
your bundled agent is IMF-aware.
■ See the Storage Foundation Cluster File System High Availability Installation
Guide for IMF-aware agents in CFS environments.
The architecture uses an IMF daemon (IMFD) that collects notifications from the
user space notification providers (USNPs) and passes the notifications to the AMF
driver, which in turn passes these on to the appropriate agent. IMFD starts on the
first registration with IMF by an agent that requires Open IMF.
The Open IMF architecture provides the following benefits:
■ IMF can group events of different types under the same VCS resource and is
the central notification provider for kernel space events and user space events.
■ More agents can become IMF-aware by leveraging the notifications that are
available only from user space.
■ Agents can get notifications from IMF without having to interact with USNPs.
For example, Open IMF enables the AMF driver to get notifications from vxnotify,
the notification provider for Veritas Volume Manager. The AMF driver passes these
notifications on to the DiskGroup agent. For more information on the DiskGroup
agent, see the Cluster Server Bundled Agents Reference Guide.
Agent classifications
The different kinds of agents that work with VCS include bundled agents, enterprise
agents, and custom agents.
The engine uses agents to monitor and manage resources. It collects information
about resource states from the agents on the local system and forwards it to all
cluster members.
The local engine also receives information from the other cluster members to update
its view of the cluster. HAD operates as a replicated state machine (RSM). The
engine that runs on each node has a completely synchronized view of the resource
status on each node. Each instance of HAD follows the same code path for corrective
action, as required.
The RSM is maintained through the use of a purpose-built communications package.
The communications package consists of the protocols Low Latency Transport
(LLT) and Group Membership Services and Atomic Broadcast (GAB).
See “About inter-system cluster communications” on page 215.
The hashadow process monitors HAD and restarts it when required.
■ Cluster Communications
GAB’s second function is reliable cluster communications. GAB provides
guaranteed delivery of point-to-point and broadcast messages to all nodes. The
Veritas High Availability Engine uses a private IOCTL (provided by GAB) to tell
GAB that it is alive.
performance and fault resilience. If a link fails, traffic is redirected to the remaining
links.
■ Heartbeat
LLT is responsible for sending and receiving heartbeat traffic over network links.
The Group Membership Services function of GAB uses this heartbeat to
determine cluster membership.
In secure mode:
■ VCS uses platform-based authentication.
■ VCS does not store user passwords.
■ All VCS users are system and domain users and are configured using
fully-qualified user names. For example, administrator@vcsdomain. VCS
provides a single sign-on mechanism, so authenticated users do not need to
sign on each time to connect to a cluster.
For secure communication, VCS components acquire credentials from the
authentication broker that is configured on the local system. In VCS 6.0 and later,
a root and authentication broker is automatically deployed on each node when a
secure cluster is configured. The acquired certificate is used during authentication
and is presented to clients for the SSL handshake.
VCS and its components specify the account name and the domain in the following
format:
■ HAD Account
name = HAD
domain = VCS_SERVICES@Cluster UUID
■ CmdServer
name = CMDSERVER
domain = VCS_SERVICES@Cluster UUID
For instructions on how to set up Security Services while setting up the cluster, see
the Cluster Server installation documentation.
See “Enabling and disabling secure mode for the cluster” on page 161.
VCS components Description
Veritas InfoScale Operations Manager A Web-based graphical user interface for monitoring and administering
the cluster.
Install Veritas InfoScale Operations Manager on a management
server outside the cluster to manage multiple clusters.
VCS command-line interface (CLI) The VCS command-line interface provides a comprehensive set of
commands for managing and administering the cluster.
See “About administering VCS from the command line” on page 87.
Figure 1-5 [Figure: dependency graph for the NFS service group - the upper NFSRestart resource at the top; below it the IP address, the Share, the lower NFSRestart resource, and the Mount and NFS resources; the DiskGroup and NIC resources at the bottom]
VCS starts the agents for DiskGroup, Mount, Share, NFS, NIC, IP, and NFSRestart
on all systems that are configured to run NFS_Group.
The resource dependencies are configured as follows:
■ The /home file system (configured as a Mount resource), requires that the disk
group (configured as a DiskGroup resource) is online before you mount.
■ The lower NFSRestart resource requires that the file system is mounted and
that the NFS daemons (NFS) are running.
■ The NFS export of the home file system (Share) requires that the lower
NFSRestart resource is up.
■ The high availability IP address, nfs_IP, requires that the file system (Share) is
shared and that the network interface (NIC) is up.
■ The upper NFSRestart resource requires that the IP address is up.
■ The NFS daemons and the disk group have no child dependencies, so they can
start in parallel.
■ The NIC resource is a persistent resource and does not require starting.
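The dependencies listed above map directly to requires statements in main.cf. The following is a sketch with illustrative resource names; the guide's actual NFS_Group sample may use different ones:

```
// Mount /home only after the disk group is imported.
home_mount requires shared_dg
// The lower NFSRestart resource needs the mounted file system
// and the running NFS daemons.
nfsrestart_lower requires home_mount
nfsrestart_lower requires nfs_daemons
// Export the file system, then bring up the HA IP address.
home_share requires nfsrestart_lower
nfs_ip requires home_share
nfs_ip requires public_nic
// The upper NFSRestart resource comes up last.
nfsrestart_upper requires nfs_ip
```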
You can configure the service group to start automatically on either node in the
preceding example. It then can move or fail over to the second node on command
or automatically if the first node fails. On failover or relocation, to make the resources
offline on the first node, VCS begins at the top of the graph. When it starts them on
the second node, it begins at the bottom.
Chapter 2
About cluster topologies
This chapter includes the following topics:
[Figure: an asymmetric failover configuration - the application moves from the failed primary server to the redundant stand-by server]
This configuration is the simplest and most reliable. The redundant server is on
stand-by with full performance capability. If other applications are running, they
present no compatibility issues.
[Figure: a symmetric failover configuration - Application1 and Application2 each run on their own server, and each server also acts as the redundant server for the other]
Most shortcomings of early N-to-1 cluster configurations are caused by the limitations
of storage architecture. Typically, it is impossible to connect more than two hosts
to a storage array without complex cabling schemes and their inherent reliability
problems, or expensive arrays with multiple controller ports.
[Figure: an N-to-N configuration - service groups (SG) distributed across all cluster nodes; when a node fails, its service groups are redistributed among the surviving nodes]
SG = Service Group
If any node fails, each instance is started on a different node. This action ensures
that no single node becomes overloaded. This configuration is a logical evolution
of N + 1: it provides cluster standby capacity instead of a standby server.
N-to-N configurations require careful testing to ensure that all applications are
compatible. You must specify a list of systems on which a service group is allowed
to run in the event of a failure.
[Figure: service groups (SG) distributed across cluster nodes at Site A and Site B]
SG = Service Group
A campus cluster requires two independent network links for heartbeat, two storage
arrays that each provide highly available disks, and public network connectivity between
buildings on the same IP subnet. If the campus cluster setup resides on different subnets,
with one for each site, then use the VCS DNS agent to handle the network changes,
or make the DNS changes manually.
See “ How VCS campus clusters work” on page 530.
[Figure: a replicated data cluster - a service group fails over between nodes while replication keeps the data synchronized]
You can also configure replicated data clusters without the ability to fail over locally,
but this configuration is not recommended.
See “ How VCS replicated data clusters work” on page 521.
[Figure: a global cluster - when the Oracle service group fails over from Cluster A to Cluster B, public clients are redirected across the network to Cluster B; each cluster has its own separate storage, kept synchronized by replicated data]
Additional files that are similar to types.cf may be present if you enabled agents.
OracleTypes.cf, by default, is located at /etc/VRTSagents/ha/conf/Oracle/.
In a VCS cluster, the first system to be brought online reads the configuration file
and creates an internal (in-memory) representation of the configuration. Systems
that are brought online after the first system derive their information from systems
that are in the cluster.
You must stop the cluster if you need to modify the files manually. Changes made
by editing the configuration files take effect when the cluster is restarted. The node
where you made the changes should be the first node to be brought back online.
include "types.cf"
Cluster definition Defines the attributes of the cluster, the cluster name and the
names of the cluster users.
cluster demo (
UserNames = { admin = cDRpdxPmHzpS }
)
System definition Lists the systems designated as part of the cluster. The
system names must match the name returned by the
command uname -a.
system Server1
system Server2
Service group definition Service group definitions in main.cf comprise the attributes
of a particular service group.
group NFS_group1 (
SystemList = { Server1=0, Server2=1 }
AutoStartList = { Server1 }
)
DiskGroup DG_shared1 (
DiskGroup = shared1
)
Service group dependency To configure a service group dependency, place the keyword
clause requires in the service group declaration of the main.cf file.
Position the dependency clause before the resource
dependency specifications and after the resource declarations.
requires group_x <dependency category> <dependency location> <dependency rigidity>
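Filled in, a dependency clause might read as follows. This is a sketch: the keyword values (online, local, firm) are illustrative choices from the category, location, and rigidity options described in the service group dependencies chapter on page 381:

```
// group_y comes online only where group_x is online on the same
// system, and faults of group_x are propagated to group_y.
requires group group_x online local firm
```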
Note: Sample configurations for components of global clusters are listed separately.
See “ VCS global clusters: The building blocks” on page 463.
You can assign system priority explicitly in the SystemList attribute by assigning
numeric values to each system name. For example:
SystemList = { SystemA = 0, SystemB = 1, SystemC = 2 }
If you do not assign numeric priority values, VCS assigns a priority to the unnumbered
system by adding 1 to the priority of the preceding system. For example, if the
SystemList is defined as SystemList = { SystemA, SystemB = 2, SystemC }, VCS
assigns the values SystemA = 0, SystemB = 2, SystemC = 3.
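The numbering rule can be sketched in shell. The entries below are chosen to reproduce the values the text cites (SystemA = 0, SystemB = 2, SystemC = 3); the loop itself is illustrative, not a VCS utility:

```shell
# Assign SystemList priorities: an entry with an explicit "name=N"
# keeps N; an unnumbered entry gets the previous priority + 1
# (the first unnumbered entry therefore gets 0).
result=""
prev=-1
for entry in SystemA SystemB=2 SystemC; do
    name=${entry%%=*}
    case $entry in
        *=*) prio=${entry#*=} ;;
        *)   prio=$((prev + 1)) ;;
    esac
    prev=$prio
    result="$result$name=$prio "
done
echo "$result"    # prints SystemA=0 SystemB=2 SystemC=3
```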
Note that a duplicate numeric priority value may be assigned in some situations:
Initial configuration
When VCS is installed, a basic main.cf configuration file is created with the cluster
name, systems in the cluster, and a Cluster Manager user named admin with the
password password.
The following is an example of the main.cf for cluster demo and systems SystemA
and SystemB.
include "types.cf"
cluster demo (
UserNames = { admin = cDRpdxPmHzpS }
)
system SystemA (
)
system SystemB (
)
include "applicationtypes.cf"
include "listofsystems.cf"
include "applicationgroup.cf"
If you include other .cf files in main.cf, the following considerations apply:
■ Resource type definitions must appear before the definitions of any groups that
use the resource types.
In the following example, the applicationgroup.cf file includes the service group
definition for an application. The service group includes resources whose
resource types are defined in the file applicationtypes.cf. In this situation, the
applicationtypes.cf file must appear first in the main.cf file.
For example:
include "applicationtypes.cf"
include "applicationgroup.cf"
■ If you define heartbeats outside of the main.cf file and include the heartbeat
definition file, saving the main.cf file results in the heartbeat definitions getting
added directly to the main.cf file.
parameters are passed to the agents for starting, stopping, and monitoring
resources.
The following example illustrates a DiskGroup resource type definition for AIX:
type DiskGroup (
static keylist SupportedActions = {
"license.vfd", "disk.vfd", "udid.vfd",
"verifyplex.vfd", checkudid, numdisks,
campusplex, volinuse, joindg, splitdg,
getvxvminfo }
static int NumThreads = 1
static int OnlineRetryLimit = 1
static str ArgList[] = { DiskGroup,
StartVolumes, StopVolumes, MonitorOnly,
MonitorReservation, tempUseFence, PanicSystemOnDGLoss,
UmountVolumes, Reservation }
str DiskGroup
boolean StartVolumes = 1
boolean StopVolumes = 1
boolean MonitorReservation = 0
temp str tempUseFence = INVALID
boolean PanicSystemOnDGLoss = 0
int UmountVolumes
str Reservation = ClusterDefault
)
For another example, review the following main.cf and types.cf files that represent
an IP resource:
■ The high-availability address is configured on the interface, which is defined by
the Device attribute.
■ The IP address is enclosed in double quotes because the string contains periods.
See “About attribute data types” on page 66.
■ The VCS engine passes the identical arguments to the IP agent for online,
offline, clean, and monitor. It is up to the agent to use the arguments that it
requires. All resource names must be unique in a VCS cluster.
main.cf for AIX:
IP nfs_ip1 (
Device = en0
Address = "192.168.1.201"
NetMask = "255.255.255.0"
)
type IP (
static keylist RegList = { NetMask }
static keylist SupportedActions = { "device.vfd", "route.vfd" }
static str ArgList[] = { Device, Address, NetMask, Options,
RouteOptions, PrefixLen }
static int ContainerOpts{} = { RunInContainer=0, PassCInfo=1 }
str Device
str Address
str NetMask
str Options
str RouteOptions
int PrefixLen
)
For example, a string that defines a network interface, such as en0, does
not require quotes because it contains only letters and numbers. However,
a string that defines an IP address contains periods and requires quotes,
such as: "192.168.100.1".
Boolean A boolean is an integer, the possible values of which are 0 (false) and
1 (true).
Scalar A scalar has only one value. This is the default dimension.
Keylist A keylist is an unordered list of strings, and each string is unique within
the list. Use a comma (,) or a semi-colon (;) to separate values.
For example, to associate the average time and timestamp values with
an attribute:
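The quoting rule for string values can be sketched as a small shell helper. The function name is illustrative; VCS itself provides no such command, and the check implements exactly the rule the text states (quotes are needed when a value contains anything other than letters and digits):

```shell
# Return success (0) if a main.cf string value needs double quotes,
# that is, if it contains any character other than a letter or digit.
needs_quotes() {
    case $1 in
        *[!A-Za-z0-9]*) return 0 ;;
        *)              return 1 ;;
    esac
}

needs_quotes en0           && echo quoted || echo bare    # prints bare
needs_quotes 192.168.100.1 && echo quoted || echo bare    # prints quoted
```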
■ Type-independent
Attributes that all agents (or resource types) understand. Examples:
RestartLimit and MonitorInterval; these can be set for any resource
type.
Typically, these attributes are set for all resources of a specific type.
For example, setting MonitorInterval for the IP resource type affects
all IP resources.
■ Type-dependent
Attributes that apply to a particular resource type. These attributes
appear in the type definition file (types.cf) for the agent.
Example: The Address attribute applies only to the IP resource type.
Attributes defined in the file types.cf apply to all resources of a
particular resource type. Defining these attributes in the main.cf file
overrides the values in the types.cf file for a specific resource.
For example, if you set StartVolumes = 1 for the DiskGroup types.cf,
it sets StartVolumes to True for all DiskGroup resources, by default.
If you set the value in main.cf , it overrides the value on a
per-resource basis.
■ Static
These attributes apply for every resource of a particular type. These
attributes are prefixed with the term static and are not included in
the resource’s argument list. You can override some static attributes
and assign them resource-specific values.
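For instance, using the DiskGroup attributes shown earlier in this chapter, a per-resource override of a type-level value might look like this in main.cf. This is a sketch; the resource name is illustrative:

```
DiskGroup DG_shared1 (
    DiskGroup = shared1
    StartVolumes = 0    // overrides the types.cf default of
                        // StartVolumes = 1 for this resource only
)
```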
MultiNICA mnic (
Device@sys1 = { en0 = "166.98.16.103", en3 = "166.98.16.103" }
Device@sys2 = { en0 = "166.98.16.104", en3 = "166.98.16.104" }
NetMask = "255.255.255.0"
Options = "mtu m"
RouteOptions@sys1 = "-net 192.100.201.0 192.100.13.7"
RouteOptions@sys2 = "-net 192.100.201.1 192.100.13.8"
)
■ Attribute names
VCS passwords are restricted to a maximum of 255 characters.
PERL5LIB Root directory for Perl executables. (applicable only for Windows)
Default: /etc/VRTSvcs
Note: You cannot modify this variable.
VCS_CONN_HANDSHAKE_TIMEOUT Timeout in seconds after which the VCS engine (HAD) closes the client
connection that has not completed the handshake.
VCS_DEBUG_LOG_TAGS Enables debug logs for the VCS engine, VCS agents, and HA commands.
You must set VCS_DEBUG_LOG_TAGS before you start HAD or before
you execute HA commands.
See “Enabling debug logs for the VCS engine” on page 586.
Default: Fully qualified host name of the remote host as defined in the
VCS_HOST environment variable or in the .vcshost file.
VCS_DOMAINTYPE The type of security domain, such as unixpwd, nt, nis, nisplus, ldap, or vx.
Default: unixpwd
VCS_DIAG Directory where VCS dumps HAD cores and FFDC data.
VCS_ENABLE_LDF Designates whether or not log data files (LDFs) are generated. If set to
1, LDFs are generated. If set to 0, they are not.
Default: /opt/VRTSvcs
Note: You cannot modify this variable.
Default: h
VCS_GAB_TIMEOUT_SECS Timeout in seconds for HAD to send heartbeats to GAB under normal
system load conditions.
Default: 30 seconds
VCS_GAB_PEAKLOAD_TIMEOUT_SECS Timeout in seconds for HAD to send heartbeats to GAB under peak system
load conditions.
Default: 30 seconds
Default: SYSLOG
VCS_HAD_RESTART_TIMEOUT Set this variable to designate the amount of time the hashadow process
waits (sleep time) before restarting HAD.
Default: 0
Default: /var/VRTSvcs
Note: If this variable is added or modified, you must reboot the system
to apply the changes.
The value for this environment variable is defined in the service file at the
following location:
C:\Windows\System32\drivers\etc\services
If a new port number is not specified, the VCS engine starts with port
14141.
To change the default port number, you must create a new entry in the
service file at C:\Windows\System32\drivers\etc\services.
For example, if you want the external communication port for VCS to be
set to 14555, then you must create the following entries in the services
file:
vcs 14555/tcp
vcs 14555/udp
Note: The cluster-level attribute OpenExternalCommunicationPort
determines whether the port is open or not.
Default: /var/VRTSvcs
This directory is created in the tmp directory under the following conditions:
To set a variable, use the syntax appropriate for the shell in which VCS starts.
Typically, VCS starts in /bin/sh. For example, define the variables as:
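A minimal sketch in Bourne-shell syntax, using a variable from the tables above; the value 35 is arbitrary and only illustrative:

```shell
# /bin/sh in its oldest form has no "export VAR=value" shorthand:
# assign first, then export.
VCS_GAB_TIMEOUT_SECS=35
export VCS_GAB_TIMEOUT_SECS
```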
Note: The startup and shutdown of AMF, LLT, GAB, VxFEN, and VCS engine are
inter-dependent. For a clean startup or shutdown of VCS, you must either enable
or disable the startup and shutdown modes for all these modules.
In a single-node cluster, you can disable the start and stop environment variables
for LLT, GAB, and VxFEN if you have not configured these kernel modules.
Table 3-3 describes the start and stop variables for VCS.
AMF_START Startup mode for the AMF driver. By default, the AMF driver is
enabled to start up after a system reboot.
/etc/default/amf
Default: 1
AMF_STOP Shutdown mode for the AMF driver. By default, the AMF driver is
enabled to stop during a system shutdown.
/etc/default/amf
Default: 1
LLT_START Startup mode for LLT. By default, LLT is enabled to start up after a
system reboot.
/etc/default/llt
Default: 1
LLT_STOP Shutdown mode for LLT. By default, LLT is enabled to stop during
a system shutdown.
/etc/default/llt
Default: 1
GAB_START Startup mode for GAB. By default, GAB is enabled to start up after
a system reboot.
/etc/default/gab
Default: 1
GAB_STOP Shutdown mode for GAB. By default, GAB is enabled to stop during
a system shutdown.
/etc/default/gab
Default: 1
VXFEN_START Startup mode for VxFEN. By default, VxFEN is enabled to start up after
a system reboot.
/etc/default/vxfen
Default: 1
VXFEN_STOP Shutdown mode for VxFEN. By default, VxFEN is enabled to stop during
a system shutdown.
/etc/default/vxfen
Default: 1
VCS_START Startup mode for VCS engine. By default, VCS engine is enabled to
start up after a system reboot.
/etc/default/vcs
Default: 1
VCS_STOP Shutdown mode for VCS engine. By default, VCS engine is enabled
to stop during a system shutdown.
/etc/default/vcs
Default: 1
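Putting Table 3-3 to use: to keep the VCS engine from starting automatically after a reboot while still stopping it cleanly at system shutdown, the /etc/default/vcs file would contain the following (a sketch; 1 is the default for both variables):

```
VCS_START=0
VCS_STOP=1
```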
Section 2
Administration - Putting VCS to work
■ User privileges for OS user groups for clusters running in secure mode
Cluster administrator Cluster administrators are assigned full privileges. They can make
the configuration read-write, create and delete groups, set group
dependencies, add and delete systems, and add, modify, and delete
users. All group and resource operations are allowed. Users with Cluster
administrator privileges can also change other users’ privileges and
passwords.
Cluster operator Cluster operators can perform all cluster-level, group-level, and
resource-level operations, and can modify the user’s own password
and bring service groups online.
Note: Cluster operators can change their own passwords only if
configuration is in read or write mode. Cluster administrators can change
the configuration to the read or write mode.
Users with this role can be assigned group administrator privileges for
specific service groups.
Group operator Group operators can bring service groups and resources online and
take them offline. Users can also temporarily freeze or unfreeze service
groups.
Cluster guest Cluster guests have read-only access to the cluster, which means that
they can view the configuration, but cannot change it. They can modify
their own passwords only if the configuration is in read or write mode.
They cannot add or update users. Additionally, users with this privilege
can be assigned group administrator or group operator privileges for
specific service groups.
Note: By default, newly created users are assigned cluster guest
permissions.
Group guest Group guests have read-only access to the service group, which means
that they can view the configuration, but cannot change it. The group
guest role is available for clusters running in secure mode.
The roles form a hierarchy in which each role includes the privileges of the
roles below it: Cluster Administrator includes privileges for Cluster Operator,
which includes privileges for Cluster Guest; Group Administrator includes
privileges for Group Operator, which includes privileges for Group Guest.
For example, cluster administrator includes privileges for group administrator, which
includes privileges for group operator.
If you do not have root privileges and the external communication port for VCS is
not open, you cannot run CLI commands. If the port is open, VCS prompts for your
VCS user name and password when you run haxxx commands.
You can use the halogin command to save the authentication information so that
you do not have to enter your credentials every time you run a VCS command.
See “Logging on to VCS” on page 106.
See “Cluster attributes” on page 722.
For example, you may decide that all users that are part of the OS administrators
group get administrative privileges to the cluster or to a specific service group.
Assigning a VCS role to a user group assigns the same VCS privileges to all
members of the user group, unless you specifically exclude individual users from
those privileges.
When you add a user to an OS user group, the user inherits VCS privileges assigned
to the user group.
Assigning VCS privileges to an OS user group involves adding the user group in
one (or more) of the following attributes:
■ AdministratorGroups—for a cluster or for a service group.
■ OperatorGroups—for a cluster or for a service group.
For example, user Tom belongs to an OS user group: OSUserGroup1.
Table 4-3 shows how to assign VCS privileges. FQDN denotes the fully qualified
domain name in these examples.
■ Administering LLT
■ Starting VCS
■ Stopping VCS
■ Logging on to VCS
■ Administering agents
■ Administering systems
Administering the cluster from the command line 87
About administering VCS from the command line
… Used to specify that the argument can have several values. For example:
hagrp -modify group attribute value … [-sys system]
See “About administering VCS from the command line” on page 87.
# uname -n
The entries in this file must correspond to those in the files /etc/llthosts and
/etc/llttab.
Note: VCS must be in read-write mode before you can change the configuration.
Note: Do not use the vcsencrypt utility when you enter passwords from the Java
console.
To encrypt a password
1 Run the utility from the command line.
# vcsencrypt -vcs
2 The utility prompts you to enter the password twice. Enter the password and
press Return.
Enter Password:
Enter Again:
3 The utility encrypts the password and displays the encrypted password. Use
this password to edit the VCS configuration file main.cf.
Note: Do not use the vcsencrypt utility when you enter passwords from the Java
console.
To encrypt an agent password
1 Run the utility from the command line.
# vcsencrypt -agent
2 The utility prompts you to enter the password twice. Enter the password and
press Return.
3 The utility encrypts the password and displays the encrypted password. Use
this password to edit the VCS configuration file main.cf.
Note: You must be a root user or must have administrative privileges to generate
the security keys.
To generate a security key, perform the following steps on any cluster node:
1 Make the VCS configuration writable.
# haconf -makerw
# vcsencrypt -gensecinfo
3 When prompted, enter a pass phrase of at least eight characters.
The utility generates the security key and displays the following message:
# haconf -makerw
Within 60 days of choosing this option, you must install a valid license key
corresponding to the license level entitled or continue with keyless licensing by
managing the server or cluster with a management server.
# cd /opt/VRTS/bin
# ./vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX-XXX
You must update licensing information on all nodes before proceeding to the
next step.
3 Update cluster-level licensing information:
# haclus -updatelic
# vxkeyless displayall
Administering LLT
You can use the LLT commands such as lltdump and lltconfig to administer
the LLT links. See the corresponding LLT manual pages for more information on
the commands.
See “About Low Latency Transport (LLT)” on page 43.
See “Displaying the cluster details and LLT version for LLT links” on page 95.
See “Adding and removing LLT links” on page 95.
See “Configuring aggregated interfaces under LLT” on page 97.
See “Configuring destination-based load balancing for LLT” on page 99.
Displaying the cluster details and LLT version for LLT links
You can use the lltdump command to display the LLT version for a specific LLT
link. You can also display the cluster ID and node ID details.
See the lltdump(1M) manual page for more details.
To display the cluster details and LLT version for LLT links
◆ Run the following command to display the details:
# /opt/VRTSllt/lltdump -D -f link
For example, if en3 is connected to sys1, then the command displays a list of
all cluster IDs and node IDs present on the network link en3.
# /opt/VRTSllt/lltdump -D -f /dev/dlpi/en:3
lltdump : Configuration:
device : en3
sap : 0xcafe
promisc sap : 0
promisc mac : 0
cidsnoop : 1
=== Listening for LLT packets ===
cid nid vmaj vmin
3456 1 5 0
3456 3 5 0
83 0 4 0
27 1 3 7
3456 2 5 0
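The sample output above shows three different cluster IDs sharing the same network link. A quick way to tally how many nodes each cluster advertises on the link is to post-process the lltdump listing; the data here is the sample output reproduced inline, not live lltdump output.

```shell
# Count nodes per cluster ID from lltdump-style "cid nid vmaj vmin" rows.
printf '3456 1 5 0\n3456 3 5 0\n83 0 4 0\n27 1 3 7\n3456 2 5 0\n' |
awk '{n[$1]++} END {for (c in n) print c, n[c]}' | sort -n
```

On a live system you would pipe the output of /opt/VRTSllt/lltdump -D -f link through the same awk filter.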
Note: When you add or remove LLT links, you need not shut down GAB or the high
availability daemon, had. Your changes take effect immediately, but are lost on the
next restart. For changes to persist, you must also update the /etc/llttab file.
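To make a dynamically added link survive a restart, the note above says to update /etc/llttab. A sketch of that edit is shown below on a temporary copy of the file; the en4 device name and the link-line fields are illustrative (check the llttab format on your system before editing the real file).

```shell
# Build a minimal stand-in for /etc/llttab (cluster ID matches the sample above).
tab=$(mktemp)
printf 'set-node sys1\nset-cluster 3456\nlink en1 /dev/dlpi/en:1 - ether - -\n' > "$tab"

# Append a link line for the newly added interface so it persists across reboots.
echo 'link en4 /dev/dlpi/en:4 - ether - -' >> "$tab"

grep -c '^link' "$tab"
```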
Where:
■ For link type ether, the device path is followed by a colon (:) and an
integer which specifies the unit or PPA used by LLT to attach. For link types
udp and udp6, the device is the udp or udp6 device path respectively.
■ bcast: broadcast address for link types udp and rdma.
■ SAP: SAP to bind on the network links for link type ether.
For example:
■ For ether link type:
Note: If you want the addition of LLT links to be persistent after reboot, then
you must edit /etc/llttab with the LLT entries.
# lltconfig -u devtag
# /etc/init.d/llt.rc stop
2 Add the following entry to the /etc/llttab file to configure an aggregated interface.
If the link command is valid for all systems, specify a dash (-).
Default is 0xcafe.
3 Restart LLT for the changes to take effect. Restart the other dependent modules
that you stopped in step 1.
# /etc/init.d/llt.rc start
Default is 0xcafe.
# lltconfig -F linkburst:0
To start the AMF kernel driver, set the value of the AMF_START variable to 1
in the file /etc/default/amf, and then run:
# /etc/init.d/amf.rc start
To stop the AMF kernel driver, set the value of the AMF_STOP variable to 1 in
the file /etc/default/amf, and then run:
# /etc/init.d/amf.rc stop
2 If you want minimum downtime of the agents, use the following steps to unload
the AMF kernel driver:
■ Run the following command to disable the AMF driver even if agents are
still registered with it.
# amfconfig -Uof
Starting VCS
You can start VCS using one of the following approaches:
■ Using the installvcs -start command
■ Manually start VCS on each node
To start VCS
1 To start VCS using the installvcs program, perform the following steps on any
node in the cluster:
■ Log in as root user.
■ Run the following command:
# /opt/VRTS/install/installvcs -start
2 To start VCS manually, run the following commands on each node in the cluster:
■ Log in as root user.
■ Start LLT and GAB. Start I/O fencing if you have configured it. Skip this
step if you want to start VCS on a single-node cluster.
Optionally, you can start AMF if you want to enable intelligent monitoring.
■ Start the VCS engine:
# hastart
On a single-node cluster, start the VCS engine with:
# hastart -onenode
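The manual start order from step 2 can be summarized as the sequence below. The llt.rc and amf.rc script paths appear elsewhere in this chapter; the gab.rc and vxfen.rc names follow the same pattern and are assumptions here. Since the real scripts require root and an installed cluster, this sketch only prints the sequence.

```shell
# Print the per-node start order: LLT first, then GAB, then fencing, then HAD.
for step in '/etc/init.d/llt.rc start' '/etc/init.d/gab.rc start' \
            '/etc/init.d/vxfen.rc start' 'hastart'; do
  echo "$step"
done
```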
See “Starting the VCS engine (HAD) and related processes” on page 102.
# gabconfig -a
Make sure that port a and port h memberships exist in the output for all nodes
in the cluster. If you configured I/O fencing, port b membership must also exist.
When VCS is started, it checks the state of its local configuration file and registers
with GAB for cluster membership. If the local configuration is valid, and if no other
system is running VCS, it builds its state from the local configuration file and enters
the RUNNING state.
If the configuration on all nodes is invalid, the VCS engine waits for manual
intervention, or for VCS to be started on a system that has a valid configuration.
See “System states” on page 651.
To start the VCS engine
◆ Run the following command:
# hastart
To start the VCS engine when all systems are in the ADMIN_WAIT state
◆ Run the following command from any system in the cluster to force VCS to
use the configuration file from the system specified by the variable system:
# hasys -force system
Stopping VCS
You can stop VCS using one of the following approaches:
1 To stop VCS using the installvcs program, perform the following steps on any
node in the cluster:
■ Log in as root user.
■ Run the following command:
# /opt/VRTS/install/installvcs -stop
2 To stop VCS manually, run the following commands on each node in the cluster:
■ Log in as root user.
■ Take the VCS service groups offline and verify that the groups are offline.
# hastop -local
See “Stopping the VCS engine and related processes” on page 104.
■ Verify that the VCS engine port h is closed.
# gabconfig -a
■ Stop I/O fencing if you have configured it. Stop GAB and then LLT.
Option Description
-all Stops HAD on all systems in the cluster and takes all service groups
offline.
-local Stops HAD on the system on which you typed the command.
-force Allows HAD to be stopped without taking service groups offline on the
system. The value of the EngineShutdown attribute does not influence
the behavior of the -force option.
-evacuate When combined with -local or -sys, migrates the system’s active
service groups to another system in the cluster, before the system is
stopped.
-noautodisable Ensures the service groups that can run on the node where the hastop
command was issued are not autodisabled. This option can be used
with -evacuate but not with -force.
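The option table above maps naturally to a small helper. The function and intent names below are illustrative (not part of VCS); the option strings it returns are the ones documented in the table.

```shell
# Map a shutdown intent to the matching hastop options.
hastop_args() {
  case "$1" in
    all)      printf '%s\n' "-all" ;;               # stop HAD everywhere, groups offline
    local)    printf '%s\n' "-local" ;;             # stop HAD on this system only
    evacuate) printf '%s\n' "-local -evacuate" ;;   # migrate active groups first
    upgrade)  printf '%s\n' "-local -force" ;;      # keep groups online while HAD restarts
  esac
}

hastop_args evacuate
```

On a real node you would pass the result to hastop, for example: hastop $(hastop_args evacuate).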
and the OnGrpCnt attributes are non-zero. VCS continues to wait for the service
groups to go offline before it shuts down.
See “Troubleshooting resources” on page 610.
About stopping VCS with options other than the -force option
When VCS is stopped by options other than -force on a system with online service
groups, the groups that run on the system are taken offline and remain offline. VCS
indicates this by setting the attribute IntentOnline to 0. Use the option -force to
enable service groups to continue being online while the VCS engine (HAD) is
brought down and restarted. The value of the IntentOnline attribute remains
unchanged after the VCS engine restarts.
Note: VCS does not consider this attribute when the hastop is issued with the
following options: -force or -local -evacuate -noautodisable.
Configure one of the following values for the attribute depending on the desired
functionality for the hastop command:
Table 5-3 shows the engine shutdown values for the attribute.
EngineShutdown value Description
DisableClusStop Do not process the hastop -all command; process all other hastop
commands.
PromptClusStop Prompt for user confirmation before you run the hastop -all
command; process all other hastop commands.
PromptLocal Prompt for user confirmation before you run the hastop -local
command; process all other hastop commands except hastop -sys
command.
PromptAlways Prompt for user confirmation before you run any hastop command.
Logging on to VCS
VCS prompts for user name and password information when non-root users run
haxxx commands. Use the halogin command to save the authentication information
so that you do not have to enter your credentials every time you run a VCS
command. Note that you may need specific privileges to run VCS commands.
When you run the halogin command, VCS stores encrypted authentication
information in the user’s home directory. For clusters that run in secure mode, the
command also sets up a trust relationship and retrieves a certificate from an
authentication broker.
If you run the command for different hosts, VCS stores authentication information
for each host. After you run the command, VCS stores the information until you end
the session.
For clusters that run in secure mode, you also can generate credentials for VCS to
store the information for 24 hours or for eight years and thus configure VCS to not
prompt for passwords when you run VCS commands as non-root users.
2 Define the node on which the VCS commands will be run. Set the VCS_HOST
environment variable to the name of the node. To run commands in a remote
cluster, you set the variable to the virtual IP address that was configured in the
ClusterService group.
3 Log on to VCS:
# halogin vcs_username vcs_password
To end a session, run the following command:
# halogin -endallsessions
After you end a session, VCS prompts you for credentials every time you run
a VCS command.
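A sketch of the session setup described above, for a non-root user. The node name sys1 is illustrative, and halogin itself needs a running cluster, so it is shown commented out.

```shell
# Point subsequent ha* commands at a specific node (or the ClusterService
# virtual IP, for a remote cluster).
VCS_HOST=sys1; export VCS_HOST

# halogin vcs_username        # would prompt for the VCS password (requires VCS)

echo "$VCS_HOST"
```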
# hasys -state
Verifying a configuration
Use hacf to verify (check syntax of) the main.cf and the type definition file, types.cf.
VCS does not run if hacf detects errors in the configuration.
To verify a configuration
◆ Run the following command:
# hacf -verify /etc/VRTSvcs/conf/config
# haconf -makerw
Saving a configuration
When you save a configuration, VCS renames the file main.cf.autobackup to main.cf.
VCS also saves your running configuration to the file main.cf.autobackup.
If you have not configured the BackupInterval attribute, VCS saves the running
configuration.
See “Scheduling automatic backups for VCS configuration files” on page 109.
To save a configuration
◆ Run the following command:
# haconf -dump -makero
Specify the user name and the domain name to add a user on multiple nodes in
the cluster. This option requires multiple entries for a user, one for each node.
You cannot assign or change passwords for users when VCS is running in secure
mode.
The commands to add, modify, and delete a user must be executed only as root
or administrator and only if the VCS configuration is in read/write mode.
See “Setting the configuration to read or write” on page 110.
Note: You must add users to the VCS configuration to monitor and administer VCS
from the graphical user interface Cluster Manager.
Adding a user
Users in the category Cluster Guest cannot add users.
To add a user
1 Set the configuration to read/write mode:
# haconf -makerw
For example,
Modifying a user
Users in the category Cluster Guest cannot modify users.
You cannot modify a VCS user in clusters that run in secure mode.
To modify a user
1 Set the configuration to read or write mode:
# haconf -makerw
Deleting a user
You can delete a user from the VCS configuration.
To delete a user
1 Set the configuration to read or write mode:
# haconf -makerw
2 For users with Administrator and Operator access, remove their privileges:
Displaying a user
This topic describes how to display a list of users and their privileges.
To display a list of users
◆ Type the following command:
# hauser -list
# hauser -display
the VCS configuration or system states can be executed by all users: you do not
need root privileges.
The policy values can be any one of the following: Priority, RoundRobin,
Load, or BiggestAvailable.
Note: You cannot use the -forecast option when the service group state is
in transition. For example, VCS rejects the command if the service group is in
transition to an online state or to an offline state.
The -forecast option is supported only for failover service groups. In case of
offline failover service groups, VCS selects the target system based on the
service group’s failover policy.
The BiggestAvailable policy is applicable only when the service group attribute
Load is defined and cluster attribute Statistics is enabled.
The actual service group FailOverPolicy can be configured as any policy, but
the forecast is done as though FailOverPolicy is set to BiggestAvailable.
Querying resources
This topic describes how to perform a query on resources.
To display a resource’s dependencies
◆ Enter the following command:
Note: If you use this command to query an atleast resource dependency, the
dependency is displayed as a normal 1-to-1 dependency.
# hatype -list
Querying agents
Table 5-4 lists the run-time status for the agents that the haagent -display
command displays.
Faults Indicates the number of agent faults within the last hour and the time
the faults began.
Querying systems
This topic describes how to perform a query on systems.
To display a list of systems in the cluster
◆ Type the following command:
# hasys -list
If you do not specify a system, the command displays attribute names and
values for all systems.
To display the value of a specific system attribute
◆ Type the following command:
The -util option is applicable only if you set the cluster attribute Statistics to
Enabled and define at least one key in the cluster attribute HostMeters.
The command also indicates whether the HostUtilization and HostAvailableForecast
values are stale.
Querying clusters
This topic describes how to perform a query on clusters.
# haclus -display
Querying status
This topic describes how to perform a query on status of service groups in the
cluster.
Note: Run the hastatus command with the -summary option to prevent a continuous
output of online state transitions. If the command is used without the option,
it repeatedly displays online state transitions until you interrupt it with
CTRL+C.
To display the status of all service groups in the cluster, including resources
◆ Type the following command:
# hastatus
If you do not specify a service group, the status of all service groups appears.
The -sound option enables a bell to ring each time a resource faults.
The -time option prints the system time at which the status was received.
To display the status of service groups and resources on specific systems
◆ Type the following command:
# hastatus -summary
# hamsg -help
# hamsg -list
The option -path specifies where hamsg looks for the specified LDF. If not
specified, hamsg looks for files in the default directory:
/var/VRTSvcs/ldf
To display specific LDF data
◆ Type the following command:
-any Specifies hamsg return messages that match any of the specified
query options.
-otype Specifies hamsg return messages that match the specified object
type
-oname Specifies hamsg return messages that match the specified object
name.
-path Specifies where hamsg looks for the specified LDF. If not specified,
hamsg looks for files in the default directory:
/var/VRTSvcs/ldf
Attribute=~Value (the value is the prefix of the attribute value; for example,
a query for the state of a resource =~FAULTED returns all resources whose state
begins with FAULTED.)
Multiple conditional statements can be used and imply AND logic.
You can only query attribute-value pairs that appear in the output of the command
hagrp -display.
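The prefix-match semantics of =~ can be emulated on sample hagrp -display-style rows; the group names and states below are fabricated sample data, not real VCS output.

```shell
# Keep only rows whose third column (the attribute value) begins with FAULTED,
# mirroring the "State =~ FAULTED" conditional described above.
printf 'grpA State FAULTED\ngrpB State ONLINE\ngrpC State FAULTED|PARTIAL\n' |
awk '$3 ~ /^FAULTED/ {print $1}'
```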
The variable service_group must be unique among all service groups defined
in the cluster.
This command initializes a service group that is ready to contain various
resources. To employ the group properly, you must populate its SystemList
attribute to define the systems on which the group may be brought online and
taken offline. (A system list is an association of names and integers that
represent priority values.)
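A SystemList is an association of system names and integer priorities, as described above. On a live cluster the commands would look like the following (group and system names illustrative): hagrp -add groupx, then hagrp -modify groupx SystemList sysa 0 sysb 1. The sketch below models the same name/priority pairing as shell data to show the association.

```shell
# Model a SystemList of two systems: sysa at priority 0, sysb at priority 1.
set -- sysa 0 sysb 1
while [ $# -gt 0 ]; do
  echo "$1 has priority $2"
  shift 2
done
```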
To delete a service group
◆ Type the following command:
Note that you cannot delete a service group until all of its resources are deleted.
You may also define a service group as parallel. To set the Parallel attribute
to 1, type the following command. (Note that the default for this attribute is 0,
which designates the service group as a failover group.):
You cannot modify this attribute if resources have already been added to the
service group.
You can modify the attributes SystemList, AutoStartList, and Parallel only by
using the command hagrp -modify. You cannot modify attributes created by
the system, such as the state of the service group.
You must take the service group offline on the system that is being modified.
When you add a system to a service group’s system list, the system must have
been previously added to the cluster. When you use the command line, you can
use the hasys -add command.
When you delete a system from a service group’s system list, the service group
must not be online on the system to be deleted.
If you attempt to change a service group’s existing system list by using hagrp
-modify without other options (such as -add or -update) the command fails.
To start a service group on a system and bring online only the resources
already online on another system
◆ Type the following command:
If the service group does not have resources online on the other system, the
service group is brought online on the original system and the checkpartial
option is ignored.
Note that the checkpartial option is used by the Preonline trigger during
failover. When a service group that is configured with Preonline =1 fails over
to another system (system 2), the only resources brought online on system 2
are those that were previously online on system 1 prior to failover.
To bring a service group and its associated child service groups online
◆ Type one of the following commands:
■ # hagrp -online -propagate service_group -sys system
Note: See the man pages associated with the hagrp command for more information
about the -propagate option.
To take a service group offline only if all resources are probed on the system
◆ Type the following command:
To take a service group and its associated parent service groups offline
◆ Type one of the following commands:
■ # hagrp -offline -propagate service_group -sys system
Note: See the man pages associated with the hagrp command for more information
about the -propagate option.
A service group can be switched only if it is fully or partially online. The -switch
option is not supported for switching hybrid service groups across system
zones.
Switch parallel global groups across clusters by using the following command:
VCS brings the parallel service group online on all possible nodes in the remote
cluster.
A service group can be migrated only if it is fully online. The -migrate option
is supported only for failover service groups and for resource types that have
the SupportedOperations attribute set to migrate.
See “Resource type attributes” on page 664.
The service group must meet the following requirements regarding configuration:
■ A single mandatory resource that can be migrated, having the
SupportedOperations attribute set to migrate and the Operations attribute set
to OnOff
■ Other optional resources with Operations attribute set to None or OnOnly
The -migrate option is supported for the following configurations:
■ Stand alone service groups
■ Service groups having one or both of the following configurations:
■ Parallel child service groups with online local soft or online local firm
dependencies
■ Parallel or failover parent service group with online global soft or online
remote soft dependencies
The option -persistent enables the freeze to be remembered when the cluster
is rebooted.
To unfreeze a service group (reenable online, offline, and failover operations)
◆ Type the following command:
This attribute enables priority-based failover for a high-priority service group.
During a failover, VCS checks the load requirement for the high-priority service
group and evaluates the best available target that meets the load requirement.
If none of the available systems meet the load requirement, then VCS checks
for any lower priority groups that can be offlined to meet the load requirement.
VCS performs the following checks, before failing over the high-priority service
group:
■ Check the best target system that meets the load requirement of the high-priority
service group.
■ If no system meets the load requirement, create evacuation plan for each target
system to see which system will have the minimum disruption.
■ Evacuation plan consists of a list of low priority service groups that need to be
taken offline to meet the load requirement. Disruption factor is calculated for the
system along with the evacuation plan. Disruption factor is based on the priority
of the service groups that will be brought offline or evacuated. The following
table shows the mapping of service group priority with the disruption factor:
Priority Disruption factor
4 1
3 10
2 100
■ Choose the system with minimum disruption as the target system and execute
the evacuation plan. This initiates the offline of the low priority service
groups. The plan is visible under the Group::EvacList attribute. After all the
groups in the evacuation plan are offline, VCS initiates the online of the high
priority service group.
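The disruption-factor calculation above can be sketched as follows; the evacuation plan here is sample data (priorities of the low-priority groups that would be taken offline), using the mapping from the table (priority 4 → factor 1, 3 → 10, 2 → 100).

```shell
# Sample evacuation plan: two priority-4 groups and one priority-3 group.
plan="4 4 3"
total=0
for p in $plan; do
  # Map each group's priority to its disruption factor.
  case $p in 4) f=1 ;; 3) f=10 ;; 2) f=100 ;; esac
  total=$((total + f))
done
echo "$total"
```

A plan that offlines only low-priority (priority-4) groups thus always disrupts less than one that touches a single priority-2 group.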
To disable priority based failover of your service group:
■ Run the following command on the service group:
Clearing a resource initiates the online process previously blocked while waiting
for the resource to become clear.
■ If system is specified, all faulted, non-persistent resources are cleared from
that system only.
■ If system is not specified, the service group is cleared on all systems in the
group’s SystemList in which at least one non-persistent resource has faulted.
Note: The flush operation does not halt the resource operations (such as online,
offline, migrate, and clean) that are running. If a running operation succeeds after
a flush command was fired, the resource state might change depending on the
operation.
Use the command hagrp -flush to clear the internal state of VCS. The hagrp
-flush command transitions resource state from ‘waiting to go online’ to ‘not waiting’.
You must use the hagrp -flush -force command to transition resource state
from ‘waiting to go offline’ to ‘not waiting’.
To flush a service group on a system
◆ Type the following command:
#!/bin/ksh
PATH=/opt/VRTSvcs/bin:$PATH; export PATH
if [ $# -ne 1 ]; then
    echo "usage: $0 <system name>"
    exit 1
fi
hagrp -list |
while read grp sys junk
do
    locsys="${sys##*:}"
    case "$locsys" in
    "$1")
        hagrp -flush "$grp" -sys "$locsys"
        ;;
    esac
done
Run the saved script:
# haflush systemname
Administering agents
Under normal conditions, VCS agents are started and stopped automatically.
To start an agent
◆ Run the following command:
To stop an agent
◆ Run the following command:
The -force option stops the agent even if the resources for the agent are
online. Use the -force option when you want to upgrade an agent without taking
its resources offline.
Note: The addition of resources on the command line requires several steps, and
the agent must be prevented from managing the resource until the steps are
completed. For resources defined in the configuration file, the steps are completed
before the agent is started.
Adding resources
This topic describes how to add resources to a service group or remove resources
from a service group.
To add a resource
◆ Type the following command:
The resource name must be unique throughout the cluster. The resource type
must be defined in the configuration language. The resource belongs to the
group service_group.
Deleting resources
This topic describes how to delete resources from a service group.
To delete a resource
◆ Type the following command:
VCS does not delete online resources. However, you can enable deletion of
online resources by changing the value of the DeleteOnlineResources attribute.
See “Cluster attributes” on page 722.
To delete a resource forcibly, use the -force option, which takes the resource
offline irrespective of the value of the DeleteOnlineResources attribute.
The agent managing the resource is started on a system when its Enabled
attribute is set to 1 on that system. Specifically, the VCS engine begins to
monitor the resource for faults. Agent monitoring is disabled if the Enabled
attribute is reset to 0.
Note that global attributes cannot be modified with the hares -local command.
Table 5-5 lists the commands to be used to localize attributes depending on their
dimension.
Note: If multiple values are specified and if one is invalid, VCS returns
an error for the invalid value, but continues to process the others. In
the following example, if sysb is part of the attribute SystemList, but
sysa is not, sysb is deleted and an error message is sent to the log
regarding sysa.
# haconf -makerw
3 If required, change the values of the MonitorFreq key and the RegisterRetryLimit
key of the IMF attribute.
See the Cluster Server Bundled Agents Reference Guide for agent-specific
recommendations to set these attributes.
5 Make sure that the AMF kernel driver is configured on all nodes in the cluster.
# /etc/init.d/amf.rc status
Configure the AMF driver if the command output returns that the AMF driver
is not loaded or not configured.
See “Administering the AMF kernel driver” on page 99.
6 Restart the agent. Run the following commands on each node.
# haconf -makerw
2 To disable intelligent resource monitoring for all the resources of a certain type,
run the following command:
Note: VCS provides haimfconfig script to enable or disable the IMF functionality
for agents. You can use the script with VCS in running or stopped state. Use the
script to enable or disable IMF for the IMF-aware bundled agents, enterprise agents,
and custom agents.
See “Enabling and disabling IMF for agents by using script” on page 142.
# haimfconfig -enable
# haimfconfig -disable
This command enables IMF for the specified agents. It also configures and
loads the AMF module on the system if the module is not already loaded. If
the agent is a custom agent, the command prompts you for the Mode and
MonitorFreq values if the Mode value is not configured properly.
Note: The command prompts you whether you want to make the
configuration changes persistent. If you choose No, the command exits. If
you choose Yes, it enables IMF and dumps the configuration by using the
haconf -dump -makero command.
■ If VCS is not running, changes to the Mode value (for all specified agents)
and MonitorFreq value (for all specified custom agents only) need to be
made by modifying the VCS configuration files. Before the command makes
any changes to configuration files, it prompts you for a confirmation. If you
choose Yes, it modifies the VCS configuration files. IMF gets enabled for
the specified agent when VCS starts.
Example
The command enables IMF for the Mount agent and the Application agent.
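The example command line itself is not reproduced in this copy; based on the haimfconfig syntax shown earlier in this section, it would have the following form (a reconstruction, not verified against the original):
# haimfconfig -enable Mount Application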
To disable IMF for a set of agents
◆ Run the following command:
This command disables IMF for specified agents by changing the Mode value
to 0 for each agent and for all resources that had overridden the Mode values.
■ If VCS is running, the command changes the Mode value of the agents and
the overridden Mode values of all resources of these agents to 0.
Note: The command prompts you whether you want to make the
configuration changes persistent. If you choose No, the command exits. If
you choose Yes, it disables IMF and dumps the configuration by using the
haconf -dump -makero command.
■ If VCS is not running, any change to the Mode value needs to be made by
modifying the VCS configuration file. Before it makes any changes to
configuration files, the command prompts you for a confirmation. If you
choose Yes, it sets the Mode value to 0 in the configuration files.
Example
The command disables IMF for the Mount agent and Application agent.
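As in the enable example, the command line itself is missing from this copy; following the same haimfconfig syntax, it would be:
# haimfconfig -disable Mount Application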
To enable AMF on a system
◆ Run the following command:
This command sets the value of AMF_START to 1 in the AMF configuration file.
It also configures and loads the AMF module on the system.
To disable AMF on a system
◆ Run the following command:
This command unconfigures and unloads the AMF module on the system if
AMF is configured and loaded. It also sets the value of AMF_START to 0 in the
AMF configuration file.
Note: This command does not directly unconfigure AMF if any agent is registered
with AMF. The script asks whether you want to forcefully disable AMF for all agents
before it unconfigures AMF.
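The AMF enable and disable command lines are not shown above. Assuming they follow the haimfconfig option style used elsewhere in this section, they would be (hypothetical reconstruction):
# haimfconfig -enable-amf
# haimfconfig -disable-amf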
To view the changes made when the script disables IMF for an agent
◆ Run the following command:
haimfconfig -display
Examples:
If IMF is disabled for the Mount agent (Mode is set to 0) and enabled for the rest
of the installed IMF-aware agents:
haimfconfig -display
#Agent STATUS
Application ENABLED
Mount DISABLED
Process ENABLED
DiskGroup ENABLED
WPAR ENABLED
If IMF is disabled for the Mount agent (when VCS is running, the agent is running
and not registered with the AMF module) and enabled for the rest of the installed
IMF-aware agents:
haimfconfig -display
#Agent STATUS
Application ENABLED
Mount DISABLED
Process ENABLED
DiskGroup ENABLED
WPAR ENABLED
If IMF is disabled for all installed IMF-aware agents (when the AMF module is not
loaded):
haimfconfig -display
#Agent STATUS
Application DISABLED
Mount DISABLED
Process DISABLED
DiskGroup DISABLED
WPAR DISABLED
If IMF is partially enabled for the Mount agent (Mode is set to 3 at the type level
and to 0 at the resource level for some resources) and enabled fully for the rest
of the installed IMF-aware agents:
haimfconfig -display
#Agent STATUS
Application ENABLED
Mount ENABLED|PARTIAL
Process ENABLED
DiskGroup ENABLED
WPAR ENABLED
If IMF is partially enabled for the Mount agent (Mode is set to 0 at the type level
and to 3 at the resource level for some resources) and enabled fully for the rest
of the installed IMF-aware agents:
haimfconfig -display
#Agent STATUS
Application ENABLED
Mount ENABLED|PARTIAL
Process ENABLED
DiskGroup ENABLED
WPAR ENABLED
To link resources
◆ Enter the following command:
To unlink resources
◆ Enter the following command:
In the above example, the parent resource (res1) depends on the child resources
(res2, res3, res4, res5, and res6). The parent resource can be brought online only
when two or more of the child resources come online, and it can remain online
only as long as two or more of the child resources stay online.
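The hares command lines for linking and unlinking are not shown in this copy. A basic dependency between two hypothetical resources (parent res1, child res2) would be created and removed as follows, assuming the standard hares syntax:
# hares -link res1 res2
# hares -unlink res1 res2
The minimum-count ("at least") behavior described above requires additional options to hares -link that are not reproduced here.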
The command stops all parent resources in order before taking the specific
resource offline.
To take a resource offline and propagate the command to its children
◆ Type the following command:
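Assuming the standard hares syntax, the command would have this form (the resource and system names are hypothetical):
# hares -offprop res1 -sys sys1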
Probing a resource
This topic describes how to probe a resource.
Though the command may return immediately, the monitoring process may
not be completed by the time the command returns.
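The probe command itself is not shown in this copy; assuming the standard hares syntax, it would be (names hypothetical):
# hares -probe res1 -sys sys1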
Clearing a resource
This topic describes how to clear a resource.
To clear a resource
◆ Type the following command:
Initiate a state change from RESOURCE_FAULTED to RESOURCE_OFFLINE:
Clearing a resource initiates the online process previously blocked while waiting
for the resource to become clear. If the system is not specified, the fault is cleared
on each system in the service group’s SystemList attribute.
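Assuming the standard hares syntax, a faulted resource would be cleared as follows (names hypothetical):
# hares -clear res1 -sys sys1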
See “Clearing faulted resources in a service group” on page 131.
This command also clears the resource’s parents. Persistent resources whose
static attribute Operations is defined as None cannot be cleared with this
command and must be physically attended to, such as replacing a raw disk.
The agent then updates the status automatically.
You must delete all resources of the type before deleting the resource type.
To add or modify resource types in main.cf without shutting down VCS
◆ Type the following command:
Note: Make sure that the path to the SourceFile exists on all nodes before you
run this command.
type FileOnOff (
static str AgentClass = RT
static str AgentPriority = 10
static str ScriptClass = RT
static str ScriptPriority = 40
static str ArgList[] = { PathName }
str PathName
)
For example, to set the AgentPriority attribute of the FileOnOff resource to 10,
type:
For example, to set the ScriptPriority attribute of the FileOnOff resource to 40, type:
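Assuming the standard hatype -modify syntax, the two examples above would read (a sketch, not verified against the original):
# hatype -modify FileOnOff AgentPriority 10
# hatype -modify FileOnOff ScriptPriority 40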
Administering systems
Administration of systems includes tasks such as modifying system attributes,
freezing or unfreezing systems, and running commands.
To modify a system’s attributes
◆ Type the following command:
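The hasys command line is not shown in this copy; assuming the standard hasys syntax, a system attribute is modified as follows (the attribute and value are illustrative):
# hasys -modify sys1 Capacity 150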
-evacuate Fails over the system’s active service groups to another system
in the cluster before the freeze is enabled.
The utility configures the cluster UUID on the cluster nodes based on
whether a cluster UUID exists on any of the VCS nodes:
■ If no cluster UUID exists or if the cluster UUID is different on the cluster
nodes, then the utility does the following:
■ Generates a new cluster UUID using the /opt/VRTSvcs/bin/osuuid utility.
■ Creates the /etc/vx/.uuids/clusuuid file where the utility stores the
cluster UUID.
■ Configures the cluster UUID on all nodes in the cluster.
■ If a cluster UUID exists and if the UUID is same on all the nodes, then the
utility retains the UUID.
Use the -force option to discard the existing cluster UUID and create a new
cluster UUID.
■ If some nodes in the cluster have a cluster UUID and if the UUID is the same
on those nodes, then the utility configures the existing UUID on the remaining nodes.
The utility copies the cluster UUID from a system that is specified using
the -from_sys option to all the systems that are specified using the -to_sys
option.
had -version
hastart -version
2 Run one of the following commands to retrieve information about the engine
version:
had -v
hastart -v
6 Remove the entries for the node from the /etc/llthosts file on each remaining
node.
7 Change the node count entry in the /etc/gabtab file on each remaining
node.
8 Unconfigure GAB and LLT on the node leaving the cluster.
9 Remove VCS and other filesets from the node.
10 Remove GAB and LLT configuration files from the node.
The following is a list of changes that you can make and information concerning
ports in a VCS environment:
■ Changing VCS's default port.
Add an entry for a VCS service name in the /etc/services file, for example:
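A minimal entry has the following shape; the service name vcs-app and port 3333 are taken from the surrounding text, and the exact name is site-specific:
vcs-app 3333/tcp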
Where 3333 in the example is the port number where you want to run VCS.
When the engine starts, it listens on the port that you configured above (3333)
for the service. You need to make this change to the /etc/services file on all the
nodes of the cluster.
■ You do not need to make changes for agents or HA commands. Agents and
HA commands use locally present UDS sockets to connect to the engine, not
TCP/IP connections.
■ You do not need to make changes for HA commands that you execute to talk
to a remotely running VCS engine (HAD), using the facilities that the VCS_HOST
environment variable provides. You do not need to change these settings
because the HA command queries the /etc/services file and connects to the
appropriate port.
■ For the Java Console GUI, you can specify the port number that you want the
GUI to connect to while logging into the GUI. You have to specify the port number
that is configured in the /etc/services file (for example 3333 above).
To change the default port
1 Stop VCS.
2 Add an entry for the service name vcs-app in /etc/services.
3 Make the same change to the /etc/services file on all the nodes of the
cluster.
4 Restart VCS.
5 Check the port.
Note: For the attributes EngineClass and EnginePriority, changes are effective
immediately. For ProcessClass and ProcessPriority, changes become effective only
for processes started after the haclus command is executed.
cluster vcs-india (
EngineClass = "RT"
EnginePriority = "20"
ProcessClass = "TS"
ProcessPriority = "40"
)
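For example, the values shown above could be applied with the haclus command (a sketch; make the configuration writable with haconf -makerw first):
# haclus -modify EngineClass "RT"
# haclus -modify EnginePriority "20"
# haclus -modify ProcessClass "TS"
# haclus -modify ProcessPriority "40"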
# /opt/VRTS/install/installer -security
To enable secure mode with FIPS, start the installer program with the
-security -fips option.
If you already have secure mode enabled and need to move to secure mode
with FIPS, complete the steps in the following procedure.
See “Migrating from secure mode to secure mode with FIPS” on page 163.
The installer displays the directory where the logs are created.
3 Review the output as the installer verifies whether VCS configuration files exist.
The installer also verifies that VCS is running on all systems in the cluster.
4 The installer checks whether the cluster is in secure mode or non-secure mode.
If the cluster is in non-secure mode, the installer prompts whether you want to
enable secure mode.
5 Review the output as the installer modifies the VCS configuration files to enable
secure mode in the cluster, and restarts VCS.
To disable secure mode in a VCS cluster
1 Start the installer program with the -security option.
# /opt/VRTS/install/installer -security
The installer displays the directory where the logs are created.
2 Review the output as the installer proceeds with a verification.
3 The installer checks whether the cluster is in secure mode or non-secure mode.
If the cluster is in secure mode, the installer prompts whether you want to
disable secure mode.
4 Review the output as the installer modifies the VCS configuration files to disable
secure mode in the cluster, and restarts VCS.
# installvcs -security
■ /var/VRTSat
■ /var/VRTSat_lhc
Use the -sys option when the scope of the attribute is local.
See the man pages associated with these commands for more information.
The command runs the infrastructure check and verifies whether the system
<sysname> has the required infrastructure to host the resource <resname>,
should a failover require the resource to come online on the system. For the
variable <sysname>, specify the name of a system on which the resource is
offline. The variable <vfdaction> specifies the Action defined for the agent. The
"HA fire drill checks" for a resource type are defined in the SupportedActions
attribute for that resource type and can be identified by the .vfd suffix.
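For example, assuming a SupportedActions entry named mountpoint.vfd (the resource, action, and system names here are hypothetical):
# hares -action ora_mnt mountpoint.vfd -sys sys2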
The command runs the infrastructure check and verifies whether the system
<sysname> has the required infrastructure to host resources in the service
group <grpname> should a failover require the service group to come online
on the system. For the variable <sysname>, specify the name of a system on
which the group is offline.
To fix detected errors
◆ Type the following command.
The variable <vfdaction> represents the check that reported errors for the
system <sysname>. The "HA fire drill checks" for a resource type are defined
in the SupportedActions attribute for that resource type and can be identified
by the .vfd suffix.
Agent Description
DiskGroup Brings Veritas Volume Manager (VxVM) disk groups online and offline,
monitors them, and makes them highly available. The DiskGroup agent
supports both IMF-based monitoring and traditional poll-based
monitoring.
DiskGroupSnap Brings resources online and offline and monitors disk groups used for
fire drill testing. The DiskGroupSnap agent enables you to verify the
configuration integrity and data integrity in a Campus Cluster
environment with VxVM stretch mirroring. The service group that
contains the DiskGroupSnap agent resource has an offline local
dependency on the application’s service group. This is to ensure that
the fire drill service group and the application service group are not
online at the same site.
VolumeSet Brings Veritas Volume Manager (VxVM) volume sets online and offline,
and monitors them. Use the VolumeSet agent to make a volume set
highly available. VolumeSet resources depend on DiskGroup resources.
Mount Brings resources online and offline, monitors file system or NFS client
mount points, and makes them highly available. The Mount agent
supports both IMF-based monitoring and traditional poll-based
monitoring.
The Mount agent can be used with the DiskGroup, LVMVG, and Volume
agents to provide storage to an application.
See “About resource monitoring” on page 37.
See the Cluster Server Bundled Agents Reference Guide for a detailed description
of the following agents.
Table 6-2 shows the Network agents and their description.
Agent Description
MultiNICB Works with the IPMultiNICB agent. The MultiNICB agent allows IP
addresses to fail over to multiple NICs on the same system before VCS
tries to fail over to another system. You can use the agent to make IP
addresses on multiple-adapter systems highly available or to monitor
them. No dependencies exist for the MultiNICB resource.
IPMultiNICB Works with the MultiNICB agent. The IPMultiNICB agent configures
and manages virtual IP addresses (IP aliases) on an active network
device that the MultiNICB resource specifies. When the MultiNICB agent
reports a particular interface as failed, the IPMultiNICB agent moves
the IP address to the next active interface. IPMultiNICB resources
depend on MultiNICB resources.
Agent Description
DNS Updates and monitors the mapping of host names to IP addresses and
canonical names (CNAME). The DNS agent performs these tasks for
a DNS zone when it fails over nodes across subnets (a wide-area
failover). Use the DNS agent when the failover source and target nodes
are on different subnets. The DNS agent updates the name server and
allows clients to connect to the failed over instance of the application
service.
Agent Description
NFS Manages NFS daemons which process requests from NFS clients. The
NFS Agent manages the rpc.nfsd/nfsd daemon and the rpc.mountd
daemon on the NFS server. If NFSv4 support is enabled, it also
manages the rpc.idmapd/nfsmapid daemon.
Share Shares, unshares, and monitors a single local resource for exporting
an NFS file system that is mounted by remote systems. Share resources
depend on NFS. In an NFS service group, the IP family of resources
depends on Share resources.
SambaServer Starts, stops, and monitors the smbd process as a daemon. You can
use the SambaServer agent to make an smbd daemon highly available
or to monitor it. The smbd daemon provides Samba share services.
The SambaServer agent, with SambaShare and NetBIOS agents, allows
a system running a UNIX or UNIX-like operating system to provide
services using the Microsoft network protocol. It has no dependent
resource.
Agent Description
SambaShare Adds, removes, and monitors a share by modifying the specified Samba
configuration file. You can use the SambaShare agent to make a Samba
Share highly available or to monitor it. SambaShare resources depend
on SambaServer, NetBios, and Mount resources.
NetBIOS Starts, stops, and monitors the nmbd daemon. You can use the NetBIOS
agent to make the nmbd daemon highly available or to monitor it. The
nmbd process broadcasts the NetBIOS name, or the name by which
the Samba server is known in the network. The NetBios resource
depends on the IP or the IPMultiNIC resource.
Agent Description
Apache Brings an Apache Server online, takes it offline, and monitors its
processes. Use the Apache Web server agent with other agents to
make an Apache Web server highly available. This type of resource
depends on IP and Mount resources. The Apache agent can detect
when an Apache Web server is brought down gracefully by an
administrator. When Apache is brought down gracefully, the agent does
not trigger a resource fault even though Apache is down.
Table 6-4 Services and Application agents and their description (continued)
Agent Description
Application Brings applications online, takes them offline, and monitors their status.
Use the Application agent to specify different executables for the online,
offline, and monitor routines for different programs. The executables
must exist locally on each node. You can use the Application agent to
provide high availability for applications that do not have bundled agents,
enterprise agents, or custom agents. This type of resource can depend
on IP, IPMultiNIC, and Mount resources. The Application agent supports
both IMF-based monitoring and traditional poll-based monitoring.
Process Starts, stops, and monitors a process that you specify. Use the Process
agent to make a process highly available. This type of resource can
depend on IP, IPMultiNIC, and Mount resources. The Process agent
supports both IMF-based monitoring and traditional poll-based
monitoring.
ProcessOnOnly Starts and monitors a process that you specify. Use the agent to make
a process highly available. No child dependencies exist for this resource.
WPAR Brings workload partitions online, takes them offline, and monitors their
status. The WPAR agent supports both IMF-based monitoring and
traditional poll-based monitoring.
MemCPUAllocator Allocates CPU and memory for IBM AIX dedicated partitions.
LPAR Brings logical partitions (LPARs) online, takes them offline, and monitors
their status. The LPAR agent controls the LPAR by contacting the
Hardware Management Console (HMC). Communication between HMC
and the LPAR running the LPAR agent is through passwordless ssh.
Table 6-5 VCS infrastructure and support agents and their description
Agent Description
NotifierMngr Starts, stops, and monitors a notifier process, making it highly available.
The notifier process manages the reception of messages from VCS
and the delivery of those messages to SNMP consoles and SMTP
servers. The NotifierMngr resource can depend on the NIC resource.
Phantom Enables VCS to determine the state of parallel service groups that do
not include OnOff resources. No dependencies exist for the Phantom
resource.
Agent Description
ElifNone Monitors a file and checks for the file’s absence. You can use the
ElifNone agent to test service group behavior. No dependencies exist
for the ElifNone resource.
Agent Description
FileNone Monitors a file and checks for the file’s existence. You can use the
FileNone agent to test service group behavior. No dependencies exist
for the FileNone resource.
FileOnOff Creates, removes, and monitors files. You can use the FileOnOff agent
to test service group behavior. No dependencies exist for the FileOnOff
resource.
FileOnOnly Creates and monitors files but does not remove files. You can use the
FileOnOnly agent to test service group behavior. No dependencies
exist for the FileOnOnly resource.
About NFS
Network File System (NFS) allows network users to access shared files stored on
an NFS server. NFS lets users manipulate shared files transparently as if the files
were on a local disk.
NFS terminology
Key terms used in NFS operations include:
NFS Server The computer that makes the local file system accessible to users
on the network.
NFS Client The computer which accesses the file system that is made available
by the NFS server.
rpc.mountd A daemon that runs on NFS servers. It handles initial requests from
NFS clients. NFS clients use the mount command to make requests.
rpc.lockd/lockd A daemon that runs on the NFS server and the NFS client.
On the server side, it receives lock requests from the NFS client and
passes the requests to the kernel-based nfsd.
On the client side, it forwards the NFS lock requests from users to
the rpc.lockd/lockd on the NFS server.
rpc.idmapd/nfsmapid A userland daemon that maps the NFSv4 username and group to
the local username and group of the system. This daemon is specific
to NFSv4.
Use this configuration to export all the local directories from a single virtual IP
address. In this configuration, the NFS resource is part of a failover service
group and there is only one NFS related service group in the entire clustered
environment. This configuration supports lock recovery and also handles potential
NFS ACK storms. This configuration also supports NFSv4. See “Configuring
for a single NFS environment” on page 176.
■ Configuring for a multiple NFS environment
Use this configuration to export the NFS shares from multiple virtual IP
addresses. You need to create different NFS share service groups, where each
service group has one virtual IP address. Note that NFS is now a part of a
different parallel service group. This configuration supports lock recovery and
also prevents potential NFS ACK storms. This configuration also supports NFSv4.
See “Configuring for a multiple NFS environment” on page 178.
■ Configuring for multiple NFS environment with separate storage
Use this configuration to put all the storage resources into a separate service
group. The storage resources such as Mount and DiskGroup are part of a different
service group. In this configuration, the NFS share service group depends on
the storage service group. SFCFSHA uses this configuration where the service
group containing the storage resources is a parallel service group. See
“Configuring NFS with separate storage” on page 180.
■ Configuring NFS services in a parallel service group
Use this configuration when you want only the NFS service to run. If you want
any of the functionality provided by the NFSRestart agent, do not use this
configuration. This configuration has some disadvantages because it does not
support NFS lock recovery and it does not prevent potential NFS ACK storms.
Veritas does not recommend this configuration. See “Configuring all NFS services
in a parallel service group” on page 181.
Note: You must set NFSLockFailover to 1 for the NFSRestart resource if you intend
to use NFSv4.
2 If you configure the backing store for the NFS exports using VxVM, create
DiskGroup and Mount resources for the mount point that you want to export.
If you configure the backing store for the NFS exports using LVM, configure
the LVMVG resource and Mount resource for the mount point that you want
to export.
Refer to the Storage agents chapter in the Cluster Server Bundled Agents
Reference Guide for details.
3 Create an NFSRestart resource. Set the Lower attribute of this NFSRestart
resource to 1. Ensure that NFSRes attribute points to the NFS resource that
is on the system.
For NFS lock recovery, make sure that the NFSLockFailover attribute and the
LocksPathName attribute have appropriate values. The NFSRestart resource
depends on the Mount and NFS resources that you have configured for this
service group.
Note: The NFSRestart resource gets rid of preonline and postoffline triggers
for NFS.
4 Create a Share resource. Set the PathName to the mount point that you want
to export. In case of multiple shares, create multiple Share resources with
different values for their PathName attributes. All the Share resources
configured in the service group should have dependency on the NFSRestart
resource with a value of 1 for the Lower attribute.
5 Create an IP resource. The value of the Address attribute for this IP resource
is used to mount the NFS exports on the client systems. Make the IP resource
depend on the Share resources that are configured in the service group.
6 Create a DNS resource if you want NFS lock recovery. The DNS resource
depends on the IP resource. Refer to the sample configuration on how to
configure the DNS resource.
7 Create an NFSRestart resource. Set the NFSRes attribute to the NFS resource
(nfs) that is configured on the system. Set the Lower attribute of this NFSRestart
resource to 0. Make the NFSRestart resource depend on the IP resource or
the DNS resource (if you want to use NFS lock recovery.)
Note: Ensure that all attributes except the Lower attribute are identical for the two
NFSRestart resources.
Note: You must set NFSLockFailover to 1 for the NFSRestart resource if you intend
to use NFSv4.
5 You must create a Phantom resource in this service group to display the correct
state of the service group.
Creating the NFS exports service group for a multiple NFS environment
This service group contains the Share and IP resources for exports. The value for
the PathName attribute for the Share resource must be on shared storage and it
must be visible to all nodes in the cluster.
To create the NFS exports service group
1 Create an NFS Proxy resource inside the service group. This Proxy resource
points to the actual NFS resource that is configured on the system.
2 If you configure the backing store for the NFS exports with VxVM, create
DiskGroup and Mount resources for the mount point that you want to export.
If the backing store for the NFS exports is configured using LVM, configure the
LVMVG resource and Mount resources for the mount points that you want to
export.
Refer to the Storage agents chapter in the Cluster Server Bundled Agents
Reference Guide for details.
3 Create an NFSRestart resource. Set the Lower attribute of this NFSRestart
resource to 1. Ensure that NFSRes attribute points to the NFS resource
configured on the system.
For NFS lock recovery, make sure that the NFSLockFailover attribute and the
LocksPathName attribute have appropriate values. The NFSRestart resource
depends on the Mount resources that you have configured for this service
group. The NFSRestart resource gets rid of preonline and postoffline triggers
for NFS.
4 Create a Share resource. Set the PathName attribute to the mount point that
you want to export. In case of multiple shares, create multiple Share resources
with different values for their PathName attributes. All the Share resources that
are configured in the service group need to have dependency on the
NFSRestart resource that has a value of 1 for its Lower attribute.
5 Create an IP resource. The value of the Address attribute for this IP resource
is used to mount the NFS exports on the client systems. Make the IP resource
depend on the Share resources that are configured in the service group.
6 Create a DNS resource if you want NFS lock recovery. The DNS resource
depends on the IP resource. Refer to the sample configuration on how to
configure the DNS resource.
7 Create an NFSRestart resource. Set the NFSRes attribute to the NFS resource
(nfs) that is configured on the system. Set the value of the Lower attribute for
this NFSRestart resource to 0. Make the NFSRestart resource depend on the
IP resource or the DNS resource to use NFS lock recovery.
Note: Ensure that all attributes except the Lower attribute are identical for the two
NFSRestart resources.
Note: You must set NFSLockFailover to 1 for the NFSRestart resource if you intend
to use NFSv4.
Sample configurations
The following are the sample configurations for some of the supported NFS
configurations.
■ See “Sample configuration for a single NFS environment without lock recovery”
on page 183.
■ See “Sample configuration for a single NFS environment with lock recovery”
on page 185.
■ See “Sample configuration for a single NFSv4 environment” on page 188.
■ See “Sample configuration for a multiple NFSv4 environment” on page 190.
■ See “Sample configuration for a multiple NFS environment without lock recovery”
on page 193.
■ See “Sample configuration for a multiple NFS environment with lock recovery”
on page 195.
■ See “Sample configuration for configuring NFS with separate storage”
on page 199.
■ See “Sample configuration when configuring all NFS services in a parallel service
group” on page 201.
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
IP ip_sys1 (
Device @sys1 = en0
Device @sys2 = en0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = nfs
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = nfs
)
NIC nic_sg11_en0 (
Device @sys1 = en0
Device @sys2 = en0
NetworkHosts = { "10.198.88.1" }
)
NFS nfs (
Nservers = 16
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
NFSRestart_sg11_L requires nfs
NFSRestart_sg11_L requires vcs_dg1_r01_2
NFSRestart_sg11_L requires vcs_dg1_r0_1
NFSRestart_sg11_U requires ip_sys1
ip_sys1 requires nic_sg11_en0
ip_sys1 requires share_dg1_r01_2
ip_sys1 requires share_dg1_r0_1
share_dg1_r01_2 requires NFSRestart_sg11_L
share_dg1_r0_1 requires NFSRestart_sg11_L
vcs_dg1_r01_2 requires vol_dg1_r01_2
vcs_dg1_r0_1 requires vol_dg1_r0_1
vol_dg1_r01_2 requires vcs_dg1
vol_dg1_r0_1 requires vcs_dg1
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
IP ip_sys1 (
Device @sys1 = en0
Device @sys2 = en0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
DNS dns_11 (
Domain = "oradb.sym"
TSIGKeyFile = "/Koradb.sym.+157+13021.private"
StealthMasters = { "10.198.90.202" }
ResRecord @sys1 = { sys1 = "10.198.90.198" }
ResRecord @sys2 = { sys2 = "10.198.90.198" }
CreatePTR = 1
OffDelRR = 1
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFS nfs (
)
NFSRestart NFSRestart_sg11_L (
NFSRes = nfs
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = nfs
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NIC nic_sg11_en0 (
Device @sys1 = en0
Device @sys2 = en0
NetworkHosts = { "10.198.88.1" }
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
NFS nfs (
NFSv4Root = "/"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = nfs
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = nfs
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NIC nic_sg11_en0 (
Device @sys1 = en0
Device @sys2 = en0
NetworkHosts = { "10.198.88.1" }
NetworkType = ether
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
NFSRestart_sg11_L requires nfs
NFSRestart_sg11_L requires vcs_dg1_r01_2
NFSRestart_sg11_L requires vcs_dg1_r0_1
NFSRestart_sg11_U requires dns_11
dns_11 requires ip_sys1
ip_sys1 requires nic_sg11_en0
ip_sys1 requires share_dg1_r01_2
ip_sys1 requires share_dg1_r0_1
share_dg1_r01_2 requires NFSRestart_sg11_L
share_dg1_r0_1 requires NFSRestart_sg11_L
vcs_dg1_r01_2 requires vol_dg1_r01_2
vcs_dg1_r0_1 requires vol_dg1_r0_1
vol_dg1_r01_2 requires vcs_dg1
vol_dg1_r0_1 requires vcs_dg1
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nservers = 6
NFSv4Root = "/"
)
Phantom ph1 (
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
DNS dns_11 (
Domain = "oradb.sym"
TSIGKeyFile = "/Koradb.sym.+157+13021.private"
StealthMasters = { "10.198.90.202" }
ResRecord @sys1 = { sys1 = "10.198.90.198" }
ResRecord @sys2 = { sys2 = "10.198.90.198" }
CreatePTR = 1
OffDelRR = 1
)
IP ip_sys2 (
Device @sys1 = en0
Device @sys2 = en0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = n1
Lower = 1
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = n1
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NIC nic_sg11_en0 (
Device @sys1 = en0
Device @sys2 = en0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
requires group nfs_sg online local firm
NFSRestart_sg11_L requires p11
NFSRestart_sg11_L requires vcs_dg1_r01_2
NFSRestart_sg11_L requires vcs_dg1_r0_1
NFSRestart_sg11_U requires dns_11
dns_11 requires ip_sys1
ip_sys1 requires nic_sg11_en0
ip_sys1 requires share_dg1_r01_2
include "types.cf"
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nservers = 6
)
Phantom ph1 (
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
IP ip_sys1 (
Device @sys1 = en0
Device @sys2 = en0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = n1
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = n1
)
NIC nic_sg11_en0 (
Device @sys1 = en0
Device @sys2 = en0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
include "types.cf"
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nservers = 6
)
Phantom ph1 (
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
DNS dns_11 (
Domain = "oradb.sym"
TSIGKeyFile = "/Koradb.sym.+157+13021.private"
StealthMasters = { "10.198.90.202" }
ResRecord @sys1 = { sys1 = "10.198.90.198" }
ResRecord @sys2 = { sys2 = "10.198.90.198" }
CreatePTR = 1
OffDelRR = 1
)
IP ip_sys1 (
Device @sys1 = en0
Device @sys2 = en0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = n1
Lower = 1
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = n1
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NIC nic_sg11_en0 (
Device @sys1 = en0
Device @sys2 = en0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nservers = 6
)
Phantom ph1 (
)
group sg11storage (
SystemList = { sys1 = 0, sys2 = 1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
IP ip_sys1 (
Device @sys1 = en0
Device @sys2 = en0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = n1
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = n1
)
NIC nic_sg11_en0 (
Device @sys1 = en0
Device @sys2 = en0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nservers = 6
)
NFSRestart nfsrestart (
NFSRes = n1
Lower = 2
)
nfsrestart requires n1
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
IP ip_sys1 (
Device @sys1 = en0
Device @sys2 = en0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
NIC nic_sg11_en0 (
Device @sys1 = en0
Device @sys2 = en0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
group sg11storage (
SystemList = { sys1 = 0, sys2 = 1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
About configuring the RemoteGroup agent
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
For information about Virtual Business Services, see the Virtual Business
Service–Availability User's Guide.
In case of one-to-one mapping, set the value of the AutoFailOver attribute of the
remote service group to 0. This avoids unnecessary onlining or offlining of the
remote service group.
Note: When you set the value of ControlMode to OnlineOnly or to MonitorOnly, the
recommended value of the VCSSysName attribute of the RemoteGroup resource is
ANY. If you want one-to-one mapping between the local nodes and the remote
nodes, then a switch or failover of the local service group is not possible. Note
that in both these configurations the RemoteGroup agent does not take the
remote service group offline.
These values are not mutually exclusive and can be used in combination with one
another. You must set the IntentionalOffline attribute of the RemoteGroup resource
to 1 for the ReturnIntOffline attribute to work.
Note: If the remote cluster runs in secure mode, you must set the value for the
DomainType or BrokerIp attribute.
■ The RemoteGroup resource continues to monitor the remote service group even
when the resource is offline.
■ The RemoteGroup resource does not take the remote service group offline if
the resource is online anywhere in the cluster.
■ After an agent restarts, the RemoteGroup resource does not return offline if the
resource is online on another cluster node.
■ The RemoteGroup resource takes the remote service group offline if it is the
only instance of the RemoteGroup resource online in the cluster.
■ An attempt to bring a RemoteGroup resource online has no effect if the same
resource instance is online on another node in the cluster.
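As an illustration of how these attributes fit together, a RemoteGroup resource definition in main.cf might look like the following sketch. The address, group name, and credentials are hypothetical, and the Password value must be generated with the vcsencrypt utility rather than entered in clear text:

```
RemoteGroup rg1 (
    IpAddress = "10.10.10.10"
    Username = vcsuser
    Password = <encrypted password>
    GroupName = appsg
    VCSSysName = ANY
    ControlMode = OnlineOnly
    IntentionalOffline = 1
)
```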
include "types.cf"
cluster clus1 (
)
system sys1 (
)
system sys2 (
)
group smbserver (
SystemList = { sys1 = 0, sys2 = 1 }
)
IP ip (
Device = en0
Address = "10.209.114.201"
NetMask = "255.255.252.0"
)
NIC nic (
Device = en0
NetworkHosts = { "10.209.74.43" }
)
NetBios nmb (
SambaServerRes = smb
NetBiosName = smb_vcs
Interfaces = { "10.209.114.201" }
)
SambaServer smb (
ConfFile = "/etc/samba/smb.conf"
LockDir = "/var/run"
SambaTopDir = "/usr"
)
SambaShare smb_share (
SambaServerRes = smb
ShareName = share1
ShareOptions = "path = /samba_share/; public = yes;
writable = yes"
)
ip requires nic
nmb requires smb
smb requires ip
smb_share requires nmb
See the Cluster Server Bundled Agents Reference Guide for more information on
the agent.
See the Cluster Server Installation Guide for instructions to configure the agent.
[Figure: Agent framework architecture — agent-specific code plugs into the agent framework, which, along with the agent's command-line utilities and GUI, exchanges status and control messages with HAD]
The agent uses the agent framework, which is compiled into the agent itself. For
each resource type configured in a cluster, an agent runs on each cluster system.
The agent handles all resources of that type. The engine passes commands to the
agent and the agent returns the status of command execution. For example, an
agent is commanded to bring a resource online. The agent responds back with the
success (or failure) of the operation. Once the resource is online, the agent
communicates with the engine only if this status changes.
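For example, on a live cluster you can query the run-time state of a type's agent with the haagent command; the Mount type here is only an illustration:

```
# Display agent run-time information for the Mount resource type
haagent -display Mount

# Agent activity is logged under /var/VRTSvcs/log,
# for example Mount_A.log for the Mount agent
```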
The VCS high availability daemon (HAD) operates as a replicated state machine, which means all systems in the
cluster have a synchronized state of the cluster configuration. This is accomplished
by the following:
■ All systems run an identical version of HAD.
■ HAD on each system maintains the state of its own resources, and sends all
cluster information about the local system to all other machines in the cluster.
■ HAD on each system receives information from the other cluster systems to
update its own view of the cluster.
■ Each system follows the same code path for actions on the cluster.
The replicated state machine communicates over a purpose-built communications
package consisting of two components, Group Membership Services/Atomic
Broadcast (GAB) and Low Latency Transport (LLT).
See “About Group Membership Services/Atomic Broadcast (GAB)” on page 216.
See “About Low Latency Transport (LLT)” on page 217.
Figure 7-3 illustrates the overall communications paths between two systems of
the replicated state machine model.
GAB marks the peer as DOWN and excludes it from the cluster. In most
configurations, membership arbitration is used to prevent network partitions.
■ Cluster communications
GAB’s second function is reliable cluster communications. GAB provides ordered
guaranteed delivery of messages to all cluster systems. The Atomic Broadcast
functionality is used by HAD to ensure that all systems within the cluster receive
all configuration change messages, or are rolled back to the previous state,
much like a database atomic commit. While the communications function in
GAB is known as Atomic Broadcast, no actual network broadcast traffic is
generated. An Atomic Broadcast message is a series of point to point unicast
messages from the sending system to each receiving system, with a
corresponding acknowledgement from each receiving system.
■ GAB receives the status of heartbeat from all cluster systems from LLT and
makes membership determination based on this information.
Figure 7-4 shows heartbeat in the cluster.
LLT can be configured to designate specific cluster interconnect links as either high
priority or low priority. High priority links are used for cluster communications to
GAB as well as heartbeat signals. Low priority links, during normal operation, are
used for heartbeat and link state maintenance only, and the frequency of heartbeats
is reduced to 50% of normal to reduce network overhead.
If there is a failure of all configured high priority links, LLT will switch all cluster
communications traffic to the first available low priority link. Communication traffic
will revert back to the high priority links as soon as they become available.
While not required, best practice is to configure at least one low-priority link, and
to configure two high-priority links on dedicated cluster interconnects to provide
redundancy in the communications path. Low-priority links are typically configured
on the public or administrative network.
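Such a layout can be sketched in /etc/llttab as follows; the node name, cluster ID, and AIX DLPI device paths are assumptions for illustration:

```
set-node sys1
set-cluster 1
# Two high-priority links on dedicated cluster interconnects
link en1 /dev/dlpi/en:1 - ether - -
link en2 /dev/dlpi/en:2 - ether - -
# One low-priority link on the public network
link-lowpri en0 /dev/dlpi/en:0 - ether - -
```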
If the private NICs run at different media speeds, Veritas recommends that you
configure the lower-speed NICs as low-priority links to enhance LLT performance.
With this setting, LLT does active-passive load balancing across the private links.
At the time of configuration and failover, LLT automatically chooses a high-priority
link as the active link and uses the low-priority links only when a high-priority link
fails.
LLT sends packets on all the configured links in a weighted round-robin manner.
LLT uses the linkburst parameter, which represents the number of back-to-back
packets that LLT sends on a link before the next link is chosen. In addition to the
default weighted round-robin load balancing, LLT also provides destination-based
load balancing, where the LLT link is chosen based on the destination node ID and
the port. With destination-based load balancing, LLT sends all the packets for a
particular destination on a single link.
However, a potential problem with the destination-based load balancing approach
is that LLT may not fully utilize the available links if the ports have dissimilar traffic.
Veritas recommends destination-based load balancing when the setup has more
than two cluster nodes and more active LLT ports. You must manually configure
destination-based load balancing for your cluster to set up the port-to-LLT-link
mapping.
See “Configuring destination-based load balancing for LLT” on page 99.
LLT on startup sends broadcast packets with LLT node ID and cluster ID information
onto the LAN to discover any node in the network that has the same node ID and
cluster ID pair. Each node in the network replies to this broadcast message with
its cluster ID, node ID, and node name.
LLT on the original node does not start and reports an appropriate error in the
following cases:
■ LLT on any other node in the same network is running with the same node ID
and cluster ID pair that it owns.
■ LLT on the original node receives a response from a node that does not have
a node name entry in the /etc/llthosts file.
/sbin/gabconfig -c -nN
where N is replaced with the number of systems in the cluster.
Note: Veritas recommends that you replace N with the exact number of nodes
in the cluster.
■ When GAB on each system detects that the correct number of systems are
running, based on the number declared in /etc/gabtab and input from LLT, it
will seed.
■ If you have I/O fencing enabled in your cluster and if you have set the GAB
auto-seeding feature through I/O fencing, GAB automatically seeds the cluster
even when some cluster nodes are unavailable.
See “Seeding a cluster using the GAB auto-seed parameter through I/O fencing”
on page 220.
■ HAD will start on each seeded system. HAD will only run on a system that has
seeded.
HAD can provide the HA functionality only when GAB has seeded.
See “Manual seeding of a cluster” on page 221.
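For example, for a four-node cluster /etc/gabtab would contain the seeding command below, and gabconfig -a can be run afterward to verify that the port memberships have formed; the node count is illustrative:

```
# /etc/gabtab for a four-node cluster
/sbin/gabconfig -c -n4

# Verify seeding and port membership after startup
gabconfig -a
```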
However, if you have enabled I/O fencing in the cluster, then I/O fencing can handle
any preexisting split-brain in the cluster. You can configure I/O fencing in such a
way for GAB to automatically seed the cluster. The behavior is as follows:
■ Even when some nodes in the cluster are not up, GAB port (port a) still comes
up on all the member nodes in the cluster.
■ If the coordination points do not have keys from any non-member nodes, I/O
fencing (GAB port b) also comes up.
This feature is disabled by default. You must configure the autoseed_gab_timeout
parameter in the /etc/vxfenmode file to enable the automatic seeding feature of
GAB.
See “About I/O fencing configuration files” on page 244.
To enable GAB auto-seeding parameter of I/O fencing
◆ Set the value of the autoseed_gab_timeout parameter in the /etc/vxfenmode
file to 0 to turn on the feature.
To delay the GAB auto-seed feature, you can also set a value greater than
zero. GAB uses this value to delay auto-seed of the cluster for the given number
of seconds.
To disable GAB auto-seeding parameter of I/O fencing
◆ Set the value of the autoseed_gab_timeout parameter in the /etc/vxfenmode
file to -1 to turn off the feature.
You can also remove the line from the /etc/vxfenmode file.
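For example, the following /etc/vxfenmode entry delays GAB auto-seeding by 60 seconds; 0 enables the feature with no delay and -1 disables it. The 60-second value is illustrative:

```
# /etc/vxfenmode
autoseed_gab_timeout=60
```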
Note: If you have I/O fencing enabled in your cluster, you can set the GAB
auto-seeding feature through I/O fencing so that GAB automatically seeds the
cluster even when some cluster nodes are unavailable.
See “Seeding a cluster using the GAB auto-seed parameter through I/O fencing”
on page 220.
Before manually seeding the cluster, do the following:
■ Check that systems that will join the cluster are able to send and receive
heartbeats to each other.
■ Confirm there is no possibility of a network partition condition in the cluster.
To manually seed the cluster, type the following command:
/sbin/gabconfig -x
Note there is no declaration of the number of systems in the cluster with a manual
seed. This command will seed all systems in communication with the system where
the command is run.
Make sure that you do not run this command on more than one node in the cluster.
See “Seeding and I/O fencing” on page 594.
■ When LLT informs GAB of a heartbeat loss, the systems that are remaining in
the cluster coordinate to agree which systems are still actively participating in
the cluster and which are not. This happens during a time period known as GAB
Stable Timeout (5 seconds).
VCS has specific error handling that takes effect in the case where the systems
do not agree.
■ GAB marks the system as DOWN, excludes the system from the cluster
membership, and delivers the membership change to the fencing module.
■ The fencing module performs membership arbitration to ensure that there is not
a split brain situation and only one functional cohesive cluster continues to run.
The fencing module is turned on by default.
Review the details on actions that occur if the fencing module has been deactivated:
See “About cluster membership and data protection without I/O fencing” on page 255.
Note: Typically, a fencing configuration for a cluster must have three coordination
points. Veritas Technologies also supports server-based fencing with a single CP
server as its only coordination point with a caveat that this CP server becomes a
single point of failure.
Note: The dmp disk policy for I/O fencing supports both single and multiple
hardware paths from a node to the coordinator disks. If some coordinator disks
have multiple hardware paths and some have a single hardware path, then only
the dmp disk policy is supported. For new installations, Veritas Technologies
supports only the dmp disk policy for I/O fencing, even for a single hardware path.
Note: With the CP server, the fencing arbitration logic still remains on the VCS
cluster.
■ During the I/O fencing race, if the RACER node panics or if it cannot reach the
coordination points, then the VxFEN RACER node re-election feature allows an
alternate node in the subcluster that has the next lowest node ID to take over
as the RACER node.
The racer re-election works as follows:
■ In the event of an unexpected panic of the RACER node, the VxFEN driver
initiates a racer re-election.
■ If the RACER node is unable to reach a majority of coordination points, then
the VxFEN module sends a RELAY_RACE message to the other nodes in
the subcluster. The VxFEN module then re-elects the next lowest node ID
as the new RACER.
■ With successive re-elections, if no more nodes are available to be re-elected
as the RACER node, then all the nodes in the subcluster will panic.
■ The race consists of executing a preempt and abort command for each key of
each system that appears to no longer be in the GAB membership.
The preempt and abort command allows only a registered system with a valid
key to eject the key of another system. This ensures that even when multiple
systems attempt to eject each other, each race will have only one winner. The first
system to issue a preempt and abort command will win and eject the key of the
other system. When the second system issues a preempt and abort command,
it cannot perform the key eject because it is no longer a registered system with
a valid key.
If the value of the cluster-level attribute PreferredFencingPolicy is System,
Group, or Site then at the time of a race, the VxFEN Racer node adds up the
weights for all nodes in the local subcluster and in the leaving subcluster. If the
leaving partition has a higher sum (of node weights) then the racer for this
partition will delay the race for the coordination point. This effectively gives a
preference to the more critical subcluster to win the race. If the value of the
cluster-level attribute PreferredFencingPolicy is Disabled, then the delay is
calculated based on the sums of node counts.
See “About preferred fencing” on page 225.
■ If the preempt and abort command returns success, that system has won the
race for that coordinator disk.
Each system will repeat this race to all the coordinator disks. The race is won
by, and control is attained by, the system that ejects the other system’s
registration keys from a majority of the coordinator disks.
■ On the system that wins the race, the vxfen module informs all the systems that
it was racing on behalf of that it won the race, and that subcluster is still valid.
■ On the system(s) that do not win the race, the vxfen module will trigger a system
panic. The other systems in this subcluster will note the panic, determine they
lost control of the coordinator disks, and also panic and restart.
■ Upon restart, the systems will attempt to seed into the cluster.
■ If the systems that restart can exchange heartbeat with the number of cluster
systems declared in /etc/gabtab, they will automatically seed and continue
to join the cluster. Their keys will be replaced on the coordinator disks. This
case will only happen if the original reason for the membership change has
cleared during the restart.
■ If the systems that restart cannot exchange heartbeat with the number of
cluster systems declared in /etc/gabtab, they will not automatically seed, and
HAD will not start. This is a possible split brain condition, and requires
administrative intervention.
■ If you have I/O fencing enabled in your cluster and if you have set the GAB
auto-seeding feature through I/O fencing, GAB automatically seeds the
cluster even when some cluster nodes are unavailable.
See “Seeding a cluster using the GAB auto-seed parameter through I/O
fencing” on page 220.
Note: Forcing a manual seed at this point will allow the cluster to seed. However,
when the fencing module checks the GAB membership against the systems
that have keys on the coordinator disks, a mismatch will occur. vxfen will detect
a possible split brain condition, print a warning, and will not start. In turn, HAD
will not start. Administrative intervention is required.
See “Manual seeding of a cluster” on page 221.
applications, system capacity, and site preference details that you provide using
specific VCS attributes, and passes to the fencing driver to influence the result of
the race for coordination points. At the time of a race, the racer node adds up the
weights for all nodes in the local subcluster and in the leaving subcluster. If the
leaving subcluster has a higher sum (of node weights) then the racer for this
subcluster delays the race for the coordination points. Thus, the subcluster that has
critical systems or critical applications wins the race.
The preferred fencing feature uses the cluster-level attribute PreferredFencingPolicy
that takes the following race policy values:
■ Disabled (default): Preferred fencing is disabled.
When the PreferredFencingPolicy attribute value is set to Disabled, VCS sets
the count-based race policy and resets the node weight to 0.
■ System: Based on the capacity of the systems in a subcluster.
If one system is more powerful than others in terms of architecture, number of
CPUs, or memory, this system is given preference in the fencing race.
When the PreferredFencingPolicy attribute value is set to System, VCS
calculates node weight based on the system-level attribute FencingWeight.
See "System attributes" on page 707.
■ Group: Based on the higher priority applications in a subcluster.
The fencing driver takes into account the service groups that are online on the
nodes in any subcluster.
In the event of a network partition, I/O fencing determines whether the VCS
engine is running on all the nodes that participate in the race for coordination
points. If VCS engine is running on all the nodes, the node with higher priority
service groups is given preference during the fencing race.
However, if the VCS engine instance on a node with higher priority service
groups is killed for some reason, I/O fencing resets the preferred fencing node
weight for that node to zero. I/O fencing does not prefer that node for membership
arbitration. Instead, I/O fencing prefers a node that has an instance of VCS
engine running on it even if the node has lower-priority service groups.
Without synchronization between VCS engine and I/O fencing, a node with high
priority service groups but without VCS engine running on it may win the race.
Such a situation means that the service groups on the loser node cannot failover
to the surviving node.
When the PreferredFencingPolicy attribute value is set to Group, VCS calculates
node weight based on the group-level attribute Priority for those service groups
that are active.
See “Service group attributes” on page 680.
■ Site: Based on the priority assigned to sites in a subcluster.
The Site policy is available only if you set the cluster attribute SiteAware to 1.
VCS sets higher weights to the nodes in a higher priority site and lower weights
to the nodes in a lower priority site. The site with the highest cumulative node weight
becomes the preferred site. In a network partition between sites, VCS prefers
the subcluster with nodes from the preferred site in the race for coordination
points.
See “Site attributes” on page 742.
See “Enabling or disabling the preferred fencing policy” on page 321.
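A sketch of enabling the System policy from the command line follows; the weight values are illustrative, and the configuration must be in read-write mode before you modify the attributes:

```
haconf -makerw

# Select the race policy
haclus -modify PreferredFencingPolicy System

# Favor the more critical node in the fencing race
hasys -modify sys1 FencingWeight 100
hasys -modify sys2 FencingWeight 10

haconf -dump -makero
```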
[Figure: Customized fencing stack on a client cluster node — cpsadm, vxfenadm, customized scripts, and the vxfend daemon run in user space; the VXFEN driver, GAB, and LLT run in kernel space]
A user-level daemon, vxfend, interacts with the vxfen driver, which in turn interacts
with GAB to get node membership updates. Upon receiving membership updates,
vxfend invokes various scripts to race for the coordination point and fence off data
disks. The vxfend daemon manages various fencing agents. The customized fencing
scripts are located in the /opt/VRTSvcs/vxfen/bin/customized/cps directory.
The scripts that are involved include the following:
■ generate_snapshot.sh: Retrieves the SCSI IDs of the coordinator disks and/or
the UUIDs of the CP servers
CP server uses the UUID stored in /etc/VRTScps/db/current/cps_uuid.
See “About the cluster UUID” on page 34.
■ join_local_node.sh: Registers the keys with the coordinator disks or CP servers
[Figure: Campus cluster with two sites — Node 1 at Site 1 and Node 2 at Site 2,
connected through the public network and the SAN to coordinator disks #1, #2,
and #3.]
1. Odd number of cluster nodes in the current membership: One sub-cluster gets
majority upon a network split.
2. Even number of cluster nodes in the current membership: In case of an even
network split, both sub-clusters have an equal number of nodes. The partition
with the leader node is treated as the majority, and that partition survives.
In case of an uneven network split, where one sub-cluster has more nodes
than the other sub-clusters, the larger sub-cluster gets majority and survives.
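The two rules above can be sketched as a small shell function. This is an
illustration of the stated rules only, not Veritas code; whether the partition
holds the leader node is simplified to a flag:

```shell
# Sketch of the majority rule: a partition survives if it holds more than
# half the nodes, or exactly half while containing the leader node.
survives() {
  # survives <nodes_in_partition> <total_nodes> <has_leader: 0|1>
  local mine=$1 total=$2 leader=$3
  if [ $((2 * mine)) -gt "$total" ]; then
    echo yes
  elif [ $((2 * mine)) -eq "$total" ] && [ "$leader" -eq 1 ]; then
    echo yes
  else
    echo no
  fi
}
survives 2 3 0   # odd membership: the two-node partition has majority
survives 2 4 1   # even split: the partition with the leader survives
survives 2 4 0   # even split without the leader: the partition loses
```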
Note: Veritas recommends that you do not run any other applications on the single
node or SFHA cluster that is used to host the CP server.
Warning: The CP server database must not be edited directly and should only be
accessed using cpsadm(1M). Manipulating the database manually may lead to
undesirable results including system panics.
Figure 7-9 Single CP server with two coordinator disks for each application
cluster
[Figure: Multiple application clusters (clusters which run VCS, SFHA, SFCFS, or
SF Oracle RAC to provide high availability for applications) connect to a single
CP server over TCP/IP on the public network, and to their coordinator disks over
Fibre Channel.]
Figure 7-11 CPSSG group when the CP server is hosted on a single node VCS
cluster
[Figure: CPSSG group resources — vxcpserv, quorum, cpsvip, and cpsnic.]
Figure 7-12 displays a schematic of the CPSSG group and its dependencies when
the CP server is hosted on an SFHA cluster.
Figure 7-12 CPSSG group when the CP server is hosted on an SFHA cluster
[Figure: CPSSG group resources — vxcpserv, quorum, cpsvip, and cpsnic, plus
the storage resources cpsmount, cpsvol, and cpsdg.]
type Quorum (
        static str ArgList[] = { QuorumResources, Quorum, State }
        str QuorumResources[]
        int Quorum = 1
)
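In main.cf, a resource of this type might look like the following sketch; the
resource name and the QuorumResources value are illustrative:

```shell
// main.cf sketch: a Quorum resource inside the CPSSG group
Quorum quorum (
        QuorumResources = { cpsvip }
        Quorum = 1
        )
```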
Note: For secure communication using HTTPS, you do not need to establish
trust between the CP server and the application cluster.
The signed client certificate is used to establish the identity of the client. Once
the CP server authenticates the client, the client can issue the operational
commands that are limited to its own cluster.
■ Getting credentials from authentication broker:
The cpsadm command tries to get the existing credentials that are present on
the local node. The installer generates these credentials during fencing
configuration.
The vxcpserv process tries to get the existing credentials that are present on
the local node. The installer generates these credentials when it enables security.
■ Communication between CP server and VCS cluster nodes:
After the CP server establishes its credential and is up, it becomes ready to
receive data from the clients. After the cpsadm command obtains its credentials
and authenticates CP server credentials, cpsadm connects to the CP server.
Data is passed over to the CP server.
■ Validation:
On receiving data from a particular VCS cluster node, vxcpserv validates its
credentials. If validation fails, then the connection request data is rejected.
Note: Use of SCSI-3 PR protects against all elements in the IT environment that
might be trying to write illegally to storage, not only VCS-related elements.
File Description
/etc/default/vxfen This file stores the start and stop environment variables for I/O fencing:
■ VXFEN_START—Defines the startup behavior for the I/O fencing module after a system
reboot. Valid values include:
1—Indicates that I/O fencing is enabled to start up.
0—Indicates that I/O fencing is disabled to start up.
■ VXFEN_STOP—Defines the shutdown behavior for the I/O fencing module during a system
shutdown. Valid values include:
1—Indicates that I/O fencing is enabled to shut down.
0—Indicates that I/O fencing is disabled to shut down.
The installer sets the value of these variables to 1 at the end of VCS configuration.
If you manually configured VCS, you must make sure to set the values of these environment
variables to 1.
This file is not applicable for server-based fencing and majority-based fencing.
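After a standard installer-driven configuration, the file's contents might
resemble this sketch:

```shell
# Illustrative /etc/default/vxfen contents after VCS configuration:
VXFEN_START=1
VXFEN_STOP=1
```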
■ vxfen_mode
■ scsi3—For disk-based fencing.
■ customized—For server-based fencing.
■ disabled—To run the I/O fencing driver but not do any fencing operations.
■ majority—For fencing without the use of coordination points.
■ vxfen_mechanism
This parameter is applicable only for server-based fencing. Set the value as cps.
■ scsi3_disk_policy
■ dmp—Configure the vxfen module to use DMP devices
The disk policy is dmp by default. If you use iSCSI devices, you must set the disk policy
as dmp.
Note: You must use the same SCSI-3 disk policy on all the nodes.
■ List of coordination points
This list is required only for server-based fencing configuration.
Coordination points in server-based fencing can include coordinator disks, CP servers, or
both. If you use coordinator disks, you must create a coordinator disk group containing the
individual coordinator disks.
Refer to the sample file /etc/vxfen.d/vxfenmode_cps for more information on how to specify
the coordination points and multiple IP addresses for each CP server.
■ single_cp
This parameter is applicable for server-based fencing which uses a single highly available
CP server as its coordination point. Also applicable for when you use a coordinator disk
group with single disk.
■ autoseed_gab_timeout
This parameter enables GAB automatic seeding of the cluster even when some cluster
nodes are unavailable.
This feature is applicable for I/O fencing in SCSI3 and customized mode.
0—Turns the GAB auto-seed feature on. Any value greater than 0 indicates the number of
seconds that GAB must delay before it automatically seeds the cluster.
-1—Turns the GAB auto-seed feature off. This setting is the default.
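Putting the fields above together, a minimal /etc/vxfenmode sketch for disk-based
fencing might resemble the following (values are illustrative; see the sample
files under /etc/vxfen.d for the authoritative format):

```shell
# Minimal disk-based fencing configuration sketch
vxfen_mode=scsi3
scsi3_disk_policy=dmp
autoseed_gab_timeout=-1
```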
/etc/vxfentab When I/O fencing starts, the vxfen startup script creates this /etc/vxfentab file on each node.
The startup script uses the contents of the /etc/vxfendg and /etc/vxfenmode files. Any time a
system is rebooted, the fencing driver reinitializes the vxfentab file with the current list of all the
coordinator points.
Note: The /etc/vxfentab file is a generated file; do not modify this file.
For disk-based I/O fencing, the /etc/vxfentab file on each node contains a list of all paths to
each coordinator disk along with its unique disk identifier. A space separates the path and the
unique disk identifier. An example of the /etc/vxfentab file in a disk-based fencing
configuration on one node resembles the following:
■ DMP disk:
/dev/vx/rdmp/rhdisk75 HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F600A0B8000215A5D000006804E795D075
/dev/vx/rdmp/rhdisk76 HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F600A0B8000215A5D000006814E795D076
/dev/vx/rdmp/rhdisk77 HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F600A0B8000215A5D000006824E795D077
For server-based fencing, the /etc/vxfentab file also includes the security settings information.
For server-based fencing with single CP server, the /etc/vxfentab file also includes the single_cp
settings information.
■ GAB passes the membership change to the fencing module on each system in
the cluster.
The only system that is still running is System0.
■ System0 gains control of the coordinator disks by ejecting the key registered
by System1 from each coordinator disk.
The ejection takes place one by one, in the order of the coordinator disk’s serial
number.
■ When the fencing module on System0 successfully controls the coordinator
disks, HAD carries out any associated policy connected with the membership
change.
■ System1 is blocked from accessing the shared storage if that storage was
configured in a service group that is now taken over by System0 and imported.
[Figure: System0 and System1 connected to the coordinator disks.]
■ After LLT informs GAB of a heartbeat loss, the remaining systems wait out the
"GAB Stable Timeout" (5 seconds). In this example:
■ System0 and System1 agree that both of them do not see System2 and
System3
■ System2 and System3 agree that both of them do not see System0 and
System1
■ GAB marks the unreachable systems as DOWN, and excludes them from the
cluster membership. In this example:
■ GAB on System0 and System1 marks System2 and System3 as DOWN and
excludes them from cluster membership.
■ GAB on System2 and System3 marks System0 and System1 as DOWN and
excludes them from cluster membership.
■ GAB on each of the four systems passes the membership change to the vxfen
driver for membership arbitration. Each subcluster races for control of the
coordinator disks. In this example:
■ System0 has the lower LLT ID, and races on behalf of itself and System1.
■ System2 has the lower LLT ID, and races on behalf of itself and System3.
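The racer choice in these examples can be sketched as follows. This is a
hypothetical helper, not part of VCS; it simply picks the lowest LLT node ID in
a subcluster:

```shell
# Each subcluster's racer is its member with the lowest LLT node ID.
racer() { printf '%s\n' "$@" | sort -n | head -n 1; }
racer 0 1   # subcluster {System0, System1}: System0 races
racer 2 3   # subcluster {System2, System3}: System2 races
```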
■ GAB on each of the four systems also passes the membership change to HAD.
HAD waits for the result of the membership arbitration from the fencing module
before taking any further action.
■ If System0 is not able to reach a majority of the coordination points, then the
VxFEN driver will initiate a racer re-election from System0 to System1 and
System1 will initiate the race for the coordination points.
■ Assume System0 wins the race for the coordinator disks, and ejects the
registration keys of System2 and System3 off the disks. The result is as follows:
■ System0 wins the race for the coordinator disk. The fencing module on
System0 sends a WON_RACE to all other fencing modules in the current
cluster, in this case System1. On receiving a WON_RACE,
the fencing module on each system in turn communicates success to HAD.
System0 and System1 remain valid and current members of the cluster.
■ If System0 dies before it sends a WON_RACE to System1, then VxFEN will
initiate a racer re-election from System0 to System1 and System1 will initiate
the race for the coordination points.
System1, on winning a majority of the coordination points, remains a valid
and current member of the cluster, and the fencing module on System1 in turn
communicates success to HAD.
■ System2 loses the race for control of the coordinator disks and the fencing
module on System2 sends a LOST_RACE message. The fencing module
on System2 calls a kernel panic and the system restarts.
■ System3 sees another membership change from the kernel panic of System2.
Because that was the system that was racing for control of the coordinator
disks in this subcluster, System3 also panics.
■ HAD carries out any associated policy or recovery actions based on the
membership change.
■ System2 and System3 are blocked from accessing the shared storage (if the
shared storage was part of a service group that is now taken over by System0
or System1).
■ To rejoin System2 and System3 to the cluster, the administrator must do the
following:
Event: Both private networks fail.
Node A: Races for a majority of coordination points. If Node A wins the race for
the coordination points, Node A ejects Node B from the shared disks and
continues.
Node B: Races for a majority of coordination points. If Node B loses the race for
the coordination points, Node B panics and removes itself from the cluster.
Operator action: When Node B is ejected from the cluster, repair the private
networks before attempting to bring Node B back.

Event: Both private networks function again after the event above.
Node A: Continues to work.
Node B: Has crashed. It cannot start the database since it is unable to write to
the data disks.
Operator action: Restart Node B after the private networks are restored.

Event: Nodes A and B and the private networks lose power. The coordination
points and data disks retain power. Power returns to the nodes and they restart,
but the private networks still have no power.
Node A: Restarts and the I/O fencing driver (vxfen) detects that Node B is
registered with the coordination points. The driver does not see Node B listed
as a member of the cluster because the private networks are down. This causes
the I/O fencing device driver to prevent Node A from joining the cluster. The
Node A console displays:

Potentially a preexisting split brain.
Dropping out of the cluster.
Refer to the user documentation for steps
required to clear preexisting split brain.

Node B: Restarts and the I/O fencing driver (vxfen) detects that Node A is
registered with the coordination points. The driver does not see Node A listed
as a member of the cluster because the private networks are down. This causes
the I/O fencing device driver to prevent Node B from joining the cluster. The
Node B console displays the same message.
Operator action: Resolve the preexisting split-brain condition.
See “Fencing startup reports preexisting split-brain” on page 617.

Event: Node A crashes while Node B is down. Node B comes up and Node A is
still down.
Node A: Is crashed.
Node B: Restarts and detects that Node A is registered with the coordination
points. The driver does not see Node A listed as a member of the cluster. The
I/O fencing device driver prints a message on the console:

Potentially a preexisting split brain.
Dropping out of the cluster.
Refer to the user documentation for steps
required to clear preexisting split brain.

Operator action: Resolve the preexisting split-brain condition.
See “Fencing startup reports preexisting split-brain” on page 617.

Event: The disk array containing two of the three coordination points is powered
off. No node leaves the cluster membership.
Node A: Continues to operate as long as no nodes leave the cluster.
Node B: Continues to operate as long as no nodes leave the cluster.
Operator action: Power on the failed disk array so that a subsequent network
partition does not cause cluster shutdown, or replace the coordination points.

Event: The disk array containing two of the three coordination points is powered
off. Node B gracefully leaves the cluster and the disk array is still powered off.
Leaving gracefully implies a clean shutdown so that vxfen is properly
unconfigured.
Node A: Continues to operate in the cluster.
Node B: Has left the cluster.
Operator action: Power on the failed disk array so that a subsequent network
partition does not cause cluster shutdown, or replace the coordination points.
See “Replacing I/O fencing coordinator disks when the cluster is online” on
page 283.

Event: The disk array containing two of the three coordination points is powered
off. Node B abruptly crashes or a network partition occurs between Node A and
Node B, and the disk array is still powered off.
Node A: Races for a majority of coordination points. Node A fails because only
one of the three coordination points is available. Node A panics and removes
itself from the cluster.
Node B: Has left the cluster due to the crash or network partition.
Operator action: Power on the failed disk array and restart the I/O fencing driver
to enable Node A to register with all coordination points, or replace the
coordination points.
See “Replacing defective disks when the cluster is offline” on page 620.
the system is not down, and HAD does not attempt to restart the services on
another system.
In order for this differentiation to have meaning, it is important to ensure the cluster
interconnect links do not have a single point of failure, such as a network switch or
Ethernet card.
About jeopardy
In all cases, when LLT on a system no longer receives heartbeat messages from
another system on any of the configured LLT interfaces, GAB reports a change in
membership.
When a system has only one interconnect link remaining to the cluster, GAB can
no longer reliably discriminate between loss of a system and loss of the network.
The reliability of the system’s membership is considered at risk. A special
membership category takes effect in this situation, called a jeopardy membership.
This provides the best possible split-brain protection without membership arbitration
and SCSI-3 capable devices.
When a system is placed in jeopardy membership status, two actions occur if the
system loses the last interconnect link:
■ VCS places service groups running on the system in autodisabled state. A
service group in autodisabled state may fail over on a resource or group fault,
but cannot fail over on a system fault until the administrator manually clears
the autodisabled flag.
■ VCS operates the system as a single system cluster. Other systems in the
cluster are partitioned off in a separate cluster membership.
You can use the GAB registration monitoring feature to detect DDNA conditions.
See “About GAB client registration monitoring” on page 552.
[Figure: Four-system cluster on the public network; regular membership: 0, 1, 2, 3.]
[Figure: The same cluster; membership: 0, 1, 2, 3 with System2 in jeopardy
membership.]
The cluster is reconfigured. Systems 0, 1, and 3 are in the regular membership
and System2 is in a jeopardy membership. Service groups on System2 are
autodisabled.
All normal cluster operations continue, including normal failover of service groups
due to resource fault.
Systems 0, 1, and 3 recognize that System2 has faulted. The cluster is reformed.
Systems 0, 1, and 3 are in a regular membership. When System2 went into jeopardy
membership, service groups running on System2 were autodisabled. Even though
the system is now completely failed, no other system can assume ownership of
these service groups unless the system administrator manually clears the
AutoDisabled flag on the service groups that were running on System2.
However, after the flag is cleared, these service groups can be manually brought
online on other systems in the cluster.
Other systems send all cluster status traffic to System2 over the remaining private
link and use both private links for traffic between themselves. The low priority link
continues carrying the heartbeat signal only. No jeopardy condition is in effect
because two links remain to determine system failure.
Systems 0, 1, and 3 recognize that System2 has faulted. The cluster is reformed.
Systems 0, 1, and 3 are in a regular membership. VCS attempts to bring the
service groups on System2 that are configured for failover on system fault online
on another target system, if one exists.
Note: An exception to this is if the cluster uses fencing along with Cluster File
Systems (CFS) or Oracle Real Application Clusters (RAC).
The reason for this is that low priority links are usually shared public network
links. In the case where the main cluster interconnects fail, and the low priority
link was the only remaining link, large amounts of data would be moved to the
low priority link. This would potentially slow down the public network to
unacceptable performance. Without a low priority link configured, membership
arbitration would go into effect in this case, and some systems may be taken
down, but the remaining systems would continue to run without impact to the
public network.
It is not recommended to have a cluster with CFS or RAC without I/O fencing
configured.
■ Disable the console-abort sequence
Most UNIX systems provide a console-abort sequence that enables the
administrator to halt and continue the processor. Continuing operations after
the processor has stopped may corrupt data and is therefore unsupported by
VCS.
When a system is halted with the abort sequence, it stops producing heartbeats.
The other systems in the cluster consider the system failed and take over its
services. If the system is later enabled with another console sequence, it
continues writing to shared storage as before, even though its applications have
been restarted on other systems.
Veritas Technologies recommends disabling the console-abort sequence or
creating an alias to force the go command to perform a restart on systems not
running I/O fencing.
■ Veritas Technologies recommends at least three coordination points to configure
I/O fencing. You can use coordinator disks, CP servers, or a combination of
both.
Select the smallest possible LUNs for use as coordinator disks. No more than
three coordinator disks are needed in any configuration.
■ Do not reconnect the cluster interconnect after a network partition without shutting
down one side of the split cluster.
A common example of this happens during testing, where the administrator may
disconnect the cluster interconnect and create a network partition. Depending
on when the interconnect cables are reconnected, unexpected behavior can
occur.
Chapter 8
Administering I/O fencing
This chapter includes the following topics:
vxfendisk Generates the list of paths of disks in the disk group. This utility
requires that Veritas Volume Manager (VxVM) is installed and
configured.
The I/O fencing commands reside in the /opt/VRTS/bin folder. Make sure you added
this folder path to the PATH environment variable.
Refer to the corresponding manual page for more information on the commands.
Caution: The tests overwrite and destroy data on the disks, unless you use the
-r option.
■ The two nodes must have SSH (default) or rsh communication. If you use rsh,
launch the vxfentsthdw utility with the -n option.
After completing the testing process, you can remove permissions for
communication and restore public network connections.
■ To ensure both systems are connected to the same disk during the testing, you
can use the vxfenadm -i diskpath command to verify a disk’s serial number.
See “Verifying that the nodes see the same disk” on page 278.
■ For disk arrays with many disks, use the -m option to sample a few disks before
creating a disk group and using the -g option to test them all.
■ The utility indicates a disk can be used for I/O fencing with a message
resembling:
If the utility does not show a message stating a disk is ready, verification has
failed.
■ The -o option overrides disk size-related errors and the utility proceeds with
other tests. However, the disk may not be set up correctly as its size may be
smaller than the supported size. The supported disk size for data disks is 256
MB and for coordinator disks is 128 MB.
■ If the disk you intend to test has existing SCSI-3 registration keys, the test issues
a warning before proceeding.
-f filename: Utility tests system and device combinations listed in a text file.
Can be used with -r and -t options. For testing several disks.
See “Testing the shared disks listed in a file using the vxfentsthdw -f option”
on page 272.
-g disk_group: Utility tests all disk devices in a specified disk group. Can be
used with -r and -t options. For testing many disks and arrays of disks. Disk
groups may be temporarily created for testing purposes and destroyed
(ungrouped) after testing.
Note: To test the coordinator disk group using the vxfentsthdw utility, the utility
requires that the coordinator disk group, vxfencoorddg, be accessible from two
nodes.
# vxfentsthdw -c vxfencoorddg
2 Enter the nodes you are using to test the coordinator disks:
3 Review the output of the testing process for both nodes for all disks in the
coordinator disk group. Each disk should display output that resembles:
4 After you test all disks in the disk group, the vxfencoorddg disk group is ready
for use.
# vxfentsthdw -rm
When invoked with the -r option, the utility does not use tests that write to the
disks. Therefore, it does not test the disks for all of the usual conditions of use.
If the failure is due to a bad disk, remove and replace it. The vxfentsthdw utility
indicates a disk can be used for I/O fencing with a message resembling:
Note: For A/P arrays, run the vxfentsthdw command only on active enabled paths.
# vxfentsthdw [-n]
3 After reviewing the overview and warning that the tests overwrite data on the
disks, confirm to continue the process and enter the node names.
4 Enter the names of the disks you are checking. For each node, the disk may
be known by the same name:
If the serial numbers of the disks are not identical, then the test terminates.
5 Review the output as the utility performs the checks and reports its activities.
6 If a disk is ready for I/O fencing on each node, the utility reports success:
7 Run the vxfentsthdw utility for each disk you intend to verify.
Testing the shared disks listed in a file using the vxfentsthdw -f option
Use the -f option to test disks that are listed in a text file. Review the following
example procedure.
To test the shared disks listed in a file
1 Create a text file disks_test to test two disks shared by systems sys1 and
sys2 that might resemble:
where the first disk is listed in the first line and is seen by sys1 as
/dev/rhdisk75 and by sys2 as /dev/rhdisk77. The other disk, in the second
line, is seen as /dev/rhdisk76 from sys1 and /dev/rhdisk78 from sys2.
Typically, the list of disks could be extensive.
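For instance, a disks_test file matching that description might contain the
following lines (the exact node-name/path layout is an assumption; check the
vxfentsthdw(1M) manual page):

```shell
sys1 /dev/rhdisk75 sys2 /dev/rhdisk77
sys1 /dev/rhdisk76 sys2 /dev/rhdisk78
```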
2 To test the disks, enter the following command:
# vxfentsthdw -f disks_test
The utility reports the test results one disk at a time, just as for the -m option.
Testing all the disks in a disk group using the vxfentsthdw -g option
Use the -g option to test all disks within a disk group. For example, you create a
temporary disk group consisting of all disks in a disk array and test the group.
Note: Do not import the test disk group as shared; that is, do not use the -s option
with the vxdg import command.
After testing, destroy the disk group and put the disks into disk groups as you need.
# vxfentsthdw -g test_disks_dg
There are Veritas I/O fencing keys on the disk. Please make sure
that I/O fencing is shut down on all nodes of the cluster before
continuing.
THIS SCRIPT CAN ONLY BE USED IF THERE ARE NO OTHER ACTIVE NODES
IN THE CLUSTER! VERIFY ALL OTHER NODES ARE POWERED OFF OR
INCAPABLE OF ACCESSING SHARED STORAGE.
The utility prompts you with a warning before proceeding. You may continue as
long as I/O fencing is not yet configured.
-s read the keys on a disk and display the keys in numeric, character, and
node format
Note: The -g and -G options are deprecated. Use the -s option.
-r read reservations
-x remove registrations
Refer to the vxfenadm(1M) manual page for a complete list of the command options.
Byte:   0  1    2  3  4  5    6  7
Value:  V  F    cID 0x        nID 0x
where:
■ VF is the unique identifier that carves out a namespace for the keys (consumes
two bytes)
■ cID 0x is the LLT cluster ID in hexadecimal (consumes four bytes)
■ nID 0x is the LLT node ID in hexadecimal (consumes two bytes)
The vxfen driver uses this key format in sybase mode of I/O fencing.
The key format of the data disks that are configured as failover disk groups under
VCS is as follows:
Byte:   0        1  2  3    4  5  6  7
Value:  A+nID    V  C  S
The key format of the data disks that are configured as parallel disk groups under
Cluster Volume Manager (CVM) is as follows:
Byte:   0        1  2  3    4  5  6  7
Value:  A+nID    P  G  R    DGcount
where DGcount is the count of disk groups in the configuration (consumes four
bytes).
By default, CVM uses a unique fencing key for each disk group. However, some
arrays have a restriction on the total number of unique keys that can be registered.
In such cases, you can use the same_key_for_alldgs tunable parameter to change
the default behavior. The default value of the parameter is off. If your configuration
hits the storage array limit on total number of unique keys, you can change the
value to on using the vxdefault command as follows:
# vxdefault set same_key_for_alldgs on
If the tunable is changed to on, all subsequent keys that the CVM generates on
disk group imports or creates have '0000' as their last four bytes (DGcount is 0).
You must deport and re-import all the disk groups that are already imported for the
changed value of the same_key_for_alldgs tunable to take effect.
The variables such as disk_7, disk_8, and disk_9 in the following procedure
represent the disk names in your setup.
To display the I/O fencing registration keys
1 To display the key for the disks, run the following command:
# vxfenadm -s disk_name
For example:
■ To display the key for the coordinator disk /dev/rhdisk75 from the system
with node ID 1, enter the following command:
# vxfenadm -s /dev/rhdisk75
key[1]:
[Numeric Format]: 86,70,68,69,69,68,48,48
[Character Format]: VFDEED00
* [Node Format]: Cluster ID: 57069 Node ID: 0 Node Name: sys1
The -s option of vxfenadm displays all eight bytes of a key value in three
formats. In the numeric format,
■ The first two bytes represent the identifier VF and contain the ASCII
values 86 and 70.
■ The next four bytes contain the ASCII values of the cluster ID 57069
encoded in hex (0xDEED): 68, 69, 69, 68.
■ The remaining bytes contain the ASCII values of the node ID 0 (0x00):
48, 48. Node ID 1 would be 01 and node ID 10 would be 0A.
An asterisk before the Node Format indicates that the vxfenadm command
is run from the node of a cluster where LLT is configured and is running.
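As an illustration of the byte layout above, the following sketch (a hypothetical
helper, not a Veritas tool) decodes the numeric key into its character format,
cluster ID, and node ID:

```shell
# Decode the numeric key shown above: ASCII codes -> characters,
# then split into the VF prefix, cluster ID, and node ID fields.
key_numeric="86,70,68,69,69,68,48,48"
chars=$(echo "$key_numeric" | tr ',' ' ' |
        awk '{ for (i = 1; i <= NF; i++) printf "%c", $i }')
echo "Character format: $chars"        # VFDEED00
cid_hex=${chars#VF}; cid_hex=${cid_hex%??}   # bytes 2-5: cluster ID in hex
nid_hex=${chars#VF????}                      # bytes 6-7: node ID in hex
cid=$(printf '%d' "0x$cid_hex")
nid=$(printf '%d' "0x$nid_hex")
echo "Cluster ID: $cid"                # 57069
echo "Node ID: $nid"                   # 0
```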
■ To display the keys on a CVM parallel disk group:
# vxfenadm -s /dev/vx/rdmp/disk_7
# vxfenadm -s /dev/vx/rdmp/disk_8
2 To display the keys that are registered in all the disks specified in a disk file:
For example:
To display all the keys on coordinator disks:
You can verify the cluster ID using the lltstat -C command, and the node
ID using the lltstat -N command. For example:
# lltstat -C
57069
If the disk has keys that do not belong to a specific cluster, then the vxfenadm
command cannot look up the node name for the node ID, and hence prints the
node name as unknown. For example:
For disks with arbitrary format of keys, the vxfenadm command prints all the
fields as unknown. For example:
# vxfenadm -i /dev/rhdisk75
Vendor id : EMC
Product id : SYMMETRIX
Revision : 5567
Serial Number : 42031000a
The same serial number information should appear when you enter the
equivalent command on node B using the /dev/rhdisk76 path.
On a disk from another manufacturer, Hitachi Data Systems, the output is
different and may resemble:
# vxfenadm -i /dev/rhdisk76
Vendor id : HITACHI
Product id : OPEN-3
Revision : 0117
Serial Number : 0401EB6F0002
Note: You can use the utility to remove the registration keys and the registrations
(reservations) from the set of coordinator disks for any cluster you specify in the
command, but you can only clear registrations of your current cluster from the CP
servers. Also, you may experience delays while clearing registrations on the
coordination point servers because the utility tries to establish a network connection
with the IP addresses used by the coordination point servers. The delay may occur
because of a network issue or if the IP address is not reachable or is incorrect.
For any issues you encounter with the vxfenclearpre utility, refer to the log file at /var/VRTSvcs/log/vxfen/vxfen.log.
See “Issues during fencing startup on VCS cluster nodes set up for server-based
fencing” on page 624.
# hastop -all
2 Make sure that port h is closed on all the nodes. Run the following command on each node to verify that port h is closed:
# gabconfig -a
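A sketch of this check, run against saved gabconfig output (the sample membership lines below are illustrative; on a live node you would pipe `gabconfig -a` directly):

```shell
# Sketch: port h (the VCS engine) must not appear in `gabconfig -a`
# output once VCS is stopped; the sample output below is illustrative.
gab_out='GAB Port Memberships
===============================================================
Port a gen a36e0003 membership 01
Port b gen a36e0006 membership 01'
port_h_open() { printf '%s\n' "$gab_out" | grep -q '^Port h '; }
port_h_open && echo "port h still open" || echo "port h closed"
```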
# /etc/init.d/vxfen.rc stop
4 If you have any applications that run outside of VCS control that have access
to the shared storage, then shut down all other nodes in the cluster that have
access to the shared storage. This prevents data corruption.
5 Start the vxfenclearpre script:
# /opt/VRTSvcs/vxfen/bin/vxfenclearpre
Administering I/O fencing 281
About the vxfenclearpre utility
6 Read the script’s introduction and warning. Then, you can choose to let the
script run.
The script cleans up the disks and displays the following status messages.
...................
[10.209.80.194]:50001: Cleared all registrations
[10.209.75.118]:443: Cleared all registrations
Cleaning up the data disks for all shared disk groups ...
You can retry starting the fencing module. To restart the whole product, you might want to reboot the system.
# /etc/init.d/vxfen.rc start
# hastart
Administering I/O fencing 282
About the vxfenswap utility
Warning: The cluster might panic if any node leaves the cluster membership before
the vxfenswap script replaces the set of coordinator disks.
3 Estimate the number of coordination points you plan to use as part of the
fencing configuration.
4 Set the value of the FaultTolerance attribute to 0.
Note: It is necessary to set the value to 0 because later in the procedure you
need to reset the value of this attribute to a value that is lower than the number
of coordination points. This ensures that the Coordpoint Agent does not fault.
Note: Make a note of the attribute value before you proceed to the next step.
After migration, when you re-enable the attribute you want to set it to the same
value.
You can also run the hares -display coordpoint command to find out whether the LevelTwoMonitorFreq value is set.
# vxfenadm -d
where:
-t specifies that the disk group is imported only until the node restarts.
-f specifies that the import is to be done forcibly, which is necessary if one or
more disks is not accessible.
-C specifies that any import locks are removed.
9 If your setup uses the required VRTSvxvm version, skip to step 10. You need not set coordinator=off to add or remove disks. For other VxVM versions, perform this step:
Where version is the specific release version.
Turn off the coordinator attribute value for the coordinator disk group.
10 To remove disks from the coordinator disk group, use the VxVM disk
administrator utility vxdiskadm.
11 Perform the following steps to add new disks to the coordinator disk group:
■ Add new disks to the node.
■ Initialize the new disks as VxVM disks.
■ Check the disks for I/O fencing compliance.
■ Add the new disks to the coordinator disk group and set the coordinator
attribute value as "on" for the coordinator disk group.
See the Cluster Server Installation Guide for detailed instructions.
Note that though the disk group content changes, the I/O fencing configuration remains in the same state.
12 From one node, start the vxfenswap utility. You must specify the disk group to
the utility.
The utility performs the following tasks:
■ Backs up the existing /etc/vxfentab file.
■ Creates a test file /etc/vxfentab.test for the disk group that is modified
on each node.
■ Reads the disk group you specified in the vxfenswap command and adds
the disk group to the /etc/vxfentab.test file on each node.
■ Verifies that the serial numbers of the new disks are identical on all the nodes. The script terminates if the check fails.
■ Verifies that the new disks can support I/O fencing on each node.
13 If the disk verification passes, the utility reports success and asks if you want
to commit the new set of coordinator disks.
14 Confirm whether you want to clear the keys on the coordination points and
proceed with the vxfenswap operation.
15 Review the message that the utility displays and confirm that you want to
commit the new set of coordinator disks. Else skip to step 16.
3 Estimate the number of coordination points you plan to use as part of the
fencing configuration.
4 Set the value of the FaultTolerance attribute to 0.
Note: It is necessary to set the value to 0 because later in the procedure you
need to reset the value of this attribute to a value that is lower than the number
of coordination points. This ensures that the Coordpoint Agent does not fault.
Note: Make a note of the attribute value before you proceed to the next step.
After migration, when you re-enable the attribute you want to set it to the same
value.
# haconf -makerw
# vxfenadm -d
8 Find the name of the current coordinator disk group (typically vxfencoorddg)
that is in the /etc/vxfendg file.
# cat /etc/vxfendg
vxfencoorddg
9 Find the alternative disk groups available to replace the current coordinator
disk group.
10 Validate the new disk group for I/O fencing compliance. Run the following
command:
# vxfentsthdw -c vxfendg
See “Testing the coordinator disk group using the -c option of vxfentsthdw”
on page 268.
11 If the new disk group is not already deported, run the following command to
deport the disk group:
14 If the disk verification passes, the utility reports success and asks if you want
to replace the coordinator disk group.
15 Confirm whether you want to clear the keys on the coordination points and
proceed with the vxfenswap operation.
16 Review the message that the utility displays and confirm that you want to
replace the coordinator disk group. Else skip to step 21.
18 Set the coordinator attribute value as "on" for the new coordinator disk group.
# vxdg -g vxfendg set coordinator=on
Set the coordinator attribute value as "off" for the old disk group.
The swap operation for the coordinator disk group is complete now.
21 If you do not want to replace the coordinator disk group, answer n at the prompt.
The vxfenswap utility rolls back any changes to the coordinator disk group.
# haconf -makerw
# vxfenadm -d
# cat /etc/vxfendg
vxfencoorddg
# vxfenconfig -l
I/O Fencing Configuration Information:
======================================
Count : 1
Disk List
Disk Name Major Minor Serial Number Policy
6 When the primary site comes online, start the vxfenswap utility on any node
in the cluster:
# vxfenconfig -l
I/O Fencing Configuration Information:
======================================
Single Disk Flag : 0
Count : 3
Disk List
Disk Name Major Minor Serial Number Policy
# vxfenadm -d
3 Run the following command to view the coordinator disks that do not have
keys:
5 On any node, run the following command to start the vxfenswap utility:
6 Verify that the keys are atomically placed on the coordinator disks.
cpsadm operation    Operator  Admin
add_cluster         –         ✓
rm_clus             –         ✓
add_node            ✓         ✓
rm_node             ✓         ✓
add_user            –         ✓
rm_user             –         ✓
add_clus_to_user    –         ✓
rm_clus_from_user   –         ✓
reg_node            ✓         ✓
unreg_node          ✓         ✓
preempt_node        ✓         ✓
list_membership     ✓         ✓
list_nodes          ✓         ✓
list_users          ✓         ✓
halt_cps            –         ✓
db_snapshot         –         ✓
ping_cps            ✓         ✓
client_preupgrade   ✓         ✓
server_preupgrade   ✓         ✓
list_protocols      ✓         ✓
list_version        ✓         ✓
list_ports          –         ✓
add_port            –         ✓
rm_port             –         ✓
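The privilege split above can be encoded as a simple lookup. This sketch is a model of the table, not CP server code; the helper name and the list variable are hypothetical:

```shell
# Sketch: operations from the table that require CP server admin
# privileges; everything else is available to an operator as well
# (a model of the table above, not actual cpsadm behavior).
admin_only="add_cluster rm_clus add_user rm_user add_clus_to_user \
rm_clus_from_user halt_cps db_snapshot list_ports add_port rm_port"
operator_can() {
  case " $admin_only " in
    *" $1 "*) return 1 ;;   # admin-only operation
    *)        return 0 ;;   # operator may run it
  esac
}
operator_can reg_node && echo "operator: reg_node allowed"
operator_can halt_cps || echo "operator: halt_cps denied"
```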
Cloning a CP server
Cloning a CP server greatly reduces the time and effort of assigning a new CP server to your cluster. Because cloning replicates the existing CP server, you do not have to run the vxfenswap utility on each node connected to the CP server, which makes CP server maintenance much simpler.
In this procedure, the following terminology is used to refer to the existing and
cloned CP servers:
■ cps1: Indicates the existing CP server
■ cps2: Indicates the clone of cps1
Prerequisite: Before cloning a CP server, make sure that the existing CP server cps1 is up and running and that fencing is configured on at least one client of cps1, in order to verify that the client can talk to cps2.
6 On the target system for cps2, stop VCS and perform the following in main.cf:
■ Replace all the instances of system name with cps2 system name.
■ Change the device attribute under NIC and IP resources to cps2 values.
9 Start VCS on the target system for cps2. The CP server service group must
come online on target system for cps2 and the system becomes a clone of
cps1.
# /opt/VRTSvcs/bin/hastart
10 Run the following command on cps2 to verify that the clone was successful and is running well.
11 Verify that the Coordination Point agent service group is Online on the client clusters.
# /opt/VRTSvcs/bin/hares -state
Resource Attribute System Value
RES_phantom_vxfen State system1 ONLINE
coordpoint State system1 ONLINE
# /opt/VRTSvcs/bin/hagrp -state
Group Attribute System Value
vxfen State system1 |ONLINE|
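As a sketch, the ONLINE check can be scripted against the saved `hares -state` output shown above (the sample lines are copied from the verification step; the variable names are illustrative):

```shell
# Sketch: extract the coordpoint resource state from saved
# `hares -state` output and assert that it is ONLINE.
hares_out='RES_phantom_vxfen State system1 ONLINE
coordpoint State system1 ONLINE'
cp_state=$(printf '%s\n' "$hares_out" | awk '$1=="coordpoint" {print $4}')
[ "$cp_state" = "ONLINE" ] && echo "Coordination Point agent is ONLINE"
```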
Note: The cloned cps2 system must be in the same subnet as cps1. Make sure that the port (default 443) used for CP server configuration is free for cps2.
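One way to check the port before bringing up the clone; this sketch scans saved `netstat -an` output (the sample lines and the grep pattern assume AIX-style formatting, and the helper name is hypothetical):

```shell
# Sketch: verify the CP server port (default 443) is free on the cps2
# target by scanning saved `netstat -an` output (sample lines assume
# AIX-style formatting; port_free is a hypothetical helper).
net_out='tcp4  0  0  *.22   *.*  LISTEN
tcp4  0  0  *.111  *.*  LISTEN'
port_free() { ! printf '%s\n' "$net_out" | grep -q "[.:]$1  *.*LISTEN"; }
port_free 443 && echo "port 443 free for cps2"
```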
host Hostname
■ To remove a user
Type the following command:
domain_type The domain type, for example vx, unixpwd, nis, etc.
Preempting a node
Use the following command to preempt a node.
To preempt a node
◆ Type the following command:
■ To unregister a node
Type the following command:
domain_type The domain type, for example vx, unixpwd, nis, etc.
# /opt/VRTScps/bin/vxcpserv
You can use the cpsadm command if you want to add or remove virtual IP addresses and ports after your initial CP server setup. However, the virtual IP addresses and ports that you add or remove do not change the vxcps.conf file, so these changes do not persist across CP server restarts.
See the cpsadm(1m) manual page for more details.
To add and remove virtual IP addresses and ports for CP servers at run-time
1 To list all the ports that the CP server is configured to listen on, run the following
command:
If the CP server has not been able to successfully listen on a given port at least
once, then the Connect History in the output shows never. If the IP addresses
are down when the vxcpserv process starts, vxcpserv binds to the IP addresses
when the addresses come up later. For example:
CP server does not actively monitor port health. If the CP server successfully
listens on any IP:port at least once, then the Connect History for that IP:port
shows once even if the port goes down later during CP server's lifetime. You
can obtain the latest status of the IP address from the corresponding IP resource
state that is configured under VCS.
2 To add a new port (IP:port) for the CP server without restarting the CP server,
run the following command:
For example:
3 To stop the CP server from listening on a port (IP:port) without restarting the
CP server, run the following command:
For example:
Where DATE is the snapshot creation date and TIME is the snapshot creation time.
Note: If multiple clusters share the same CP server, you must perform this
replacement procedure in each cluster.
You can use the vxfenswap utility to replace coordination points when fencing is
running in customized mode in an online cluster, with vxfen_mechanism=cps. The
utility also supports migration from server-based fencing (vxfen_mode=customized)
to disk-based fencing (vxfen_mode=scsi3) and vice versa in an online cluster.
However, if the VCS cluster has fencing disabled (vxfen_mode=disabled), then you
must take the cluster offline to configure disk-based or server-based fencing.
See “Deployment and migration scenarios for CP server” on page 308.
You can cancel the coordination point replacement operation at any time using the
vxfenswap -a cancel command.
If the VCS cluster nodes are not present here, prepare the new CP server(s)
for use by the VCS cluster.
See the Cluster Server Installation Guide for instructions.
2 Ensure that fencing is running on the cluster using the old set of coordination
points and in customized mode.
For example, enter the following command:
# vxfenadm -d
3 Create a new /etc/vxfenmode.test file on each VCS cluster node with the
fencing configuration changes such as the CP server information.
Review and if necessary, update the vxfenmode parameters for security, the
coordination points, and if applicable to your configuration, vxfendg.
Refer to the text information within the vxfenmode file for additional information
about these parameters and their new possible values.
4 From one of the nodes of the cluster, run the vxfenswap utility.
The vxfenswap utility requires a secure ssh connection to all the cluster nodes. Use -n to use rsh instead of the default ssh. Use -p <protocol>, where <protocol> can be ssh, rsh, or hacli.
5 Review the message that the utility displays and confirm whether you want to
commit the change.
■ If you do not want to commit the new fencing configuration changes, press
Enter or answer n at the prompt.
# vxfenconfig -l
If the VCS cluster nodes are not present here, prepare the new CP server(s)
for use by the VCS cluster.
See the Cluster Server Installation Guide for instructions.
2 Ensure that fencing is running on the cluster in customized mode using the
coordination points mentioned in the /etc/vxfenmode file.
If the /etc/vxfenmode.test file exists, ensure that the information in it and the /etc/vxfenmode file are the same. Otherwise, the vxfenswap utility uses the information listed in the /etc/vxfenmode.test file.
For example, enter the following command:
# vxfenadm -d
================================
Fencing Protocol Version: 201
Fencing Mode: CUSTOMIZED
Cluster Members:
* 0 (sys1)
1 (sys2)
RFSM State Information:
node 0 in state 8 (running)
node 1 in state 8 (running)
# vxfenconfig -l
5 Run the vxfenswap utility from one of the nodes of the cluster.
The vxfenswap utility requires a secure ssh connection to all the cluster nodes. Use -n to use rsh instead of the default ssh.
For example:
# vxfenswap [-n]
6 You are then prompted to commit the change. Enter y for yes.
The command returns a confirmation of successful coordination point
replacement.
7 Confirm the successful execution of the vxfenswap utility. If the CP agent is configured, it should report ONLINE as it succeeds in finding the registrations on the coordination points. The registrations on the CP server and coordinator disks can be viewed using the cpsadm and vxfenadm utilities, respectively.
Note that a running online coordination point refreshment operation can be
canceled at any time using the command:
# vxfenswap -a cancel
■ Set up a CP server for a VCS cluster for the first time
CP server: New CP server
VCS cluster: New VCS cluster using the CP server as coordination point
On the designated CP server, perform the following tasks:
1 Prepare to configure the new CP server.
2 Configure the new CP server.
3 Prepare the new CP server for use by the VCS cluster.

■ Add a new VCS cluster to an existing and operational CP server
CP server: Existing and operational CP server
VCS cluster: New VCS cluster
On the VCS cluster nodes, configure server-based I/O fencing.
See the Cluster Server Installation Guide for the procedures.

■ Replace the coordination point from an existing CP server with a new CP server
CP server: New CP server
VCS cluster: Existing VCS cluster using the CP server as coordination point
On the designated CP server, perform the following tasks:
1 Prepare to configure the new CP server.
2 Configure the new CP server.

■ Replace the coordination point from an existing CP server with an operational CP server
CP server: Operational CP server
VCS cluster: Existing VCS cluster using the CP server as coordination point
On the designated CP server, prepare to configure the new CP server manually.
See the Cluster Server Installation Guide for the procedures.
On a node in the VCS cluster, run the vxfenswap command to replace the CP server:
See “Replacing coordination points for server-based fencing in an online cluster” on page 304.
On each node of the VCS cluster, stop VCS and the fencing driver:
# hastop -local
# /etc/init.d/vxfen.rc stop

■ Enable fencing in a VCS cluster with a new CP server coordination point
CP server: New CP server
VCS cluster: Existing VCS cluster with fencing configured in scsi3 mode
On the designated CP server, perform the following tasks:
1 Prepare to configure the new CP server.
2 Configure the new CP server.
On each node of the VCS cluster, stop VCS and the fencing driver:
# hastop -local
# /etc/init.d/vxfen.rc stop

■ Enable fencing in a VCS cluster with an operational CP server coordination point
CP server: Operational CP server
VCS cluster: Existing VCS cluster with fencing configured in disabled mode
On the designated CP server, prepare to configure the new CP server.
See the Cluster Server Installation Guide for this procedure.
Based on whether the cluster is online or offline, perform the following procedures.
On each node of the VCS cluster, stop VCS and the fencing driver:
# hastop -local
# /etc/init.d/vxfen.rc stop

■ Refresh registrations of VCS cluster nodes on coordination points (CP servers/coordinator disks) without incurring application downtime
CP server: Operational CP server
VCS cluster: Existing VCS cluster using the CP server as coordination point
On the VCS cluster, run the vxfenswap command to refresh the keys on the CP server:
See “Refreshing registration keys on the coordination points for server-based fencing” on page 306.
Administering I/O fencing 314
About migrating between disk-based and server-based fencing configurations
Warning: The cluster might panic if any node leaves the cluster membership before
the coordination points migration operation completes.
Migrating manually
3 Make sure that the VCS cluster is online and uses either disk-based or
server-based fencing.
# vxfenadm -d
4 Copy the response file to one of the cluster systems where you want to
configure I/O fencing.
Review the sample files to reconfigure I/O fencing.
See “Sample response file to migrate from disk-based to server-based fencing”
on page 317.
See “Sample response file to migrate from server-based fencing to disk-based
fencing” on page 318.
See “Sample response file to migrate from single CP server-based fencing to
server-based fencing” on page 318.
5 Edit the values of the response file variables as necessary.
See “Response file variables to migrate between fencing configurations”
on page 318.
6 Start the I/O fencing reconfiguration from the system to which you copied the
response file. For example:
$CFG{disks_to_remove}=[ qw(emc_clariion0_62) ];
$CFG{fencing_cps}=[ qw(10.198.89.251)];
$CFG{fencing_cps_ports}{"10.198.89.204"}=14250;
$CFG{fencing_cps_ports}{"10.198.89.251"}=14250;
$CFG{fencing_cps_vips}{"10.198.89.251"}=[ qw(10.198.89.251 10.198.89.204) ];
$CFG{fencing_ncp}=1;
$CFG{fencing_option}=4;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(sys1 sys2) ];
$CFG{vcs_clusterid}=22462;
$CFG{vcs_clustername}="clus1";
$CFG{fencing_disks}=[ qw(emc_clariion0_66) ];
$CFG{fencing_mode}="scsi3";
$CFG{fencing_ncp}=1;
$CFG{fencing_ndisks}=1;
$CFG{fencing_option}=4;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="VCS60";
$CFG{servers_to_remove}=[ qw([10.198.89.251]:14250) ];
$CFG{systems}=[ qw(sys1 sys2) ];
$CFG{vcs_clusterid}=42076;
$CFG{vcs_clustername}="clus1";
CFG {fencing_scsi3_disk_policy} Scalar. Specifies the disk policy that the disks must use.
Enabling or disabling the preferred fencing policy
# vxfenadm -d
2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
# haconf -makerw
■ Set the value of the system-level attribute FencingWeight for each node in
the cluster.
For example, in a two-node cluster, where you want to assign sys1 five
times more weight compared to sys2, run the following commands:
# vxfenconfig -a
# haconf -makerw
■ Set the value of the group-level attribute Priority for each service group.
For example, run the following command:
Make sure that you assign a parent service group an equal or lower priority than its child service group. If the parent and the child service groups are hosted in different subclusters, the subcluster that hosts the child service group gets higher preference.
■ Save the VCS configuration.
# haconf -makerw
■ Set the value of the site-level attribute Preference for each site.
For example,
# hasite -modify Pune Preference 2
6 To view the fencing node weights that are currently set in the fencing driver,
run the following command:
# vxfenconfig -a
# vxfenadm -d
2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
3 To disable preferred fencing and use the default race policy, set the value of
the cluster-level attribute PreferredFencingPolicy as Disabled.
# haconf -makerw
# haclus -modify PreferredFencingPolicy Disabled
# haconf -dump -makero
When resource R2 faults, the fault is propagated up the dependency tree to resource
R1. When the critical resource R1 goes offline, VCS must fault the service group
and fail it over elsewhere in the cluster. VCS takes other resources in the service
group offline in the order of their dependencies. After taking resources R3, R4, and
R5 offline, VCS fails over the service group to another node.
When resource R2 faults, the engine propagates the failure up the dependency tree. Neither resource R1 nor resource R2 is critical, so the fault does not result in the tree going offline or in service group failover.
Figure 9-4 Scenario 3: Resource with critical parent fails to come online
VCS calls the Clean function for resource R2 and propagates the fault up the
dependency tree. Resource R1 is set to critical, so the service group is taken offline
and failed over to another node in the cluster.
Controlling VCS behavior 327
About controlling VCS behavior at the service group level
VCS tolerates the fault of R2, R3, and R4 as the minimum criteria of 2 is met. When
R5 faults, only R6 is online and the number of online child resources goes below
the minimum criteria of 2 and the fault is propagated up the dependency tree to the
parent resource R1 and service group is taken offline.
For example, NIC is a persistent resource. In some cases, when a system boots
and VCS starts, VCS probes all resources on the system. When VCS probes the
NIC resource, the resource may not be online because the networking is not up
and fully operational. In such situations, VCS marks the NIC resource as faulted,
and does not bring the service group online. However, when the NIC resource
becomes online and if AutoRestart is enabled, the service group is brought online.
Table 9-1 Possible values of the AutoFailover attribute and their description
AutoFailover Description
attribute value
0 VCS does not fail over the service group when a system or service
group faults.
2 VCS automatically fails over the service group only if another suitable
node exists in the same system zone or sites.
If a suitable node does not exist in the same system zone or sites, VCS
brings the service group offline, and generates an alert for
administrator’s intervention. You can manually bring the group online
using the hagrp -online command.
Note: If SystemZones attribute is not defined, the failover behavior is
similar to AutoFailOver=1.
FailOverPolicy Description
attribute value
Priority VCS selects the system with the lowest priority as the failover
target. The Priority failover policy is ideal for simple two-node
clusters or small clusters with few service groups.
RoundRobin VCS selects the system running the fewest service groups as the failover target. This policy is ideal for large clusters running many service groups with similar server load characteristics (for example, similar databases or applications).
FailOverPolicy Description
attribute value
This policy can be set only if the cluster attribute Statistics is set
to Enabled. The service group attribute Load is defined in terms
of CPU, Memory, and Swap in absolute units. The unit can be of
the following values:
About AdaptiveHA
AdaptiveHA enables VCS to dynamically select the biggest available target system
to fail over an application when you set the service group attribute FailOverPolicy
to BiggestAvailable. VCS monitors and forecasts the available capacity of systems
in terms of CPU, memory, and swap to select the biggest available system.
After you complete the above steps, AdaptiveHA is enabled. The service group
follows the BiggestAvailable policy during a failover.
The following table provides information on various attributes and the values they
can take to enable AdaptiveHA:
Table 9-3
Use-case Attribute values to be set
To turn off host metering Set the cluster attribute Statistics to Disabled.
To turn on host metering and forecasting Set the cluster attribute Statistics to Enabled.
To enable hagrp -forecast CLI option Set the cluster attribute Statistics to Enabled
and also set the service group attribute Load
based on your application’s CPU, Mem or
Swap usage.
To check the meters supported for any host Verify the value of cluster attribute
or cluster node HostAvailableMeters.
To enable host metering, forecast, and policy Perform the following actions:
decisions using forecast
■ Set the cluster attribute Statistics to
Enabled
To change metering or forecast frequency Set the MeterInterval and ForecastCycle keys
in the cluster attribute MeterControl
accordingly.
To check the available capacity and its Use the following commands to check values
forecast for available capacity and its forecast:
To check if the metering and forecast is The metering value is up-to-date when the
up-to-date difference between GlobalCounter and
AvailableGC is less than or equal to 24.
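The freshness rule above can be expressed directly. The counter values below are sample numbers, not output from a live cluster, and the helper name is illustrative:

```shell
# Sketch: metering is considered up-to-date when the difference
# between GlobalCounter and AvailableGC is at most 24 (sample values).
global_counter=1250
available_gc=1232
metering_fresh() { [ $(( $1 - $2 )) -le 24 ]; }
metering_fresh "$global_counter" "$available_gc" && echo "metering up-to-date"
```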
Note: The MeterWeight settings enable VCS to decide the target system for a child service group based on the preferred system for the parent service group.
Limitations on AdaptiveHA
When the cluster attribute Statistics is not “disabled”, VCS meters the real memory that is assigned and available on the system. By default, memory metering is enabled.
If AIX advanced features like AME (Active Memory Expansion) or AMS (Active Memory Sharing) are enabled, the metered values may differ from the actual values available to the application, and the metering of memory on AIX is not supported with these advanced memory features. Ensure that the “Mem” key is removed from the cluster attribute HostMeters. Also, if the FailOverPolicy is set to BiggestAvailable, then the “Mem” key must be removed from the service group attributes Load and MeterWeight.
Group Gx (
....
Load = { Units = 20 }
)
System N1 (
....
Capacity = 30
)
To update the main.cf file, change the Capacity value of the System attribute to the units-based format as follows:
Group Gx (
....
Load = { Units = 20 }
)
System N1 (
....
Capacity = { Units = 30 }
)
■ If the cluster attribute HostMonLogLvl is defined in the main.cf file, then replace
it with Statistics and make the appropriate change from the following:
■ Replace HostMonLogLvl = ALL with Statistics = MeterHostOnly.
■ Replace HostMonLogLvl = AgentOnly with Statistics = MeterHostOnly.
About sites
The SiteAware attribute enables you to create sites to use in an initial failover decision in campus clusters. A service group can fail over to another site even though a failover target is available within the same site. If the SiteAware attribute is set to 1, you cannot configure SystemZones. You can define site dependencies to restrict dependent applications to failing over within the same site. If the SiteAware attribute is configured and set to 2, then the service group fails over within the same site.
For example, in a campus cluster with two sites, siteA and siteB, you can define a site dependency among service groups in a three-tier application infrastructure that consists of Web, application, and database tiers, to restrict service group failover to within the same site.
See “ How VCS campus clusters work” on page 530.
Load-based autostart
VCS provides a method to determine where a service group comes online when
the cluster starts. Setting the AutoStartPolicy to Load instructs the VCS engine,
HAD, to determine the best system on which to start the groups. VCS places service
groups in an AutoStart queue for load-based startup as soon as the groups probe
all running systems. VCS creates a subset of systems that meet all prerequisites
and then chooses the system with the highest AvailableCapacity.
Set AutoStartPolicy = Load and configure the SystemZones attribute to establish
a list of preferred systems on which to initially run a group.
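A sketch of the selection rule described above: among eligible systems, the one with the highest AvailableCapacity wins. The capacities below are sample numbers; real values come from HAD:

```shell
# Sketch: pick the AutoStart target as the system with the highest
# AvailableCapacity among eligible systems (sample values below).
cap_list='sys1 40
sys2 75
sys3 60'
best=$(printf '%s\n' "$cap_list" | sort -k2 -n | tail -n1 | awk '{print $1}')
echo "start group on $best"
```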
Table 9-4 Possible events when the ManageFaults attribute is set to NONE
To clear a resource
1 Take the necessary actions outside VCS to bring all resources into the required
state.
2 Verify that resources are in the required state by issuing the command:
This command clears the ADMIN_WAIT state for all resources. If VCS continues
to detect resources that are not in the required state, it resets the resources
to the ADMIN_WAIT state.
3 If resources continue in the ADMIN_WAIT state, repeat step 1 and step 2, or
issue the following command to stop VCS from setting the resource to the
ADMIN_WAIT state:
When a service group has a resource in the ADMIN_WAIT state, the following
service group operations cannot be performed on the resource: online, offline,
switch, and flush. Also, you cannot use the hastop command when resources are in the ADMIN_WAIT state. When this occurs, you must issue the hastop command with the -force option.
When resource R2 faults, the Clean function is called and the resource is marked
as faulted. The fault is not propagated up the tree, and the group is not taken offline.
Note: You cannot set the ProPCV attribute for parallel service groups and for hybrid
service groups.
You can set the ProPCV attribute when the service group is inactive on all the nodes
or when the group is active (ONLINE, PARTIAL, or STARTING) on one node in the
cluster. You cannot set the ProPCV attribute if the service group is already online
on multiple nodes in the cluster. See “Service group attributes” on page 680.
If ProPCV is set to 1, you cannot bring online processes that are listed in the
MonitorProcesses attribute or the StartProgram attribute of the application resource
on any other node in the cluster. If you try to start a process that is listed in the
MonitorProcesses attribute or StartProgram attribute on any other node, that process
is killed before it starts. Therefore, the service group does not get into concurrency
violation.
In situations where the propcv action agent function times out, you can use the
amfregister command to manually mark a resource as one of the following:
■ A resource that is allowed to be brought online outside VCS control.
■ A resource that is prevented from being brought online outside VCS control.
Such a ProPCV-enabled resource cannot be online on more than one node in
the cluster.
Limitations of ProPCV
The following limitations apply:
■ The ProPCV feature is supported only when the Mode value for the IMF attribute
of the Application type resource is set to 3 on all nodes in the cluster.
■ The ProPCV feature does not protect against concurrency in the following cases:
■ When you modify the IMFRegList attribute for the resource type.
■ When you modify any value that is part of the IMFRegList attribute for the
resource type.
■ If you configure the application type resource for ProPCV, consider the following:
■ If you run the process with changed order of arguments, the ProPCV feature
does not prevent the execution of the process.
For example, a single command can be run in multiple ways:
/usr/bin/tar -c -f a.tar
/usr/bin/tar -f a.tar -c
The ProPCV feature works only if you run the process the same way as it
is configured in the resource configuration.
■ If there are multiple ways or commands to start a process, ProPCV prevents
the startup of the process only if the process is started in the way specified
in the resource configuration.
■ If the process is started using a script, the script must have the interpreter
path as the first line and start with #!. For example, a shell script should start
with “#!/usr/bin/sh”
■ You can bring processes online outside VCS control on another node when a
failover service group is auto-disabled.
Examples are:
■ When you use the hastop -local command or the hastop -local -force
command on a node.
■ When a node is detected as FAULTED after its ShutdownTimeout value has
elapsed because HAD exited.
In such situations, you can bring processes online outside VCS control on a
node even if the failover service group is online on another node on which VCS
engine is not running.
■ Before you set ProPCV to 1 for a service group, you must ensure that none of
the processes specified in the MonitorProcesses attribute or the StartProgram
attribute of the application resource of the group are running on any node where
the resource is offline. If an application resource lists two processes in its
MonitorProcesses attribute, both processes need to be offline on all nodes in
the cluster. If a node has only one process running and you set ProPCV to 1
for the group, you can still start the second process on another node because
the Application agent cannot perform selective offline monitoring or online
monitoring of individual processes for an application resource.
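The argument-order limitation described above can be illustrated with a small sketch (illustrative Python, not VCS code): a process start is only recognized as the configured command when its command line matches token for token, in the same order.

```python
# Sketch (not VCS source): ProPCV-style exact command-line matching.
# A process start is blocked only when its command line matches the
# configured one exactly, including argument order.

def matches_configured(configured: str, candidate: str) -> bool:
    """True if candidate is the same program invocation with identical
    arguments in identical order."""
    return configured.split() == candidate.split()

configured = "/usr/bin/tar -c -f a.tar"

# Same ordering: ProPCV would prevent this start.
assert matches_configured(configured, "/usr/bin/tar -c -f a.tar")

# Reordered arguments: equivalent to tar, but not detected.
assert not matches_configured(configured, "/usr/bin/tar -f a.tar -c")
```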
■ If you set the attribute to 1: When the application is intentionally stopped outside
of VCS control, the resource enters an OFFLINE state. This attribute does not
affect VCS behavior on application failure. VCS continues to fault resources if
the corresponding managed applications fail.
■ If you set the attribute to 0: When the application is intentionally stopped outside
of VCS control, the resource enters a FAULTED state.
OnlineGroup: If the configured application is started outside of VCS control, VCS
brings the corresponding service group online. If you attempt to start the
application on a frozen node or service group, VCS brings the corresponding
service group online once the node or the service group is unfrozen.

OfflineGroup: If the configured application is stopped outside of VCS control,
VCS takes the corresponding service group offline.
If the attribute is set to 0, VCS does not treat Monitor timeouts as resource faults.
If the attribute is set to 1, VCS interprets the timeout as a resource fault and the
agent calls the Clean function to shut the resource down.
By default, the FaultOnMonitorTimeouts attribute is set to 4. This means that the
Monitor function must time out four times in a row before the resource is marked
faulted. The timer for the first monitor timeout and the timeout counter are reset
one hour after the first monitor timeout.
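The timeout counting can be sketched as follows (illustrative Python, not VCS code; times are in seconds, and the one-hour reset window follows the description above):

```python
# Sketch (not VCS source): counting consecutive Monitor timeouts against
# FaultOnMonitorTimeouts. The counter restarts one hour after the first
# timeout in the current run.

FAULT_ON_MONITOR_TIMEOUTS = 4  # default value
RESET_WINDOW = 3600            # one hour after the first timeout

class TimeoutCounter:
    def __init__(self):
        self.count = 0
        self.first_timeout_at = None

    def record_timeout(self, now: float) -> bool:
        """Record one Monitor timeout; return True if the resource
        should now be marked faulted."""
        # Reset the run if the first timeout is more than an hour old.
        if self.first_timeout_at is not None and \
           now - self.first_timeout_at >= RESET_WINDOW:
            self.count = 0
            self.first_timeout_at = None
        if self.first_timeout_at is None:
            self.first_timeout_at = now
        self.count += 1
        return self.count >= FAULT_ON_MONITOR_TIMEOUTS

tc = TimeoutCounter()
assert not tc.record_timeout(0)
assert not tc.record_timeout(60)
assert not tc.record_timeout(120)
assert tc.record_timeout(180)      # fourth timeout in a row -> fault
```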
■ If the Clean function is successful (that is, Clean exit code = 0), VCS examines
the value of the RestartLimit attribute. If Clean fails (exit code = 1), the resource
remains online with the state UNABLE TO OFFLINE. VCS fires the resnotoff
trigger and monitors the resource again.
■ If the Monitor routine does not time out, it returns the status of the resource as
being online or offline.
■ If the ToleranceLimit (TL) attribute is set to a non-zero value, the Monitor cycle
returns offline (exit code = 100) for a number of times specified by the
ToleranceLimit and increments the ToleranceCount (TC). When the
ToleranceCount equals the ToleranceLimit (TC = TL), the agent declares the
resource as faulted.
■ If the Monitor routine returns online (exit code = 110) during a monitor cycle,
the agent takes no further action. The ToleranceCount attribute is reset to 0
when the resource is online for a period of time specified by the ConfInterval
attribute.
If the resource is detected as being offline a number of times specified by the
ToleranceLimit before the ToleranceCount is reset (TC = TL), the resource is
considered faulted.
■ After the agent determines the resource is not online, VCS checks the Frozen
attribute for the service group. If the service group is frozen, VCS declares the
resource faulted and calls the resfault trigger. No further action is taken.
■ If the service group is not frozen, VCS checks the ManageFaults attribute. If
ManageFaults=NONE, VCS marks the resource state as ONLINE|ADMIN_WAIT
and calls the resadminwait trigger. If ManageFaults=ALL, VCS calls the Clean
function with the CleanReason set to Unexpected Offline.
■ If the Clean function fails (exit code = 1) the resource remains online with the
state UNABLE TO OFFLINE. VCS fires the resnotoff trigger and monitors the
resource again. The resource enters a cycle of alternating Monitor and Clean
functions until the Clean function succeeds or a user intervenes.
■ If the Clean function is successful, VCS examines the value of the RestartLimit
(RL) attribute. If the attribute is set to a non-zero value, VCS increments the
RestartCount (RC) attribute and invokes the Online function. This continues till
the value of the RestartLimit equals that of the RestartCount. At this point, VCS
attempts to monitor the resource.
■ If the Monitor returns an online status, VCS considers the resource online and
resumes periodic monitoring. If the monitor returns an offline status, the resource
is faulted and VCS takes actions based on the service group configuration.
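The tolerance and restart interplay in the steps above can be condensed into a sketch (illustrative Python, not VCS code; it omits details such as the ConfInterval-based reset of ToleranceCount and the Frozen and ManageFaults checks):

```python
# Sketch (not VCS source): how ToleranceLimit (TL), ToleranceCount (TC),
# RestartLimit (RL), and RestartCount (RC) interact when Monitor reports
# an unexpected offline. "clean-and-restart" stands for Clean + Online.

def handle_monitor_offline(state):
    """Process one unexpected-offline Monitor cycle; return the action."""
    state["TC"] += 1
    if state["TC"] < state["TL"]:
        return "tolerate"             # keep monitoring, no fault yet
    # TC reached TL: the resource is considered not online.
    state["TC"] = 0
    if state["RC"] < state["RL"]:
        state["RC"] += 1
        return "clean-and-restart"    # Clean, then invoke Online again
    return "fault"                    # restarts exhausted

state = {"TL": 2, "TC": 0, "RL": 1, "RC": 0}
assert handle_monitor_offline(state) == "tolerate"
assert handle_monitor_offline(state) == "clean-and-restart"
assert handle_monitor_offline(state) == "tolerate"
assert handle_monitor_offline(state) == "fault"
```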
values that are specified at the service group level. VCS takes action based on
the ManageFaults values that are specified at the resource level.
■ If ManageFaults is set to IGNORE at the resource level, the resource state
changes to OFFLINE|ADMIN_WAIT. The resource-level value overrides the
service group-level value.
■ If ManageFaults is set to ACT at the resource level, VCS calls the Clean function
with the CleanReason set to Online Hung.
■ If resource-level ManageFaults is set to "" or blank, VCS checks the
corresponding service group-level value, and proceeds as follows:
■ If ManageFaults is set to NONE, the resource state changes to
OFFLINE|ADMIN_WAIT.
■ If ManageFaults is set to ALL, VCS calls the Clean function with the
CleanReason set to Online Hung.
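The precedence rules above can be condensed into a small sketch (illustrative Python, not VCS code):

```python
# Sketch (not VCS source): resolving the effective ManageFaults behavior
# when Online hangs. A resource-level value of IGNORE or ACT overrides
# the service group; an empty resource-level value defers to the group's
# NONE or ALL.

def on_online_hung(resource_mf: str, group_mf: str) -> str:
    mf = resource_mf if resource_mf else group_mf
    if mf in ("IGNORE", "NONE"):
        return "OFFLINE|ADMIN_WAIT"    # plus the resadminwait trigger
    return "Clean(Online Hung)"        # ACT or ALL

assert on_online_hung("IGNORE", "ALL") == "OFFLINE|ADMIN_WAIT"
assert on_online_hung("ACT", "NONE") == "Clean(Online Hung)"
assert on_online_hung("", "NONE") == "OFFLINE|ADMIN_WAIT"
assert on_online_hung("", "ALL") == "Clean(Online Hung)"
```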
[Flowchart: online timeout handling. A resource waiting to go online that times
out is evaluated against the OnlineWaitLimit (OWL) and OnlineWaitCount (OWC).
If ManageFaults is NONE, the resource is marked OFFLINE|ADMIN_WAIT and the
resadminwait trigger fires. If ManageFaults is ALL, Clean is called with "Online
Ineffective" or "Online Hung"; on Clean success the OWC is reset and, while the
OnlineRetryLimit (ORL) exceeds the OnlineRetryCount (ORC), the online is retried.
Otherwise the resource faults and the resfault trigger fires. If FaultPropagation
is 0, no other resources are affected and there is no group failover; if 1, all
resources in the dependent path are taken offline. If AutoFailOver is 0, the
service group remains offline in the Faulted state; if no failover system is
available, the group remains offline in the Faulted state and the nofailover
trigger fires; otherwise failover proceeds based on FailOverPolicy.]
Note: Disabling a resource is not an option when the entire service group requires
disabling. In that case, set the service group attribute Enabled to 0.
Use the following command to disable the resource when VCS is running:
To have the resource disabled initially when VCS is started, set the resource’s
Enabled attribute to 0 in main.cf.
tree is also cleared. When the PolicyIntention value of 2 for the parent service
group is cleared, the PolicyIntention value of all its child service groups in the
dependency tree is also cleared.
This section shows how a service group containing disabled resources is brought
online.
Figure 9-9 shows Resource_3 is disabled. When the service group is brought online,
the only resources brought online by VCS are Resource_1 and Resource_2
(Resource_2 is brought online first) because VCS recognizes Resource_3 is
disabled. In accordance with online logic, the transaction is not propagated to the
disabled resource.
[Figure 9-9: dependency tree for the service group. Resource_1 is going online;
Resource_3 is disabled; Resource_4 is offline; Resource_5 is offline.]
Figure 9-10 shows that Resource_2 is disabled. When the service group is brought
online, resources 1, 3, and 4 are also brought online (Resource_4 is brought online
first). Note that Resource_3, the child of the disabled resource, is brought online
because Resource_1 is enabled and depends on it.
[Figure 9-10: dependency tree going online. Resource_2 is disabled; Resource_1,
Resource_3, and Resource_4 are brought online.]
Table 9-6 Disk group state and the failover attribute define VCS behavior
[Flowchart: on disk group I/O loss, VCS checks whether I/O fencing is in use and
the value of the PanicSystemOnDGLoss attribute. With a value of 0, VCS logs the
disk group error in the Clean entry point and does not fail over; with a value of
1 or 2, VCS panics the system (dump, crash, and halt processes).]
When a service group is brought online, its load is subtracted from the system’s
capacity to determine available capacity. VCS maintains this information in the
AvailableCapacity attribute.
When a failover occurs, VCS determines which system has the highest available
capacity and starts the service group on that system. During a failover involving
multiple service groups, VCS makes failover decisions serially to facilitate a proper
load-based choice.
System capacity is a soft restriction; in some situations, the value of the Capacity
attribute could be less than zero. During some operations, including cascading
failures, the value of the AvailableCapacity attribute could be negative.
LoadTimeCounter, which determines how many seconds system load has been
above LoadWarningLevel, is reset.
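The load-based mechanics above can be sketched as follows (illustrative Python, not VCS code; the capacity and load numbers are made up in the style of the samples below):

```python
# Sketch (not VCS source): AvailableCapacity bookkeeping and a
# FailOverPolicy=Load target choice.

capacity = {"LargeServer1": 200, "MedServer1": 100}
online = {"LargeServer1": [100, 50], "MedServer1": [30]}  # group loads

def available_capacity(system: str) -> int:
    # Online group loads are subtracted from the system's Capacity;
    # during cascading failures the result can go negative.
    return capacity[system] - sum(online[system])

def failover_target(candidates):
    # Pick the system with the highest AvailableCapacity.
    return max(candidates, key=available_capacity)

assert available_capacity("LargeServer1") == 50
assert available_capacity("MedServer1") == 70
assert failover_target(["LargeServer1", "MedServer1"]) == "MedServer1"
```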
Prerequisites = { GroupWeight = 1 }
Limits = { GroupWeight = 1 }
include "types.cf"
cluster SGWM-demo (
)
system LargeServer1 (
Capacity = 200
Limits = { ShrMemSeg=20, Semaphores=10, Processors=12 }
LoadWarningLevel = 90
LoadTimeThreshold = 600
)
system LargeServer2 (
Capacity = 200
Limits = { ShrMemSeg=20, Semaphores=10, Processors=12 }
LoadWarningLevel=70
LoadTimeThreshold=300
)
system MedServer1 (
Capacity = 100
Limits = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
system MedServer2 (
Capacity = 100
Limits = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
group G1 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
MedServer1 = 2 , MedServer2 = 3 }
SystemZones = { LargeServer1=0, LargeServer2=0,
MedServer1=1, MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2 }
FailOverPolicy = Load
Load = { Units = 100 }
Prerequisites = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
include "types.cf"
cluster SGWM-demo (
)
system Server1 (
Capacity = 100
)
system Server2 (
Capacity = 100
)
system Server3 (
Capacity = 100
)
system Server4 (
Capacity = 100
)
group G1 (
SystemList = { Server1 = 0, Server2 = 1,
Server3 = 2, Server4 = 3 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = { CPU = 1, Mem = 2000 }
)
group G2 (
SystemList = { Server1 = 0, Server2 = 1,
Server3 = 2, Server4 = 3 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = { Units = 40 }
)
group G3 (
SystemList = { Server1 = 0, Server2 = 1,
Server3 = 2, Server4 = 3 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = { Units = 30 }
)
group G4 (
SystemList = { Server1 = 0, Server2 = 1,
Server3 = 2, Server4 = 3 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = { Units = 10 }
)
group G5 (
SystemList = { Server1 = 0, Server2 = 1,
Server3 = 2, Server4 = 3 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = { Units = 50 }
)
group G6 (
SystemList = { Server1 = 0, Server2 = 1,
Server3 = 2, Server4 = 3 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = { Units = 30 }
)
group G7 (
SystemList = { Server1 = 0, Server2 = 1,
Server3 = 2, Server4 = 3 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = { Units = 20 }
)
group G8 (
SystemList = { Server1 = 0, Server2 = 1,
Server3 = 2, Server4 = 3 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = { Units = 40 }
)
Server     AvailableCapacity   Online groups
Server1    80                  G1
Server2    60                  G2
Server3    70                  G3
Server4    90                  G4
As the next groups come online, group G5 starts on Server4 because this server
has the highest AvailableCapacity value. Group G6 then starts on Server1 with
Server     AvailableCapacity   Online groups
Server1    50                  G1 and G6
Server2    20                  G2 and G8
Server3    50                  G3 and G7
Server4    40                  G4 and G5
In this configuration, Server2 fires the loadwarning trigger after 600 seconds because
it is at the default LoadWarningLevel of 80 percent.
Server     AvailableCapacity   Online groups
Server2    20                  G2 and G8
In this configuration, Server3 fires the loadwarning trigger to notify that the server
is overloaded. An administrator can then switch group G7 to Server1 to balance
the load across groups G1 and G3. When Server4 is repaired, it rejoins the cluster
with an AvailableCapacity value of 100, making it the most eligible target for a
failover group.
Table 9-10 Cascading failure scenario for a basic four node cluster
include "types.cf"
cluster SGWM-demo (
)
system LargeServer1 (
Capacity = 200
Limits = { ShrMemSeg=20, Semaphores=10, Processors=12 }
LoadWarningLevel = 90
LoadTimeThreshold = 600
)
system LargeServer2 (
Capacity = 200
Limits = { ShrMemSeg=20, Semaphores=10, Processors=12 }
LoadWarningLevel=70
LoadTimeThreshold=300
)
system MedServer1 (
Capacity = 100
Limits = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
system MedServer2 (
Capacity = 100
Limits = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
group G1 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
MedServer1 = 2, MedServer2 = 3 }
SystemZones = { LargeServer1=0, LargeServer2=0, MedServer1=1,
MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2 }
FailOverPolicy = Load
Load = { Units = 100 }
Prerequisites = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
group G2 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
MedServer1 = 2, MedServer2 = 3 }
SystemZones = { LargeServer1=0, LargeServer2=0, MedServer1=1,
MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2 }
FailOverPolicy = Load
Load = { Units = 100 }
Prerequisites = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
group G3 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
MedServer1 = 2, MedServer2 = 3 }
SystemZones = { LargeServer1=0, LargeServer2=0, MedServer1=1,
MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2 }
FailOverPolicy = Load
Load = { Units = 30 }
)
group G4 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
MedServer1 = 2, MedServer2 = 3 }
SystemZones = { LargeServer1=0, LargeServer2=0, MedServer1=1,
MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2 }
FailOverPolicy = Load
Load = { Units = 20 }
)
Server       AvailableCapacity   Current Limits                             Online groups
MedServer1   70                  ShrMemSeg=10, Semaphores=5, Processors=6   G3
MedServer2   80                  ShrMemSeg=10, Semaphores=5, Processors=6   G4
include "types.cf"
cluster SGWM-demo (
)
system LargeServer1 (
Capacity = 200
Limits = { ShrMemSeg=15, Semaphores=30, Processors=18 }
LoadWarningLevel = 80
LoadTimeThreshold = 900
)
system LargeServer2 (
Capacity = 200
Limits = { ShrMemSeg=15, Semaphores=30, Processors=18 }
LoadWarningLevel=80
LoadTimeThreshold=900
)
system LargeServer3 (
Capacity = 200
Limits = { ShrMemSeg=15, Semaphores=30, Processors=18 }
LoadWarningLevel=80
LoadTimeThreshold=900
)
system MedServer1 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
system MedServer2 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
system MedServer3 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
system MedServer4 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
system MedServer5 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
group Database1 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
LargeServer3 = 2, MedServer1 = 3,
MedServer2 = 4, MedServer3 = 5,
MedServer4 = 6, MedServer5 = 6 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2, LargeServer3 }
FailOverPolicy = Load
Load = { Units = 100 }
Prerequisites = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
group Database2 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
LargeServer3 = 2, MedServer1 = 3,
MedServer2 = 4, MedServer3 = 5,
MedServer4 = 6, MedServer5 = 6 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2, LargeServer3 }
FailOverPolicy = Load
Load = { Units = 100 }
Prerequisites = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
group Database3 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
LargeServer3 = 2, MedServer1 = 3,
MedServer2 = 4, MedServer3 = 5,
MedServer4 = 6, MedServer5 = 6 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2, LargeServer3 }
FailOverPolicy = Load
Load = { Units = 100 }
Prerequisites = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
group Application1 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
LargeServer3 = 2, MedServer1 = 3,
MedServer2 = 4, MedServer3 = 5,
MedServer4 = 6, MedServer5 = 6 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = { Units = 50 }
)
group Application2 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
LargeServer3 = 2, MedServer1 = 3,
MedServer2 = 4, MedServer3 = 5,
MedServer4 = 6, MedServer5 = 6 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = { Units = 50 }
)
group Application3 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
LargeServer3 = 2, MedServer1 = 3,
MedServer2 = 4, MedServer3 = 5,
MedServer4 = 6, MedServer5 = 6 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = { Units = 50 }
)
group Application4 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
LargeServer3 = 2, MedServer1 = 3,
MedServer2 = 4, MedServer3 = 5,
MedServer4 = 6, MedServer5 = 6 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = { Units = 50 }
)
group Application5 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
LargeServer3 = 2, MedServer1 = 3,
MedServer2 = 4, MedServer3 = 5,
MedServer4 = 6, MedServer5 = 6 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = { Units = 50 }
)
Service group   AutoStart system
Database1       LargeServer1
Database2       LargeServer2
Database3       LargeServer3
Application1    MedServer1
Application2    MedServer2
Application3    MedServer3
Application4    MedServer4
Application5    MedServer5
identifies systems that meet the group’s prerequisites. In this case, LargeServer1
and LargeServer2 meet the required Limits. Database3 is brought online on
LargeServer1. This results in the following configuration:
Table 9-14 shows the failure scenario for a complex eight-node cluster running
multiple applications and large databases.
Table 9-14 Failure scenario for a complex eight-node cluster running multiple
applications and large databases
In this scenario, further failure of either system can be tolerated because each has
sufficient Limits available to accommodate the additional service group.
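The Limits check that makes this tolerance possible can be sketched as follows (illustrative Python, not VCS code; the attribute names mirror the samples above):

```python
# Sketch (not VCS source): checking a group's Prerequisites against a
# system's remaining Limits when choosing where to bring it online.

def meets_prerequisites(current_limits: dict, prerequisites: dict) -> bool:
    """True if every prerequisite can be satisfied from what is left
    of the system's Limits."""
    return all(current_limits.get(name, 0) >= needed
               for name, needed in prerequisites.items())

prereqs = {"ShrMemSeg": 5, "Semaphores": 10, "Processors": 6}

# A large server with untouched Limits can host the group...
assert meets_prerequisites(
    {"ShrMemSeg": 15, "Semaphores": 30, "Processors": 18}, prereqs)

# ...but a system that has already consumed its Semaphores cannot.
assert not meets_prerequisites(
    {"ShrMemSeg": 15, "Semaphores": 5, "Processors": 18}, prereqs)
```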
Dependency categories
Dependency categories determine the relationship of the parent group with the
state of the child group.
Table 10-1 shows dependency categories and relationships between parent and
child service groups.
Online group dependency: The parent group must wait for the child group to be
brought online before it can start.

Offline group dependency: The parent group can be started only if the child group
is offline, and vice versa. This behavior prevents conflicting applications from
running on the same system.
Dependency location
The relative location of the parent and child service groups determines whether
the dependency between them is local, global, remote, or site.
Table 10-2 shows the dependency locations for local, global, remote, and site
dependencies.
Local dependency: The parent group depends on the child group being online or
offline on the same system.

Global dependency: An instance of the parent group depends on one or more
instances of the child group being online on any system in the cluster.

Remote dependency: An instance of the parent group depends on one or more
instances of the child group being online on any other system in the cluster.

Site dependency: An instance of the parent group depends on one or more instances
of the child group being online on any system in the same site.
Dependency rigidity
The type of dependency defines the rigidity of the link between parent and child
groups. A soft dependency means minimum constraints, whereas a hard dependency
means maximum constraints.
Table 10-3 shows dependency rigidity and associated constraints.
Soft dependency: Specifies the minimum constraints while bringing parent and
child groups online. The only constraint is that the child group must be online
before the parent group is brought online.
■ If the child group faults, VCS does not immediately take the parent
offline. If the child group cannot fail over, the parent remains online.
■ When both groups are online, either group, child or parent, may be
taken offline while the other remains online.
■ If the parent group faults, the child group remains online.
■ When the link is created, the child group need not be online if the
parent is online. However, when both groups are online, their online
state must not conflict with the type of link.
Firm dependency: Imposes more constraints when VCS brings the parent or child
groups online or takes them offline. In addition to the constraint that the child
group must be online before the parent group is brought online, the constraints
include:
■ If the child group faults, the parent is taken offline. If the parent is
frozen at the time of the fault, the parent remains in its original state.
If the child cannot fail over to another system, the parent remains
offline.
■ If the parent group faults, the child group may remain online.
■ The child group cannot be taken offline if the parent group is online.
The parent group can be taken offline while the child is online.
■ When the link is created, the parent group must be offline. However,
if both groups are online, their online state must not conflict with the
type of link.
Hard dependency: Imposes the maximum constraints when VCS brings the parent
or child service groups online or takes them offline. For example:
■ If a child group faults, the parent is taken offline before the child
group is taken offline. If the child group fails over, the parent fails
over to another system (or the same system for a local dependency).
If the child group cannot fail over, the parent group remains offline.
■ If the parent faults, the child is taken offline. If the child fails over,
the parent fails over. If the child group cannot fail over, the parent
group remains offline.
Note: When the child faults, if the parent group is frozen, the parent
remains online. The faulted child does not fail over.
■ You cannot link two service groups whose current states violate the relationship.
For example, all link requests are accepted if all instances of the parent group
are offline.
All link requests are rejected if the parent group is online and the child group
is offline, except in offline dependencies and in soft dependencies.
All online global link requests, online remote link requests, and online site link
requests to link two parallel groups are rejected.
All online local link requests to link a parallel parent group to a failover child
group are rejected.
■ Linking service groups using site dependencies:
■ If the service groups to be linked are online on different sites, you cannot
use site dependency to link them.
■ All link requests to link parallel or hybrid parent groups to a failover or hybrid
child service group are rejected.
■ If two service groups are already linked using a local, site, remote, or global
dependency, you must unlink the existing dependency and use site
dependency. However, you can configure site dependency with other online
dependencies in multiple child or multiple parent configurations.
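The link-request rules above can be sketched for failover groups (illustrative Python, not VCS code; it covers only the online/offline state checks, not the parallel-group and site restrictions):

```python
# Sketch (not VCS source): simplified link-request validation for
# failover groups, following the state rules described above.

def link_allowed(dep_category: str, rigidity: str,
                 parent_online: bool, child_online: bool) -> bool:
    # All requests are accepted while the parent group is offline.
    if not parent_online:
        return True
    # Parent online, child offline: rejected except for offline
    # dependencies and soft dependencies.
    if not child_online:
        return dep_category == "offline" or rigidity == "soft"
    return True

assert link_allowed("online", "firm", parent_online=False, child_online=False)
assert not link_allowed("online", "firm", parent_online=True, child_online=False)
assert link_allowed("online", "soft", parent_online=True, child_online=False)
assert link_allowed("offline", "", parent_online=True, child_online=False)
```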
online local soft
Failover parent depends on: Failover Child online on the same system.
Failover parent is online if: Child is online on the same system.
If failover child faults: Parent stays online. If Child fails over to another
system, Parent migrates to the same system. If Child cannot fail over, Parent
remains online.
If failover parent faults: Child stays online.

online local firm
Failover parent depends on: Failover Child online on the same system.
Failover parent is online if: Child is online on the same system.
If failover child faults: Parent is taken offline. If Child fails over to another
system, Parent migrates to the same system. If Child cannot fail over, Parent
remains offline.
If failover parent faults: Child stays online.

online local hard
Failover parent depends on: Failover Child online on the same system.
Failover parent is online if: Child is online on the same system.
If failover child faults: Parent is taken offline before Child is taken offline.
If Child fails over to another system, Parent migrates to the same system. If
Child cannot fail over, Parent remains offline.
If failover parent faults: Child is taken offline. If Child fails over, Parent
migrates to the same system. If Child cannot fail over, Parent remains offline.
online global soft
Failover parent depends on: Failover Child online somewhere in the cluster.
Failover parent is online if: Child is online somewhere in the cluster.
If failover child faults: Parent stays online. If Child fails over to another
system, Parent remains online. If Child cannot fail over, Parent remains online.
If failover parent faults: Child stays online. Parent fails over to any available
system. If no failover target system is available, Parent remains offline.

online global firm
Failover parent depends on: Failover Child online somewhere in the cluster.
Failover parent is online if: Child is online somewhere in the cluster.
If failover child faults: Parent is taken offline after Child is taken offline.
If Child fails over to another system, Parent is brought online on any system.
If Child cannot fail over, Parent remains offline.
If failover parent faults: Child stays online. Parent fails over to any available
system. If no failover target system is available, Parent remains offline.
online remote soft
Failover parent depends on: Failover Child online on another system in the cluster.
Failover parent is online if: Child is online on another system in the cluster.
If failover child faults: If Child fails over to the system on which Parent was
online, Parent migrates to another system. If Child fails over to another system,
Parent continues to run on the original system. If Child cannot fail over, Parent
remains online.
If failover parent faults: Child stays online. Parent fails over to a system where
Child is not online. If the only system available is the one where Child is online,
Parent is not brought online. If no failover target system is available, Child
remains online.

online remote firm
Failover parent depends on: Failover Child online on another system in the cluster.
Failover parent is online if: Child is online on another system in the cluster.
If failover child faults: If Child fails over to the system on which Parent was
online, Parent switches to another system. If Child fails over to another system,
Parent restarts on the original system. If Child cannot fail over, VCS takes the
Parent offline.
If failover parent faults: Parent fails over to a system where Child is not online.
If the only system available is the one where Child is online, Parent is not
brought online. If no failover target system is available, Child remains online.
online site soft
Failover parent depends on: Failover Child online on the same site.
Failover parent is online if: Child is online in the same site.
If failover child faults: Parent stays online. If another Child instance is online
or Child fails over to a system within the same site, Parent stays online.
If failover parent faults: Child remains online. Parent fails over to another
system in the same site, maintaining dependency on Child instances in the same
site.

online site firm
Failover parent depends on: Failover Child online in the same site.
Failover parent is online if: Child is online in the same site.
If failover child faults: Parent is taken offline. If another instance of Child
is online in the same site or Child fails over to another system in the same site,
Parent migrates to a system in the same site. If no Child instance is online or
Child cannot fail over, Parent remains offline.
If failover parent faults: Child remains online. Parent fails over to another
system in the same site, maintaining dependence on Child instances in the same
site. If Parent cannot fail over to a system within the same site, Parent fails
over to a system in another site where at least one instance of Child is online.
offline local
Parent depends on: Failover Child offline on the same system.
Parent is online if: Child is offline on the same system.
If Child faults: If Child fails over to a system on which Parent is not running, Parent continues running. If Child fails over to the system on which Parent is running, Parent switches to another system, if available. If no failover target system is available for Child to fail over to, Parent continues running.
If Parent faults: Parent fails over to a system on which Child is not online. If no failover target system is available, Child remains online.
online local soft
Parent depends on: Instance of parallel Child group on the same system.
Parent is online if: Instance of Child is online on the same system.
If Child faults: If the Child instance fails over to another system, the Parent also fails over to the same system. If the Child instance cannot fail over to another system, Parent remains online.
If Parent faults: Parent fails over to another system and depends on the Child instance there. The Child instance remains online where the Parent faulted.
online local firm
Parent depends on: Instance of parallel Child group on the same system.
Parent is online if: Instance of Child is online on the same system.
If Child faults: Parent is taken offline. Parent fails over to another system and depends on the Child instance there.
If Parent faults: Parent fails over to another system and depends on the Child instance there. The Child instance remains online where Parent faulted.
online global soft
Parent depends on: All instances of the parallel Child group online in the cluster.
Parent is online if: At least one instance of the Child group is online somewhere in the cluster.
If Child faults: Parent remains online if Child faults on any system. If Child cannot fail over to another system, Parent remains online.
If Parent faults: Parent fails over to another system, maintaining dependence on all Child instances.
online global firm
Parent depends on: One or more instances of the parallel Child group remaining online.
Parent is online if: An instance of the Child group is online somewhere in the cluster.
If Child faults: Parent is taken offline. If another Child instance is online or Child fails over, Parent fails over to another system or the same system. If no Child instance is online or Child cannot fail over, Parent remains offline.
If Parent faults: Parent fails over to another system, maintaining dependence on all Child instances.
online remote soft
Parent depends on: One or more instances of the parallel Child group remaining online on other systems.
Parent is online if: One or more instances of the Child group are online on other systems.
If Child faults: Parent remains online. If Child fails over to the system on which Parent is online, Parent fails over to another system.
If Parent faults: Parent fails over to another system, maintaining dependence on the Child instances.
online remote firm
Parent depends on: All instances of the parallel Child group remaining online on other systems.
Parent is online if: All instances of the Child group are online on other systems.
If Child faults: Parent is taken offline. If Child fails over to the system on which Parent is online, Parent fails over to another system.
If Parent faults: Parent fails over to another system, maintaining dependence on all Child instances.
online site soft
Parent depends on: One or more instances of the parallel Child group in the same site.
Parent is online if: At least one instance of Child is online in the same site.
If Child faults: Parent stays online if Child is online on any system in the same site. If Child fails over to a system in another site, Parent stays in the same site. If Child cannot fail over, Parent remains online.
If Parent faults: Child stays online. Parent fails over to a system with Child online in the same site. If the parent group cannot fail over, the child group remains online.
online site firm
Parent depends on: One or more instances of the parallel Child group in the same site.
Parent is online if: At least one instance of Child is online in the same site.
If Child faults: Parent stays online if any instance of Child is online in the same site. If Child fails over to another system, Parent migrates to a system in the same site. If Child cannot fail over, Parent remains offline.
If Parent faults: Child stays online. Parent fails over to a system with Child online in the same site. If the parent group cannot fail over, the child group remains online.
online global soft
Parent depends on: Failover Child group online somewhere in the cluster.
Parent is online if: Failover Child is online somewhere in the cluster.
If Child faults: Parent remains online.
If Parent faults: Child remains online.
online global firm
Parent depends on: Failover Child group online somewhere in the cluster.
Parent is online if: Failover Child is online somewhere in the cluster.
If Child faults: All instances of Parent are taken offline. After Child fails over, Parent instances are failed over or restarted on the same systems.
If Parent faults: Child stays online.
online remote soft
Parent depends on: Failover Child group on another system.
Parent is online if: Failover Child is online on another system.
If Child faults: If Child fails over to the system on which Parent is online, Parent fails over to other systems. If Child fails over to another system, Parent remains online.
If Parent faults: Child remains online. Parent tries to fail over to another system where Child is not online.
online remote firm
Parent depends on: Failover Child group on another system.
Parent is online if: Failover Child is online on another system.
If Child faults: All instances of Parent are taken offline. If Child fails over to the system on which Parent was online, Parent fails over to other systems.
If Parent faults: Child remains online. Parent tries to fail over to another system where Child is not online.
offline local
Parent depends on: Failover Child offline on the same system.
Parent is online if: Failover Child is not online on the same system.
If Child faults: Parent remains online if Child fails over to another system.
If Parent faults: Child remains online.
Table 10-7 shows service group dependency configurations for parallel parent /
parallel child.
online local soft
Parent depends on: Parallel Child instance online on the same system.
Parent is online if: Parallel Child instance is online on the same system.
If Child faults: If Child fails over to another system, Parent migrates to the same system as the Child. If Child cannot fail over, Parent remains online.
If Parent faults: The Child instance stays online. The Parent instance can fail over only to a system where the Child instance is running and another instance of Parent is not running.
online local firm
Parent depends on: Parallel Child instance online on the same system.
Parent is online if: Parallel Child instance is online on the same system.
If Child faults: Parent is taken offline. If Child fails over to another system, VCS brings an instance of the Parent online on the same system as Child. If Child cannot fail over, Parent remains offline.
If Parent faults: Child stays online. The Parent instance can fail over only to a system where the Child instance is running and another instance of Parent is not running.
Frequently asked questions about group dependencies
Online local: Can child group be taken offline when parent group is online? Soft=Yes, Firm=No.
Online global: Can child group be taken offline when parent group is online? Soft=Yes, Firm=No.
Online remote: Can child group be taken offline when parent group is online? Soft=Yes, Firm=No.
Offline local: Can parent group be brought online when child group is offline? Yes.
Online site: Can child group be taken offline when parent group is online? Soft=Yes, Firm=No.
Online site: Can parent group be switched to a system in the same site while child group is running? Soft=Yes, Firm=Yes.
About linking service groups
■ For a parallel child group linked online remote with failover parent, multiple
instances of child group online are acceptable, as long as child group does not
go online on the system where parent is online.
■ For a parallel child group linked offline local with failover/parallel parent, multiple
instances of child group online are acceptable, as long as child group does not
go online on the system where parent is online.
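Service group links such as these are created with the hagrp -link command. As a sketch, with hypothetical group names appgrp (parent) and dbgrp (child):

```shell
# Link parent group appgrp to child group dbgrp with an
# online local firm dependency (group names are examples only).
hagrp -link appgrp dbgrp online local firm

# Display the configured dependencies for the parent group,
# then remove the link when it is no longer needed.
hagrp -dep appgrp
hagrp -unlink appgrp dbgrp
```

These commands operate on a running cluster configuration; run them on a cluster node with appropriate privileges.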
Recipients receive notifications for events whose severity level is equal to or greater than the level specified. For example, if you configure recipients for notifications and specify the severity level as Warning, VCS notifies the recipients about events with the severity levels Warning, Error, and SevereError, but not about events with the severity level Information.
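The level comparison can be sketched as a numeric-rank check (an illustration only, not VCS code; the function names are hypothetical):

```shell
# sev_rank maps a VCS severity name to a numeric rank
# (Information lowest, SevereError highest).
sev_rank() {
  case "$1" in
    Information) echo 0 ;;
    Warning)     echo 1 ;;
    Error)       echo 2 ;;
    SevereError) echo 3 ;;
  esac
}

# A recipient configured at level $2 is notified about an event of
# severity $1 when rank($1) >= rank($2).
should_notify() {
  if [ "$(sev_rank "$1")" -ge "$(sev_rank "$2")" ]; then
    echo yes
  else
    echo no
  fi
}

should_notify Error Warning        # yes: Error outranks Warning
should_notify Information Warning  # no: below the configured level
```

A recipient configured at Warning thus receives Warning, Error, and SevereError events but not Information events.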
See “About attributes and their definitions” on page 654.
Figure 11-1 shows the severity levels of VCS events.
SevereError: Critical errors that can lead to data loss or corruption; SevereError is the highest severity level.
Error: Faults.
Information: Important events that exhibit normal behavior; Information is the lowest severity level.
[Figure omitted: HAD on each system (System A, System B) sends messages to the notifier process, which forwards them to the configured SNMP and SMTP recipients at the requested severity levels.]
SNMP traps are forwarded to the SNMP console. Typically, traps are predefined
for events such as service group or resource faults. You can use the hanotify utility
to send additional traps.
About VCS event notification
As soon as notifier delivers a message to one of the recipients, it deletes the message from its queue. For example, if two SNMP consoles and two email recipients are designated, notifier sends an acknowledgement to HAD even if the message reached only one of the four recipients. If HAD does not receive an acknowledgement for some messages, it resends those notifications to notifier every 180 seconds until it receives an acknowledgement of delivery from notifier. An error message is printed to the log file when a delivery error occurs.
HAD also deletes messages under the following conditions:
■ The message has been in the queue for the time (in seconds) specified in the MessageExpiryInterval attribute (default value: one hour) and notifier is unable to deliver the message to the recipient.
■ The message queue is full, and the earliest message is deleted to make room for the latest message.
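The expiry rule can be sketched with made-up timestamps; this models only the MessageExpiryInterval check, not HAD's actual implementation:

```shell
# Hypothetical sketch of HAD's expiry rule: a queued message is
# dropped once its age exceeds MessageExpiryInterval seconds.
EXPIRY=3600                      # default: one hour
now=1000000                      # current time, epoch seconds (made-up)
queue="996000 999500 999900"     # enqueue times of pending messages
kept=""
for t in $queue; do
  if [ $((now - t)) -le "$EXPIRY" ]; then
    kept="$kept$t "
  fi
done
echo "kept: $kept"               # the message enqueued at 996000 has expired
```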
You can specify more than one SNMP manager or SMTP server, and the severity level of messages that are sent to each.
Note: If you start the notifier outside of VCS control, use the absolute path of the
notifier in the command. VCS cannot monitor the notifier process if it is started
outside of VCS control using a relative path.
/opt/VRTSvcs/bin/notifier -s m=north -s m=south,p=2000,l=Error,c=your_company -t m=north,e="abc@your_company.com",l=SevereError
[Figure omitted: the hanotify utility sends traps through had on System A and System B to the notifier process.]
Event: Remote cluster is in RUNNING state. (Global Cluster Option)
Severity: Information
Description: Local cluster has a complete snapshot of the remote cluster, indicating the remote cluster is in the RUNNING state.

Event: User has logged on to VCS.
Severity: Information
Description: A user log on has been recognized because a user logged on by Cluster Manager, or because a haxxx command was invoked.
Event: Agent is faulted.
Severity: Warning
Description: The agent has faulted on one node in the cluster.

Event: Resource monitoring has timed out.
Severity: Warning
Description: Monitoring mechanism for the resource has timed out.

Event: Resource is not going offline.
Severity: Warning
Description: VCS cannot take the resource offline.

Event: Resource went online by itself.
Severity: Warning (not for first probe)
Description: The resource was brought online on its own.

Event: The health of cluster resource improved.
Severity: Information
Description: Used by agents to give extra information about the state of a resource. An improvement in the health of the resource was identified during monitoring.
About VCS events and traps
Event: Resource monitor time has changed.
Severity: Warning
Description: This trap is generated when statistical analysis for the time taken by the monitor function of an agent is enabled for the agent.

Event: VCS is up on the first node in the cluster.
Severity: Information
Description: VCS is up on the first node.

Event: A node running VCS has joined the cluster.
Severity: Information
Description: The cluster has a new node that runs VCS.

Event: CPU usage exceeded threshold on the system.
Severity: Warning
Description: The system's CPU usage exceeded the Warning threshold level set in the CPUThreshold attribute.

Event: Swap usage exceeded threshold on the system.
Severity: Warning
Description: The system's swap usage exceeded the Warning threshold level set in the SwapThreshold attribute.

Event: Service group has faulted and cannot be failed over anywhere.
Severity: SevereError
Description: Specified service group faulted on all nodes where the group could be brought online. There are no nodes to which the group can fail over.

Event: Attributes for global service groups are mismatched. (Global Cluster Option)
Severity: Error
Description: The attributes ClusterList, AutoFailOver, and Parallel are mismatched for the same global service group on different clusters.
SNMP-specific files
VCS includes two SNMP-specific files: vcs.mib and vcs_trapd, which are created
in:
/etc/VRTSvcs/snmp
The file vcs.mib is the textual MIB for built-in traps that are supported by VCS. Load
this MIB into your SNMP console to add it to the list of recognized traps.
The file vcs_trapd is specific to the HP OpenView Network Node Manager (NNM) SNMP console. The file includes sample events configured for the built-in SNMP traps supported by VCS. When you merge these events with those configured for SNMP traps, the SNMP traps sent by VCS by way of notifier are displayed in the HP OpenView NNM SNMP console.
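On HP OpenView NNM, the merge step is typically performed with the xnmevents utility; treat this invocation as an assumption and confirm it in the NNM documentation:

```shell
# Assumed NNM merge step: load the VCS sample events from vcs_trapd
# into the NNM event configuration.
xnmevents -merge vcs_trapd
```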
About severityId
This variable indicates the severity of the trap being sent.
Table 11-7 shows the values that the variable severityId can take.
Information: 0
Warning: 1
Error (a fault): 2
SevereError: 3
Group: String
Heartbeat: String
VCS: String
GCO: String
About entityState
This variable describes the state of the entity.
Table 11-9 shows the various states.
GCO heartbeat states ■ Cluster has lost heartbeat with remote cluster
■ Heartbeat with remote cluster is alive
Note: You must configure appropriate severity for the notifier to receive these
notifications. To receive VCS notifications, the minimum acceptable severity level
is Information.
See the Cluster Server Bundled Agents Reference Guide for more information
about the agent.
VCS provides several methods for configuring notification:
■ Manually editing the main.cf file.
■ Using the Notifier wizard.
Chapter 12
VCS event triggers
This chapter includes the following topics:
VCS does not wait for the trigger to complete execution. VCS calls the trigger and
continues normal operation.
VCS invokes event triggers on the system where the event occurred, with the
following exceptions:
■ VCS invokes the sysoffline and nofailover event triggers on the lowest-numbered
system in the RUNNING state.
■ VCS invokes the violation event trigger on all systems on which the service
group was brought partially or fully online.
Using event triggers
By default, the hatrigger script invokes the trigger script(s) from the default path
$VCS_HOME/bin/triggers. You can customize the trigger path by using the
TriggerPath attribute.
See “Resource attributes” on page 655.
See “Service group attributes” on page 680.
The same path is used on all nodes in the cluster. The trigger path must exist on
all the cluster nodes. On each cluster node, the trigger scripts must be installed in
the trigger path.
Note: As a good practice, ensure that one script does not affect the functioning of
another script. If script2 takes the output of script1 as an input, script2 must be
capable of handling any exceptions that arise out of the behavior of script1.
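A trigger script is an executable placed in the trigger path; a minimal, hypothetical skeleton that just records the arguments it receives looks like this:

```shell
#!/bin/sh
# Hypothetical skeleton for a custom trigger script installed in
# $VCS_HOME/bin/triggers (or the directory named by TriggerPath).
# VCS passes event-specific arguments; logging them is a safe way
# to discover what a given trigger receives on your system.
msg="trigger invoked with args: $*"
echo "$msg"
# A real script would act on the arguments here
# (send notifications, clean up, and so on).
```

Make the script executable and install it under the same trigger path on every cluster node.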
List of event triggers
Description The dumptunables trigger is invoked when HAD goes into the RUNNING state.
When this trigger is invoked, it uses the HAD environment variables that it
inherited, and other environment variables to process the event. Depending on
the value of the to_log parameter, the trigger then redirects the environment
variables to either stdout or the engine log.
Description On the system having lowest NodeId in the cluster, VCS periodically broadcasts
an update of GlobalCounter. If a node does not receive the broadcast for an
interval greater than CounterMissTolerance, it invokes the
globalcounter_not_updated trigger if CounterMissAction is set to Trigger. This
event is considered critical since it indicates a problem with underlying cluster
communications or cluster interconnects. Use this trigger to notify administrators
of the critical events.
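CounterMissTolerance and CounterMissAction are cluster-level attributes; a hedged sketch of enabling the trigger with standard ha commands (verify the attribute names against your release):

```shell
# Assumed commands: open the cluster configuration for writing,
# set the miss action to Trigger, then save and close it.
haconf -makerw
haclus -modify CounterMissAction Trigger
haconf -dump -makero
```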
Description Invoked when a system becomes overloaded because the load of the
system’s online groups exceeds the system’s LoadWarningLevel
attribute for an interval exceeding the LoadTimeThreshold attribute.
Use this trigger to notify the administrator of the critical event. The
administrator can then switch some service groups to another system,
ensuring that no one system is overloaded.
VCS provides a sample trigger script for your reference. You can
customize the sample script according to your requirements.
Description This event trigger is invoked on the system where the group went offline
from a partial or fully online state. This trigger is invoked when the group
faults, or is taken offline manually.
Description This event trigger is invoked on the system where the group went online
from an offline state.
Description Indicates when HAD should call a user-defined script before bringing
a service group online in response to the hagrp -online command
or a fault.
If the trigger does not exist, VCS continues to bring the group online.
If the script exits with status 0, VCS runs the hagrp -online -nopre command, with the -checkpartial option if appropriate.
If you do want to bring the group online, define the trigger to take no
action. This event trigger is configurable.
To enable the trigger: Set the PreOnline attribute in the service group definition to 1. You can set a local (per-system) value for the attribute to control behavior on each node in the cluster.

To disable the trigger: Set the PreOnline attribute in the service group definition to 0. You can set a local (per-system) value for the attribute to control behavior on each node in the cluster.
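As a sketch, the attribute can be set with hagrp -modify; the group name grp1 and system name sysA are hypothetical, and localizing the attribute may first require hagrp -local:

```shell
# Assumed commands: enable the preonline trigger cluster-wide for
# group grp1, or set a per-system (local) value on sysA.
hagrp -modify grp1 PreOnline 1
hagrp -local grp1 PreOnline          # may be needed before the -sys form
hagrp -modify grp1 PreOnline 1 -sys sysA
```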
0 = The offline function did not complete within the expected time.
2 = The online function did not complete within the expected time.
Description Invoked on the system where a resource has faulted. Note that when
a resource is faulted, resources within the upward path of the faulted
resource are also brought down.
To enable the trigger: Set the TriggerResFault attribute to 1 to invoke the trigger when a resource faults.
Description This trigger is invoked when a resource is restarted by an agent because the resource faulted and its RestartLimit was greater than 0.
To enable the trigger: This event trigger is not enabled by default. You must enable resrestart by setting the attribute TriggerResRestart to 1 in the main.cf file, or by issuing the corresponding command.
To enable the trigger: This event trigger is not enabled by default. You must enable resstatechange by setting the attribute TriggerResStateChange to 1 in the main.cf file, or by issuing the corresponding command.
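The commands referred to above are likely per-resource hares -modify invocations; treat the exact form as an assumption and the resource name res1 as a placeholder:

```shell
# Assumed form of the enabling commands (resource name is an example):
hares -modify res1 TriggerResRestart 1
hares -modify res1 TriggerResStateChange 1
```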
Description The sysup trigger is invoked when the first node joins the cluster.
Description The sysjoin trigger is invoked when a peer node joins the cluster.
Description This trigger is invoked when an agent faults more than a predetermined
number of times within an hour. When this occurs, VCS gives up trying
to restart the agent. VCS invokes this trigger on the node where the
agent faults.
You can use this trigger to notify the administrators that an agent has
faulted, and that VCS is unable to restart the agent. The administrator
can then take corrective action.
To disable the trigger: Remove the files associated with the trigger from the $VCS_HOME/bin/triggers directory.
Usage: unable_to_restart_had
Description This trigger is invoked only on the system that caused the concurrency violation. Note that this trigger applies to failover groups only. The default trigger takes the service group offline on the system that caused the concurrency violation.
■ Stop Virtual Business Services from the Veritas InfoScale Operations Manager
console: When a virtual business service stops, its associated service groups
are taken offline.
■ Applications that are under the control of ApplicationHA can be part of a virtual
business service. ApplicationHA enables starting, stopping, and monitoring of
an application within a virtual machine. If applications are hosted on VMware
virtual machines, you can configure the virtual machines to automatically start
or stop when you start or stop the virtual business service, provided the vCenter
for those virtual machines has been configured in Veritas InfoScale Operations
Manager.
■ Define dependencies between service groups within a virtual business service:
The dependencies define the order in which service groups are brought online
and taken offline. Setting the correct order of service group dependency is critical
to achieve business continuity and high availability. You can define the
dependency types to control how a tier reacts to high availability events in the
underlying tier. The configured reaction could be execution of a predefined policy
or a custom script.
■ Manage the virtual business service from Veritas InfoScale Operations Manager
or from the clusters participating in the virtual business service.
■ Recover the entire virtual business service to a remote site when a disaster
occurs.
Each time you start the Finance business application, typically you need to bring
the components online in the following order – Oracle database, WebSphere,
Apache and IIS. In addition, you must bring the virtual machines online before you
start the Web tier. To stop the Finance application, you must take the components
offline in the reverse order. From the business perspective, the Finance service is
unavailable if any of the tiers becomes unavailable.
When you configure the Finance application as a virtual business service, you can
specify that the Oracle database must start first, followed by WebSphere and the
Web servers. The reverse order automatically applies when you stop the virtual
business service. When you start or stop the virtual business service, the
components of the service are started or stopped in the defined order.
Virtual Business Services
About choosing between VCS and VBS level dependencies
For more information about Virtual Business Services, refer to the Virtual Business
Service–Availability User’s Guide.
■ Chapter 15. Administering application monitoring from the Veritas High Availability
view
Chapter 14
Introducing the Veritas High Availability Configuration wizard
This chapter includes the following topics:
See “Launching the Veritas High Availability Configuration wizard” on page 441.
For information on the procedure to configure a particular application, refer to the
application-specific agent guide.
Note: In the Storage Foundation and High Availability (SFHA) 6.2 release and later,
the haappwizard utility has been deprecated.
-hostname Allows you to specify the host name or IP address of the system from which you want to launch the Veritas High Availability Configuration wizard. If you do not specify a host name or IP address, the Veritas High Availability Configuration wizard is launched on the local host.
-browser Allows you to specify the browser name. The supported browsers
are Internet Explorer and Firefox. For example, enter iexplore for
Internet Explorer and firefox for Firefox.
Note: The value is case-insensitive.
components and services, and the storage and network components that the
application uses.
During a fault, the VCS storage agents fail over the storage to a new system. The
VCS network agents bring the network components online and the
application-specific agents then start application services on the new system.
Chapter 15
Administering application monitoring from the Veritas High Availability view
This chapter includes the following topics:
In a VMware virtual environment, you can embed the Veritas High Availability view
as a tab in the vSphere Client menu (both the desktop and Web versions).
You can perform tasks such as start, stop, or fail over an application, without the
need for specialized training in VCS commands, operating systems, or virtualization
technologies. You can also perform advanced tasks such as VCS cluster
configuration and application (high availability) configuration from this view, by using
simple wizard-based steps.
Use the Veritas High Availability view to perform the following tasks:
■ To configure a VCS cluster
■ To add a system to a VCS cluster
■ To configure and unconfigure application monitoring
■ To unconfigure the VCS cluster
■ To start and stop configured applications
■ To add and remove failover systems
■ To enter and exit maintenance mode
■ To switch an application
■ To determine the state of an application (components)
■ To clear Fault state
■ To resolve a held-up operation
■ To modify application monitoring settings
■ To view application dependency
■ To view component dependency
Note: If the local system is not part of any VCS cluster, then the Veritas Application
High Availability view displays only the following link: Configure a VCS cluster.
If you have not yet configured an application for monitoring in a cluster of which the local system is a member, then the Veritas Application High Availability view displays only the following link: Configure an application for high availability.
The Veritas High Availability view uses icons, color coding, dependency graphs,
and tool tips to report the detailed status of an application.
The Veritas High Availability view displays complex applications, in terms of multiple
interdependent instances of that application. Such instances represent component
groups (also known as service groups) of that application. Each service group in
turn includes several critical components (resources) of the application.
The following figure shows the Veritas High Availability view, with one instance of
Oracle Database and one instance of a generic application configured for high
availability in a two-node VCS cluster:
To view the component dependency graph for an application, you must click a system on which the application is running.
The track pad, at the bottom-right corner, helps you navigate through component dependency graphs.
If you do not want to view the component dependency graph, in the top left
corner of the application row, click Close.
State: Offline
Description: Indicates that the configured application or its components are not running on the system.

State: Partial
Description: Indicates that either the application or its components are being started on the system, or that VCS was unable to start one or more of the configured components.
The view provides you with specific links to perform the following configuration
tasks:
■ Configure a VCS cluster:
If the local system is not part of a VCS cluster, the Veritas High Availability view
appears blank, except for the link Configure a VCS cluster.
Before you can configure application monitoring, you must click the Configure
a VCS cluster link to launch the VCS Cluster Configuration wizard. The wizard
helps you configure a VCS cluster, where the local system is by default a cluster
system.
■ Configure the first application for monitoring in a VCS cluster:
If you have not configured any application for monitoring in the cluster, the
Veritas High Availability view appears blank except for the link Configure an
application for high availability.
Click the link to launch the Veritas High Availability Configuration Wizard. Use
the wizard to configure application monitoring.
For applications where the Veritas High Availability Configuration Wizard is supported, see the application-specific VCS agent configuration guide for detailed wizard-based steps. For custom applications, see the Cluster Server Generic Application Agent Configuration Guide.
■ Add a system to the VCS cluster:
Click Actions > Add a System to VCS Cluster to add a system to a VCS cluster
where the local system is a cluster member. Adding a system to the cluster does
not automatically add the system as a failover system for any configured
application. To add a system as a failover system, follow the procedure to add
or remove a failover system.
■ Configure an application or add an application to the existing application
monitoring configuration:
Click Actions > Configure an application for high availability to launch the
Veritas High Availability Application Monitoring Configuration Wizard. Use the
wizard to configure application monitoring.
■ Unconfigure monitoring of an application:
In the appropriate row of the application table, click More > Unconfigure
Application Monitoring to delete the application monitoring configuration from
the VCS cluster.
Note that this step does not remove VCS from the system or the cluster; it only
removes the monitoring configuration for that application.
Also, to unconfigure monitoring for an application, you can perform one of the
following procedures: unconfigure monitoring of all applications, or unconfigure
VCS cluster.
■ Unconfigure monitoring of all applications:
Click Actions > Unconfigure all applications. This step deletes the monitoring
configuration for all applications configured in the cluster.
■ Unconfigure VCS cluster:
Click Actions > Unconfigure VCS cluster. This step stops the VCS cluster,
removes VCS cluster configuration, and unconfigures application monitoring.
Note: Based on service group type, either the Any system or the All Systems
link automatically appears.
To learn more about policies, and parallel and failover service groups, see the
Cluster Server Administrator's Guide.
If you want to specify the system where you want to start the application, click
User selected system, and then click the appropriate system.
3 If the application that you want to start requires other applications or component
groups (service groups) to start in a specific order, then check the Start the
dependent components in order check box, and then click OK.
To stop an application
1 In the appropriate row of the application table, click Stop.
2 If the application (service group) is of the failover type, in the Stop Application
Panel, click Any system. VCS selects the appropriate system to stop the
application.
If the application (service group) is of the parallel type, in the Stop Application
Panel click All systems. VCS stops the application on all configured systems.
Note: Based on service group type, either the Any system or the All Systems
link automatically appears.
To learn more about parallel and failover service groups, see the Cluster Server
Administrator's Guide.
If you want to specify the system where you want to stop the application, click
User selected system, and then click the appropriate system.
3 If the application that you want to stop requires other applications or component
groups (service groups) to stop in a specific order, then check the Stop the
dependent components in order check box, and then click OK.
To switch an application
1 In the appropriate row of the application table, click Switch.
2 If you want VCS to decide to which system the application must switch, based
on policies, then in the Switch Application panel, click Any system, and then
click OK.
To learn more about policies, see the Cluster Server Administrator's Guide.
If you want to specify the system where you want to switch the application,
click User selected system, and then click the appropriate system, and then
click OK.
VCS stops the application on the system where the application is running, and
starts it on the system you specified.
Note: The following procedure describes generic steps to add a failover system.
The wizard automatically populates values for initially configured systems in some
fields. These values are not editable.
3 If you want to add a system from the Cluster systems list to the Application
failover targets list, on the Configuration Inputs panel, select the system in the
Cluster systems list. Use the Edit icon to specify an administrative user account
on the system. You can then move the required system from the Cluster system
list to the Application failover targets list. Use the up and down arrow keys to
set the order of systems in which VCS agent must failover applications.
If you want to specify a failover system that is not an existing cluster node, on
the Configuration Inputs panel, click Add System, and in the Add System
dialog box, specify the following details:
System Name or IP address Specify the name or IP address of the system that you
want to add to the VCS cluster.
User name Specify the user name with administrative privileges on the
system.
Use the specified user account on all systems Check this check box to use the
specified user credentials on all the cluster systems.
The wizard validates the details, and the system then appears in the Application
failover targets list.
4 If you are adding a failover system from the existing VCS cluster, the Network
Details panel does not appear.
If you are adding a new failover system to the existing cluster, on the Network
Details panel, review the networking parameters used by existing failover
systems. Appropriately modify the following parameters for the new failover
system.
■ To configure links over Ethernet, select the adapter for each network
communication link. You must select a different network adapter for each
communication link.
■ To configure links over UDP, specify the required details for each
communication link.
Port Specify a unique port number for each link. You can use
ports in the range 49152 to 65535. The specified port for a
link is used for all the cluster systems on that link.
Subnet mask Specifies the subnet mask to which the IP address belongs.
NIC For each newly added system, specify the network adapter that
must host the specified virtual IP.
■ Click Next.
8 On the Finish panel, click Finish. This completes the procedure for adding a
failover system. You can view the system in the appropriate row of the
application table.
Similarly, you can also remove a system from the list of application failover targets.
Note: This procedure only removes the system from the list of failover target
systems, not from the VCS cluster. To remove a system from the cluster, use VCS
commands. For details, see the VCS Administrator's Guide.
using the resolve a held-up operation link. When you click the link, VCS appropriately
resets the internal state of any held-up application component. This process prepares
the ground for you to retry the original start or stop operation, or initiate another
operation.
To resolve a held-up operation
1 In the appropriate row of the application table, click More > Resolve a held-up
operation.
2 In the Resolve a held-up operation panel, click the system where you want to
resolve the held-up operation, and then click OK.
Note: You can also select multiple systems, and then click OK.
Note: The following steps delete all cluster configurations (including networking
and storage configurations), as well as application-monitoring configurations.
■ On the Title bar of the Veritas High Availability view, click Actions > Unconfigure
VCS cluster.
■ In the Unconfigure VCS Cluster panel, review the Cluster Name and Cluster ID,
and specify the User name and Password of the Cluster administrator, and then
click OK.
service group name to uniquely identify the same application. However, the
service group name, for example OraSG2, may not be intuitive to understand,
or easy to parse while navigating the application table. Moreover, once
configured, you cannot edit the service group name. In contrast, VCS allows
you to modify the display name as required. Note that the Veritas High Availability
tab displays both the display name and the service group name associated with
the configured application.
To modify the application monitoring configuration settings
1 Launch the vSphere Client and from the inventory pane on the left, select the
virtual machine that is a part of the VCS cluster where you have configured
application monitoring.
2 Click Veritas High Availability to view the Veritas High Availability tab.
3 In the appropriate row of the application table, click More > Modify Settings.
4 In the Modify Settings panel, click to highlight the setting that you want to
modify.
5 In the Value column, enter the appropriate value, and then click OK.
Section 6
Cluster configurations for
disaster recovery
Figure: A sample global cluster setup. Public clients connect to Cluster A; after
an application failover, the network redirects them to Cluster B. Each cluster runs
an Oracle failover group on separate storage kept synchronized by replicated data,
and a wac process runs in each cluster (Cluster 1 and Cluster 2).
The wac process runs on one system in each cluster and connects with peers in
remote clusters. It receives and transmits information about the status of the cluster,
service groups, and systems. This communication enables VCS to create a
consolidated view of the status of all the clusters configured as part of the global
cluster. The process also manages wide-area heartbeating to determine the health
of remote clusters. The process also transmits commands between clusters and
returns the result to the originating cluster.
VCS provides the option of securing the communication between the wide-area
connectors.
See “ Secure communication in global clusters” on page 471.
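To make the pieces concrete, the wac process typically runs as a resource in the ClusterService group. The following is a hypothetical main.cf sketch, not taken from this guide: the system names and the wacstop path are assumptions, while the start and monitor paths match the commands shown later in this chapter.

```
group ClusterService (
    SystemList = { sysA = 0, sysB = 1 }
    AutoStartList = { sysA, sysB }
)

Application wac (
    StartProgram = "/opt/VRTSvcs/bin/wacstart"
    StopProgram = "/opt/VRTSvcs/bin/wacstop"  // assumed stop path
    MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
)
```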
heartbeat Icmp (
ClusterList = { priclus }
Arguments @priclus = { "10.209.134.1" }
)
command enables you to fail over an application to another cluster when a disaster
occurs.
Note: A cluster assuming authority for a group does not guarantee the group will
be brought online on the cluster. The attribute merely specifies the right to attempt
bringing the service group online in the cluster. The presence of Authority does not
override group settings like frozen, autodisabled, non-probed, and so on, that prevent
service groups from going online.
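For reference, the Authority attribute is set in the service group definition in main.cf. A minimal, hypothetical fragment (the group, system, and cluster names are placeholders):

```
group appgroup (
    SystemList = { sysA = 0, sysB = 1 }
    ClusterList = { cluster1 = 0, cluster2 = 1 }
    Authority = 1
)
```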
DNS agent The DNS agent updates the canonical name-mapping in the
domain name server after a wide-area failover.
VCS agents for Volume Replicator You can use the following VCS agents for
Volume Replicator in a VCS global cluster setup:
■ RVG agent
The RVG agent manages the Replicated Volume Group
(RVG). Specifically, it brings the RVG online, monitors
read-write access to the RVG, and takes the RVG offline.
Use this agent when using Volume Replicator for
replication.
■ RVGPrimary agent
The RVGPrimary agent attempts to migrate or take over
a Secondary site to a Primary site following an application
failover. The agent has no actions associated with the
offline and monitor routines.
■ RVGShared agent
The RVGShared agent monitors the RVG in a shared
environment. This is a parallel resource. The RVGShared
agent enables you to configure parallel applications to
use an RVG in a cluster. The RVGShared agent monitors
the RVG in a shared disk group environment.
■ RVGSharedPri agent
The RVGSharedPri agent enables migration and takeover
of a Volume Replicator replicated data set in parallel
groups in a VCS environment. Bringing a resource of type
RVGSharedPri online causes the RVG on the local host
to become a primary if it is not already.
■ RVGLogowner agent
The RVGLogowner agent assigns and unassigns a node
as the logowner in the CVM cluster; this is a failover
resource. In a shared disk group environment, currently
only the CVM master node should be assigned the
logowner role.
■ RVGSnapshot agent
The RVGSnapshot agent, used in fire drill service groups,
takes space-optimized snapshots so that applications can
be mounted at secondary sites during a fire drill operation.
See the Veritas InfoScale™ Replication Administrator’s
Guide for more information.
VCS agents for third-party replication technologies VCS provides agents for other
third-party array-based or application-based replication solutions. These agents
are available in the High Availability Agent Pack software.
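As an example of how one of these agents is configured, a hypothetical main.cf fragment for an RVGPrimary resource might look like the following. The resource and RVG names are placeholders, and the attribute names follow the standard VCS Agents for VVR conventions; verify them against your release's documentation.

```
RVGPrimary ora_rvg_primary (
    RvgResourceName = ora_rvg   // name of the RVG resource it manages
    AutoTakeover = 1            // take over if the Primary site is down
    AutoResync = 0
)
```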
Figure: A Steward process, running on a system outside the global cluster,
arbitrates between Cluster A and Cluster B.
When all communication links between any two clusters are lost, each cluster
contacts the Steward with an inquiry message. The Steward sends an ICMP ping
to the cluster in question and responds with a negative inquiry if the cluster is running
or with a positive inquiry if the cluster is down. The Steward can also be used in
configurations with more than two clusters. VCS provides the option of securing
communication between the Steward process and the wide-area connectors.
See “ Secure communication in global clusters” on page 471.
In non-secure configurations, you can run the steward process on a platform
that is different from that of the global cluster nodes. Secure configurations have
not been tested with the steward process running on a different platform.
For example, you can run the steward process on a Windows system for a global
cluster running on AIX systems. However, the VCS release for AIX contains the
steward binary for AIX only. You must copy the steward binary for Windows from
the VCS installation directory on a Windows cluster, typically C:\Program
Files\VERITAS\Cluster Server.
A Steward is effective only if there are independent paths from each cluster to the
host that runs the Steward. If there is only one path between the two clusters, you
must prevent split-brain by confirming manually via telephone or some messaging
system with administrators at the remote site if a failure has occurred. By default,
VCS global clusters fail over an application across cluster boundaries with
administrator confirmation. You can configure automatic failover by setting the
ClusterFailOverPolicy attribute to Auto.
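Assuming the standard hagrp syntax, setting the policy on a global group might look like the following sketch (the group name is a placeholder):

```
# haconf -makerw
# hagrp -modify appgroup ClusterFailOverPolicy Auto
# haconf -dump -makero
```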
The default port for the steward is 14156.
name = WAC
domain = VCS_SERVICES@cluster_uuid
A0. VCS is already installed and configured, and the application is set up for HA
on s1. (VCS documentation; application documentation)
A2. Set up data replication (VVR or third-party) on s1 and s2. (VVR: SFHA
Installation Guide and VVR Administrator’s Guide; third-party: replication
documentation)
C2. For VVR: set up replication resources on s1 and s2. For third-party: install
and configure the replication agent on s1 and s2. (VVR: VCS Agents for VVR
Configuration Guide; third-party: VCS Agent for <Replication Solution>
Installation and Configuration Guide)
D1. Make the service groups global on s1 and s2. (VCS Administrator’s Guide)
Table 16-1 lists the high-level tasks to set up VCS global clusters.
Task Reference
Task A See “Configuring application and replication for global cluster setup”
on page 476.
Task B See “Configuring clusters for global cluster setup” on page 477.
Task C See “Configuring service groups for global cluster setup” on page 484.
Task D See “Configuring a service group as a global service group” on page 488.
See “Installing and configuring VCS at the secondary site” on page 479.
Run the GCO Configuration wizard to create or update the ClusterService group.
The wizard verifies your configuration and validates it for a global cluster setup.
See “About installing a VCS license” on page 92.
To configure global cluster components at the primary site
1 Start the GCO Configuration wizard.
# gcoconfig
The wizard discovers the NIC devices on the local system and prompts you to
enter the device to be used for the global cluster.
2 Specify the name of the device and press Enter.
3 If you do not have NIC resources in your configuration, the wizard asks you
whether the specified NIC will be the public NIC used by all systems.
Enter y if it is the public NIC; otherwise enter n. If you entered n, the wizard
prompts you to enter the names of NICs on all systems.
4 Enter the virtual IP to be used for the global cluster.
You must use either an IPv4 or an IPv6 address. VCS does not support configuring
clusters that use different Internet Protocol versions in a global cluster.
5 If you do not have IP resources in your configuration, the wizard does the
following:
■ For IPv4 address:
The wizard prompts you for the netmask associated with the virtual IP. The
wizard detects the netmask; you can accept the suggested value or enter
another value.
■ For IPv6 address:
The wizard prompts you for the prefix associated with the virtual IP.
6 The wizard prompts for the values for the network hosts. Enter the values.
7 The wizard starts running commands to create or update the ClusterService
group. Various messages indicate the status of these commands. After running
these commands, the wizard brings the ClusterService group online.
8 Verify that the gcoip resource that monitors the virtual IP address for inter-cluster
communication is online.
For more information, see the Cluster Server Configuration and Upgrade Guide.
2 Establish trust between the clusters.
For example in a VCS global cluster environment with two clusters, perform
the following steps to establish trust between the clusters:
■ On each node of the first cluster, enter the following command:
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/WAC;
/opt/VRTSvcs/bin/vcsat setuptrust -b
IP_address_of_any_node_from_the_second_cluster:14149 -s high
The command obtains and displays the security certificate and other details
of the root broker of the second cluster.
If the details are correct, enter y at the command prompt to establish trust.
For example:
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/WAC
/opt/VRTSvcs/bin/vcsat setuptrust -b
IP_address_of_any_node_from_the_first_cluster:14149 -s high
The command obtains and displays the security certificate and other details
of the root broker of the first cluster.
If the details are correct, enter y at the command prompt to establish trust.
Alternatively, if you have passwordless communication set up on the cluster,
you can use the installvcs -securitytrust option to set up trust with
a remote cluster.
3 ■ Skip the remaining steps in this procedure if you used the installvcs
-security command after the global cluster was set up.
■ Complete the remaining steps in this procedure if you had a secure cluster
and then used the gcoconfig command.
4 On each cluster, take the wac resource offline on the node where the wac
resource is online. For each cluster, run the following command:
# haconf -makerw
# hares -modify wac StartProgram \
"/opt/VRTSvcs/bin/wacstart -secure"
# hares -modify wac MonitorProcesses \
"/opt/VRTSvcs/bin/wac -secure"
# haconf -dump -makero
5 On each cluster, bring the wac resource online. For each cluster, run the
following command on any node:
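Under the standard hares syntax, bringing the wac resource online would look roughly like the following (the node name is a placeholder):

```
# hares -online wac -sys sysA
```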
To configure the Steward process for clusters not running in secure mode
1 Identify a system that will host the Steward process.
2 Make sure that both clusters can connect to the system through a ping
command.
3 Copy the file steward from a node in the cluster to the Steward system. The
file resides at the following path:
/opt/VRTSvcs/bin/
4 In both clusters, set the Stewards attribute to the IP address of the system
running the Steward process.
For example:
cluster cluster1938 (
UserNames = { admin = gNOgNInKOjOOmWOiNL }
ClusterAddress = "10.182.147.19"
Administrators = { admin }
CredRenewFrequency = 0
CounterInterval = 5
Stewards = {"10.212.100.165"}
}
5 On the system designated to host the Steward, start the Steward process:
# steward -start
■ On the system that is designated to run the Steward process, run the
installvcs -securityonenode command.
The installer prompts for a confirmation if VCS is not configured or if VCS
is not running on all nodes of the cluster. Enter y when the installer prompts
whether you want to continue configuring security.
For more information about the -securityonenode option, see the Cluster
Server Configuration and Upgrade Guide.
# unset EAT_DATA_DIR
# unset EAT_HOME_DIR
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat createpd -d
VCS_SERVICES -t ab
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat addprpl -t ab
-d VCS_SERVICES -p STEWARD -s password
# mkdir -p /var/VRTSvcs/vcsauth/data/STEWARD
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/STEWARD
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/WAC
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/STEWARD
8 In both the clusters, set the Stewards attribute to the IP address of the system
running the Steward process.
For example:
cluster cluster1938 (
UserNames = { admin = gNOgNInKOjOOmWOiNL }
ClusterAddress = "10.182.147.19"
Administrators = { admin }
CredRenewFrequency = 0
CounterInterval = 5
Stewards = {"10.212.100.165"}
}
9 On the system designated to run the Steward, start the Steward process:
# steward -start -secure
To stop the Steward process running in secure mode, open a new command
window and run the following command:
# steward -stop -secure
■ You can do this by either using Cluster Manager (Java Console) to copy
and paste resources from the primary cluster, or by copying the configuration
from the main.cf file in the primary cluster to the secondary cluster.
■ Make appropriate changes to the configuration. For example, you must
modify the SystemList attribute to reflect the systems in the secondary
cluster.
■ Make sure that the name of the service group (appgroup) is identical in
both clusters.
6 Click Next.
7 Enter or review the connection details for each cluster.
Click the Configure icon to review the remote cluster information for each
cluster.
8 Enter the IP address of the remote cluster, the IP address of a cluster system,
or the host name of a cluster system.
9 Enter the user name and the password for the remote cluster and click OK.
10 Click Next.
11 Click Finish.
12 Save the configuration.
The appgroup service group is now a global group and can be failed over
between clusters.
To switch the service group when the primary site has failed and the secondary
did a takeover
1 In the Service Groups tab of the configuration tree, right-click the resource.
2 Click Actions.
3 Specify the details of the action:
■ From the Action list, choose fbsync.
■ Click the system on which to execute the action.
■ Click OK.
This begins a fast-failback of the replicated data set. You can monitor the value
of the ResourceInfo attribute for the RVG resource to determine when the
resynchronization has completed.
4 Once the resynchronization completes, switch the service group to the primary
cluster.
■ In the Service Groups tab of the Cluster Explorer configuration tree,
right-click the service group.
■ Click Switch To, and click Remote switch.
■ In the Switch global group dialog box, click the cluster to switch the group.
Click the specific system, or click Any System, and click OK.
You can override the FireDrill attribute and make fire drill resource-specific.
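Under the standard VCS CLI, making the attribute resource-specific involves overriding the static attribute on the individual resource; a hedged sketch follows (the resource name is a placeholder; verify the override procedure against your release):

```
# hares -override orares FireDrill
# hares -modify orares FireDrill 1
```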
Before you perform a fire drill in a disaster recovery setup that uses Volume
Replicator, perform the following steps:
■ Configure the fire drill service group.
See “About creating and configuring the fire drill service group manually”
on page 491.
See “About configuring the fire drill service group using the Fire Drill Setup
wizard” on page 494.
If you configure the fire drill service group manually using the command line or
the Cluster Manager (Java Console), set an offline local dependency between
the fire drill service group and the application service group to make sure a fire
drill does not block an application failover in case a disaster strikes the primary
site. If you use the Fire Drill Setup (fdsetup) wizard, the wizard creates this
dependency.
■ Set the value of the ReuseMntPt attribute to 1 for all Mount resources.
■ After the fire drill service group is taken offline, reset the value of the ReuseMntPt
attribute to 0 for all Mount resources.
VCS also supports HA fire drills to verify that a resource can fail over to another
node in the cluster.
See “About testing resource failover by using HA fire drills” on page 211.
Note: You can conduct fire drills only on regular VxVM volumes; volume sets (vset)
are not supported.
VCS provides hardware replication agents for array-based solutions, such as Hitachi
TrueCopy, EMC SRDF, and so on. If you are using hardware replication agents to
monitor the replicated data clusters, refer to the VCS replication agent documentation
for details on setting up and configuring fire drill.
About creating and configuring the fire drill service group manually
You can create the fire drill service group using the command line or Cluster
Manager (Java Console). The fire drill service group uses the duplicated copy of
the application data.
Creating and configuring the fire drill service group involves the following tasks:
■ See “Creating the fire drill service group” on page 492.
■ See “Linking the fire drill and replication service groups” on page 493.
■ See “Adding resources to the fire drill service group” on page 493.
About configuring the fire drill service group using the Fire Drill Setup
wizard
Use the Fire Drill Setup Wizard to set up the fire drill configuration.
The wizard performs the following specific tasks:
■ Creates a Cache object to store changed blocks during the fire drill, which
minimizes disk space and disk spindles required to perform the fire drill.
■ Configures a VCS service group that resembles the real application group.
The wizard works only with application groups that contain one disk group. The
wizard sets up the first RVG in an application. If the application has more than one
RVG, you must create space-optimized snapshots and configure VCS manually,
using the first RVG as reference.
You can schedule the fire drill for the service group using the fdsched script.
See “Scheduling a fire drill” on page 497.
# /opt/VRTSvcs/bin/fdsetup
2 Read the information on the Welcome screen and press the Enter key.
3 The wizard identifies the global service groups. Enter the name of the service
group for the fire drill.
4 Review the list of volumes in the disk group that could be used for a
space-optimized snapshot. Enter the volumes to be selected for the snapshot.
Typically, all volumes that the application uses, whether replicated or not, should
be prepared; otherwise, a snapshot might not succeed.
Press the Enter key when prompted.
5 Enter the cache size to store writes when the snapshot exists. The size of the
cache must be large enough to store the expected number of changed blocks
during the fire drill. However, the cache is configured to grow automatically if
it fills up. Enter disks on which to create the cache.
Press the Enter key when prompted.
6 The wizard starts running commands to create the fire drill setup.
Press the Enter key when prompted.
The wizard creates the application group with its associated resources. It also
creates a fire drill group with resources for the application (Oracle, for example),
the Mount, and the RVGSnapshot types.
The application resources in both service groups define the same application,
the same database in this example. The wizard sets the FireDrill attribute for
the application resource to 1 to prevent the agent from reporting a concurrency
violation when the actual application instance and the fire drill service group
are online at the same time.
Remember to take the fire drill offline once its functioning has been validated. Failing
to take the fire drill offline could cause failures in your environment. For example,
if the application service group were to fail over to the node hosting the fire drill
service group, there would be resource conflicts, resulting in both service groups
faulting.
Figure: A two-tier, two-site environment spanning Stockton and Denver.
Just as a two-tier, two-site environment is possible, you can also tie a three-tier
environment together.
Figure 16-6 represents a two-site, three-tier environment. The application cluster,
which is globally clustered between L.A. and Denver, has cluster dependencies up
and down the tiers. Cluster 1 (C1), depends on the RemoteGroup resource on the
DB tier for cluster 3 (C3), and then on the remote service group for cluster 5 (C5).
The stack for C2, C4, and C6 functions the same.
Figure: A two-site, three-tier environment. Site A is in Stockton and Site B is in
Denver.
include "types.cf"
cluster C1 (
ClusterAddress = "10.182.10.145"
)
remotecluster C2 (
ClusterAddress = "10.182.10.146"
)
heartbeat Icmp (
ClusterList = { C2 }
AYATimeout = 30
Arguments @C2 = { "10.182.10.146" }
)
system sysA (
)
system sysB (
)
group LSG (
SystemList = { sysA = 0, sysB = 1 }
ClusterList = { C2 = 0, C1 = 1 }
AutoStartList = { sysA, sysB }
ClusterFailOverPolicy = Auto
)
FileOnOff filec1 (
PathName = "/tmp/c1"
)
RemoteGroup RGR (
IpAddress = "10.182.6.152"
// The above IPAddress is the highly available address of C3—
// the same address that the wac uses
Username = root
Password = xxxyyy
GroupName = RSG
VCSSysName = ANY
ControlMode = OnOff
)
include "types.cf"
cluster C2 (
ClusterAddress = "10.182.10.146"
)
remotecluster C1 (
ClusterAddress = "10.182.10.145"
)
heartbeat Icmp (
ClusterList = { C1 }
AYATimeout = 30
Arguments @C1 = { "10.182.10.145" }
)
system sysC (
)
system sysD (
)
group LSG (
SystemList = { sysC = 0, sysD = 1 }
ClusterList = { C2 = 0, C1 = 1 }
Authority = 1
AutoStartList = { sysC, sysD }
ClusterFailOverPolicy = Auto
)
FileOnOff filec2 (
PathName = "/tmp/filec2"
)
RemoteGroup RGR (
IpAddress = "10.182.6.154"
// The above IPAddress is the highly available address of C4—
// the same address that the wac uses
Username = root
Password = vvvyyy
GroupName = RSG
VCSSysName = ANY
ControlMode = OnOff
)
include "types.cf"
cluster C3 (
ClusterAddress = "10.182.6.152"
)
remotecluster C4 (
ClusterAddress = "10.182.6.154"
)
heartbeat Icmp (
ClusterList = { C4 }
AYATimeout = 30
Arguments @C4 = { "10.182.6.154" }
)
system sysW (
)
system sysX (
)
group RSG (
SystemList = { sysW = 0, sysX = 1 }
ClusterList = { C3 = 1, C4 = 0 }
AutoStartList = { sysW, sysX }
ClusterFailOverPolicy = Auto
)
FileOnOff filec3 (
PathName = "/tmp/filec3"
)
include "types.cf"
cluster C4 (
ClusterAddress = "10.182.6.154"
)
remotecluster C3 (
ClusterAddress = "10.182.6.152"
)
heartbeat Icmp (
ClusterList = { C3 }
AYATimeout = 30
Arguments @C3 = { "10.182.6.152" }
)
system sysY (
)
system sysZ (
)
group RSG (
SystemList = { sysY = 0, sysZ = 1 }
ClusterList = { C3 = 1, C4 = 0 }
Authority = 1
AutoStartList = { sysY, sysZ }
ClusterFailOverPolicy = Auto
)
FileOnOff filec4 (
PathName = "/tmp/filec4"
)
Chapter 17
Administering global
clusters from the
command line
This chapter includes the following topics:
The option -clus displays the attribute value on the cluster designated by the
variable cluster; the option -localclus specifies the local cluster.
If the attribute has local scope, you must specify the system name, except
when querying the attribute on the system from which you run the command.
The option -clus displays the state of all service groups on a cluster designated
by the variable cluster; the option -localclus specifies the local cluster.
To display service group information across clusters
◆ Use the following command to display service group information across clusters:
The option -clus applies to global groups only. If the group is local, the cluster
name must be the local cluster name, otherwise no information is displayed.
To display service groups in a cluster
◆ Use the following command to display service groups in a cluster:
The option -clus lists all service groups on the cluster designated by the
variable cluster; the option -localclus specifies the local cluster.
To display usage for the service group command
◆ Use the following command to display usage for the service group command:
The option -clus displays the attribute value on the cluster designated by the
variable cluster; the option -localclus specifies the local cluster.
If the attribute has local scope, you must specify the system name, except
when querying the attribute on the system from which you run the command.
To display the state of a resource across clusters
◆ Use the following command to display the state of a resource across clusters:
The option -clus displays the state of all resources on the specified cluster;
the option -localclus specifies the local cluster. Specifying a system displays
resource state on a particular system.
To display resource information across clusters
◆ Use the following command to display resource information across clusters:
The option -clus lists all service groups on the cluster designated by the
variable cluster; the option -localclus specifies the local cluster.
To display a list of resources across clusters
◆ Use the following command to display a list of resources across clusters:
The option -clus lists all resources that meet the specified conditions in global
service groups on a cluster as designated by the variable cluster.
To display usage for the resource command
◆ Use the following command to display usage for the resource command:
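The group and resource queries described above use the standard hagrp and hares commands; their typical forms are sketched below (bracketed items are optional; verify against the manual pages for your release):

```
hagrp -value group attribute [system] [-clus cluster | -localclus]
hagrp -state [group] [-clus cluster | -localclus]
hagrp -display [group] [-clus cluster | -localclus]
hagrp -list [conditionals] [-clus cluster | -localclus]
hares -value resource attribute [system] [-clus cluster | -localclus]
hares -state [resource] [-sys system] [-clus cluster | -localclus]
hares -display [resource] [-clus cluster | -localclus]
hares -list [conditionals] [-clus cluster]
```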
Querying systems
This topic describes how to perform queries on systems:
To display system attribute values across clusters
◆ Use the following command to display system attribute values across clusters:
The option -clus displays the values of a system attribute in the cluster as
designated by the variable cluster; the option -localclus specifies the local
cluster.
To display the state of a system across clusters
◆ Use the following command to display the state of a system across clusters:
Displays the current state of the specified system. The option -clus displays
the state in a cluster designated by the variable cluster; the option -localclus
specifies the local cluster. If you do not specify a system, the command displays
the states of all systems.
For information about each system across clusters
◆ Use the following command to display information about each system across
clusters:
The option -clus displays the attribute values on systems (if specified) in a
cluster designated by the variable cluster; the option -localclus specifies the
local cluster.
For a list of systems across clusters
◆ Use the following command to display a list of systems across clusters:
Displays a list of systems whose values match the given conditional statements.
The option -clus displays the systems in a cluster designated by the variable
cluster; the option -localclus specifies the local cluster.
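The system queries follow the same pattern with hasys; a hedged sketch of the typical forms (verify against the hasys manual page for your release):

```
hasys -value system attribute [-clus cluster | -localclus]
hasys -state [system] [-clus cluster | -localclus]
hasys -display [system] [-clus cluster | -localclus]
hasys -list [conditionals] [-clus cluster | -localclus]
```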
Querying clusters
This topic describes how to perform queries on clusters:
The attribute must be specified in this command. If you do not specify the
cluster name, the command displays the attribute value on the local cluster.
To display the state of a local or remote cluster
◆ Use the following command to display the state of a local or remote cluster:
The variable cluster represents the cluster. If a cluster is not specified, the state
of the local cluster and the state of all remote cluster objects as seen by the
local cluster are displayed.
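A sketch of this query, assuming the standard haclus syntax (cluster is a placeholder):
# haclus -state [cluster]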
For information on the state of a local or remote cluster
◆ Use the following command for information on the state of a local or remote
cluster:
Lists the clusters that meet the specified conditions, beginning with the local
cluster.
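A sketch of this query, assuming the standard haclus syntax (conditionals is a placeholder):
# haclus -list [conditionals]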
To display usage for the cluster command
◆ Use the following command to display usage for the cluster command:
Querying status
This topic describes how to perform queries on the status of remote and local clusters:
For the status of local and remote clusters
◆ Use the following command to obtain the status of local and remote clusters:
hastatus
Querying heartbeats
The hahb command is used to manage WAN heartbeats that emanate from the
local cluster. Administrators can monitor the health of the remote cluster via
heartbeat commands and mechanisms such as Internet, satellites, or storage
replication technologies. Heartbeat commands are applicable only on the cluster
from which they are issued.
Note: You must have Cluster Administrator privileges to add, delete, and modify
heartbeats.
The variable conditionals represents the conditions that must be met for the
heartbeat to be listed.
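A sketch of this query, assuming the standard hahb syntax (conditionals is a placeholder):
# hahb -list [conditionals]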
For example, to get the state of heartbeat Icmp from the local cluster to the
remote cluster phoenix:
The -value option provides the value of a single attribute for a specific
heartbeat. The cluster name must be specified for cluster-specific attribute
values, but not for global attribute values.
For example, to display the value of the ClusterList attribute for heartbeat Icmp:
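A sketch of this query, assuming the standard hahb syntax:
# hahb -value Icmp ClusterList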
If the -modify option is specified, the usage for the hahb -modify option is
displayed.
The option -nopre indicates that the VCS engine must switch
the service group regardless of the value of the PreSwitch service
group attribute.
The -any option specifies that the VCS engine switches a service
group to the best possible system on which it is currently not online,
based on the value of the group's FailOverPolicy attribute. The
VCS engine switches a global service group from a system to
another system in the local cluster or a remote cluster.
If you do not specify the -clus option, the VCS engine by default
assumes the -localclus option and selects an available system
within the local cluster.
The option -clus identifies the remote cluster to which the service
group will be switched. The VCS engine then selects the target
system on which to switch the service group.
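A sketch of these switch variants, assuming the standard hagrp syntax (group and cluster are placeholders):
# hagrp -switch group -any
# hagrp -switch group -clus cluster [-nopre]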
To change the cluster name: See “Changing the cluster name in a global cluster setup” on page 517.
haalert -display
For each alert, the command displays the following information:
■ alert ID
■ time when alert occurred
■ cluster on which alert occurred
■ object name for which alert occurred (cluster name, group name, and so on)
■ informative message about alert
haalert -list
For each alert, the command displays the alert ID and the time when the alert occurred.
haalert -delete alert_id -notes "description"
Deletes a specific alert. You must enter a text message within quotes describing
the reason for deleting the alert. The comment is written to the engine log as well
as sent to any connected GUI clients.
2 Remove the remote cluster from the ClusterList of all the global groups using
the following command:
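A sketch of this step, assuming the standard hagrp keylist syntax (group and remote_clus are placeholders):
# hagrp -modify group ClusterList -delete remote_clus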
3 Remove the remote cluster from the ClusterList of all the heartbeats using the
following command:
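A sketch of this step, assuming the standard hahb keylist syntax (heartbeat and remote_clus are placeholders):
# hahb -modify heartbeat ClusterList -delete remote_clus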
If the attribute is local, that is, it has a separate value for each
remote cluster in the ClusterList attribute, the option -clus
cluster must be specified. Use -delete -keys to clear the
value of any list attributes.
Note: VVR supports multiple replication secondary targets for any given primary.
However, RDC for VCS supports only one replication secondary for a primary.
Note: You must use dual dedicated LLT links between the replicated nodes.
[Figure: Replicated data cluster layout - public clients are redirected across the network from zone 0 to zone 1; the zones are connected by a private network; each zone runs its own service group on separate storage, with application failover across replicated data.]
In the event of a system or application failure, VCS attempts to fail over the
application service group to another system within the same RDC site. However,
in the event that VCS fails to find a failover target node within the primary RDC site,
VCS switches the service group to a node in the current secondary RDC site (zone
1).
[Figure: Application service group resource dependencies, including Application, Mount, DNS, RVGPrimary, IP, NIC, and DiskGroup resources.]
6 Set the SystemZones attribute of the child group, oragrp_rep, such that all
nodes in the primary RDC zone are in system zone 0 and all nodes in the
secondary RDC zone are in system zone 1.
To configure the application service group
1 In the original Oracle service group (oragroup), delete the DiskGroup resource.
2 Add an RVGPrimary resource and configure its attributes.
Set the value of the RvgResourceName attribute to the name of the RVG type
resource that will be promoted and demoted by the RVGPrimary agent.
Set the AutoTakeover and AutoResync attributes from their defaults as desired.
3 Set resource dependencies such that all Mount resources depend on the
RVGPrimary resource. If there are many Mount resources, you can set the
TypeDependencies attribute for the group to denote that the Mount resource
type depends on the RVGPrimary resource type.
4 Set the SystemZones attribute of the Oracle service group such that all nodes
in the primary RDC zone are in system zone 0 and all nodes in the secondary
RDC zone are in zone 1. The SystemZones attribute of both the parent and
the child group must be identical.
5 If your setup uses BIND DNS, add a resource of type DNS to the oragroup
service group. Set the Hostname attribute to the canonical name of the host
or virtual IP address that the application uses on that cluster. This ensures
DNS updates to the site when the group is brought online. A DNS resource
would be necessary only if the nodes in the primary and the secondary RDC
zones are in different IP subnets.
If the storage problem is corrected, you can switch the application to the primary
site using VCS.
■ You must install Veritas Volume Manager with the FMR license and the Site
Awareness license.
■ Veritas recommends that you configure I/O fencing to prevent data corruption
in the event of link failures.
See the Cluster Server Configuration and Upgrade Guide for more details.
■ You must configure storage to meet site-based allocation and site-consistency
requirements for VxVM.
■ All the nodes in the site must be tagged with the appropriate VxVM site
names.
■ All the disks must be tagged with the appropriate VxVM site names.
■ The VxVM site names of both the sites in the campus cluster must be added
to the disk groups.
■ The allsites attribute for each volume in the disk group must be set to on.
(By default, the value is set to on.)
■ The siteconsistent attribute for the disk groups must be set to on.
Typical VCS campus cluster setup
[Figure: Typical VCS campus cluster setup - two sites connected by public and private networks through switches, each site with its own disk array, and a third site (site C) hosting a coordination point.]
2 VCS fails over the service group if another suitable node exists in the
same site. Otherwise, VCS waits for administrator intervention to initiate
the service group failover to a suitable node in the other site.
Sample definition for these service group attributes in the VCS main.cf is as follows:
cluster VCS_CLUS (
PreferredFencingPolicy = Site
SiteAware = 1
)
site MTV (
SystemList = { sys1, sys2 }
)
site SFO (
Preference = 2
SystemList = { sys3, sys4 }
)
hybrid_group (
Parallel = 2
SystemList = { sys1 = 0, sys2 = 1, sys3 = 2, sys4 = 3 }
)
failover_group (
AutoFailover = 2
SystemList = { sys1 = 0, sys2 = 1, sys3 = 2, sys4 = 3 }
)
Table 19-1 lists the possible failure scenarios and how VCS campus cluster recovers
from these failures.
Storage failure - one or more disks at a site fail: VCS does not fail over the service
group when such a storage failure occurs.
VxVM detaches the site from the disk group if any volume in that disk
group does not have at least one valid plex at the site where the disks
failed.
VxVM does not detach the site from the disk group in the following
cases:
If only some of the disks that failed come online and if the vxrelocd
daemon is running, VxVM relocates the remaining failed disks to any
available disks. Then, VxVM automatically reattaches the site to the
disk group and resynchronizes the plexes to recover the volumes.
If all the disks that failed come online, VxVM automatically reattaches
the site to the disk group and resynchronizes the plexes to recover the
volumes.
Storage failure - all disks at both sites fail: VCS acts based on the DiskGroup
agent's PanicSystemOnDGLoss attribute value.
See the Cluster Server Bundled Agents Reference Guide for more information.
■ If the value is set to 1, VCS fails over the service group to a system.
■ If the value is set to 2, VCS requires administrator intervention to
initiate the service group failover to a system in the other site.
Because the storage at the failed site is inaccessible, VCS imports the
disk group in the application service group with all devices at the failed
site marked as NODEVICE.
When the storage at the failed site comes online, VxVM automatically
reattaches the site to the disk group and resynchronizes the plexes to
recover the volumes.
Network failure (LLT interconnect failure): Nodes at each site lose connectivity to
the nodes at the other site. The failure of all private interconnects between the
nodes can result in a split-brain scenario and cause data corruption.
Review the details on other possible causes of split brain and how I/O fencing
protects shared data from corruption.
Network failure (LLT and storage interconnect failure): Nodes at each site lose
connectivity to the storage and the nodes at the other site. Veritas recommends
that you configure I/O fencing to prevent split-brain and serial split-brain conditions.
Note: You can also configure VxVM disk groups for remote mirroring using Veritas
InfoScale Operations Manager.
The site name is stored in the /etc/vx/volboot file. Use the following command
to display the site names:
# vxdisk listtag
5 Configure site-based allocation on the disk group that you created for each
site that is registered to the disk group.
With the Site Awareness license installed on all hosts, the volume that you
create has the following characteristics by default:
■ The allsites attribute is set to on; the volumes have at least one plex at
each site.
2 Set up the system zones or sites. Configure the SystemZones attribute for the
service group. Skip this step when sites are configured through Veritas InfoScale
Operations Manager.
3 Set up the group fail over policy. Set the value of the AutoFailOver attribute
for the service group.
4 For the disk group you created for campus clusters, add a DiskGroup resource
to the VCS service group app_sg.
5 Configure the application and other related resources to the app_sg service
group.
6 Bring the service group online.
The process involves creating a fire drill service group, which is similar to the original
application service group. Bringing the fire drill service group online on the remote
node demonstrates the ability of the application service group to fail over and come
online at the site, should the need arise.
Fire drill service groups do not interact with outside clients or with other instances
of resources, so they can safely come online even when the application service
group is online. Conduct a fire drill only at the remote site; do not bring the fire drill
service group online on the node hosting the original application.
Note: To perform fire drill, the application service group must be online at the primary
site.
Note: For the applications for which you want to perform fire drill, you must set the
value of the FireDrill attribute for those application resource types to 1. After you
complete fire drill, reset the value to 0.
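A sketch of these steps, assuming the attribute is set at the resource type level with hatype (resource_type is a placeholder):
# hatype -modify resource_type FireDrill 1
After you complete the fire drill:
# hatype -modify resource_type FireDrill 0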
Warning: You must take the fire drill service group offline after you complete
the fire drill so that the failover behavior of the application service group is not
impacted. Otherwise, when a disaster strikes at the primary site, the application
service group cannot fail over to the secondary site due to resource conflicts.
4 After you complete the fire drill, take the fire drill service group offline.
5 Reset the FireDrill attribute for the resource to 0.
6 Undo or override the FireDrill attribute from the resource level.
Section 7
Troubleshooting and
performance
The number of clients connected to VCS can affect performance if several events
occur simultaneously. For example, if five GUI processes are connected to VCS,
VCS sends state updates to all five. Maintaining fewer client connections to VCS
reduces this overhead.
You can also adjust how often VCS monitors various functions by modifying their
associated attributes. The attributes MonitorTimeout, OnlineTimeout, and
OfflineTimeout indicate the maximum time (in seconds) within which the monitor,
online, and offline functions must complete or else be terminated. The default for
the MonitorTimeout attribute is 60 seconds. The defaults for the OnlineTimeout and
OfflineTimeout attributes are 300 seconds. For best results, Veritas recommends
measuring the time it takes to bring a resource online, take it offline, and monitor
before modifying the defaults. Issue an online or offline command to measure the
time it takes for each action. To measure how long it takes to monitor a resource,
fault the resource and issue a probe, or bring the resource online outside of VCS
control and issue a probe.
Agents typically run with normal priority. When you develop agents, consider the
following:
■ If you write a custom agent, write the monitor function using C or C++. If you
write a script-based monitor, VCS must invoke a new process each time it calls
the monitor function. This can be costly if you have many resources of that type.
■ If monitoring the resources is proving costly, you can divide it into cursory, or
shallow monitoring, and the more extensive deep (or in-depth) monitoring.
Whether to use shallow or deep monitoring depends on your configuration
requirements.
As an additional consideration for agents, properly configure the attribute SystemList
for your service group. For example, if you know that a service group can go online
on SystemA and SystemB only, do not include other systems in the SystemList.
This saves additional agent processes and monitoring overhead.
A cluster system boots: See “VCS performance consideration when booting a cluster system” on page 547.
A resource goes offline: See “VCS performance consideration when a resource goes offline” on page 548.
A service group comes online: See “VCS performance consideration when a service group comes online” on page 548.
A service group goes offline: See “VCS performance consideration when a service group goes offline” on page 549.
A network link fails: See “VCS performance consideration when a network link fails” on page 551.
A service group switches over: See “VCS performance consideration when a service group switches over” on page 553.
A service group fails over: See “VCS performance consideration when a service group fails over” on page 554.
Note: Bringing service groups online as part of AutoStart occurs after VCS transitions
to RUNNING mode.
Complex service group trees do not allow much parallelism and serialize the group
online operation.
set-timer peerinact:1200
Note: After modifying the peer inactive timeout, you must unconfigure, then restart
LLT before the change is implemented. To unconfigure LLT, type lltconfig -U.
To restart LLT, type lltconfig -c.
gabconfig -t timeout_value_milliseconds
Though this can be done, we do not recommend changing the values of the LLT
peer inactive timeout and GAB stable timeout.
If a system boots, it becomes unavailable until the reboot is complete. The reboot
process kills all processes, including HAD. When the VCS process is killed, other
systems in the cluster mark all service groups that can go online on the rebooted
system as autodisabled. The AutoDisabled flag is cleared when the system goes
offline. As long as the system goes offline within the interval specified in the
ShutdownTimeout value, VCS treats this as a system reboot. You can modify the
default value of the ShutdownTimeout attribute.
See System attributes on page 707.
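A sketch of modifying this attribute, assuming the standard hasys syntax (system and value are placeholders):
# hasys -modify system ShutdownTimeout value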
set-timer peerinact:1200
Note: After modifying the peer inactive timeout, you must unconfigure, then restart
LLT before the change is implemented. To unconfigure LLT, type lltconfig -U.
To restart LLT, type lltconfig -c.
HAD heartbeats with GAB at regular intervals. The heartbeat timeout is specified
by HAD when it registers with GAB; the default is 30 seconds. If HAD gets stuck
within the kernel and cannot heartbeat with GAB within the specified timeout, GAB
tries to kill HAD by sending a SIGABRT signal. If it does not succeed, GAB sends
a SIGKILL and closes the port.
By default, GAB tries to kill HAD five times before closing the port. The number of
times GAB tries to kill HAD is a kernel tunable parameter, gab_kill_ntries, and is
configurable. The minimum value for this tunable is 3 and the maximum is 10.
Port closure is an indication to other nodes that HAD on this node has been killed.
Should HAD recover from its stuck state, it first processes pending signals. Here it
receives the SIGKILL first and gets killed.
After GAB sends a SIGKILL signal, it waits for a specific amount of time for HAD
to get killed. If HAD survives beyond this time limit, GAB panics the system. This
time limit is a kernel tunable parameter, gab_isolate_time, and is configurable. The
minimum value for this timer is 16 seconds and maximum is 4 minutes.
■ To configure GAB to panic the system in this situation, set:
VCS_GAB_RMACTION=panic
In this configuration, killing the HAD and hashadow processes results in a panic
unless you start HAD within the registration monitoring timeout interval.
■ To configure GAB to log a message in this situation, set:
VCS_GAB_RMACTION=SYSLOG
The default value of this parameter is SYSLOG, which configures GAB to log a
message when HAD does not reconnect after the specified time interval.
In this scenario, you can choose to restart HAD (using hastart) or unconfigure
GAB (using gabconfig -U).
When you enable registration monitoring, GAB takes no action if the HAD process
unregisters with GAB normally, that is, if you stop HAD using the hastop command.
Note: For standard configurations, Veritas recommends using the default values
for scheduling unless specific configuration requirements dictate otherwise.
Note that the default priority value is platform-specific. When priority is set to ""
(empty string), VCS converts the priority to a value specific to the platform on which
the system is running. For TS, the default priority equals the strongest priority
supported by the TimeSharing class. For RT, the default priority equals two less
than the strongest priority supported by the RealTime class. So, if the strongest
priority supported by the RealTime class is 59, the default priority for the RT class
is 57.
llt_maxnids=32
llt_maxports=32
llt_nominpad=0
llt_basetimer=50000
llt_thread_pri=40
Table 20-3 depicts the priorities for LLT threads for AIX.
Note that the priorities must lie within the priority range. If you specify a priority
outside the range, LLT resets it to a value within the range.
To specify priorities for LLT threads
◆ Specify priorities for these tunables by setting the following entries in the file
/etc/llt.conf:
llt_thread_pri=<priority value>
Note that these changes take effect when the LLT module reloads.
To overcome this issue, VCS provides the option of running HAD on a specific
logical processor. VCS disables all interrupts on that logical processor. Configure
HAD to run on a specific logical processor by setting the CPUBinding attribute.
Note: CPU binding of HAD is supported only on AIX 6.1 TL6 or later and AIX 7.1.
where:
■ The value NONE indicates that HAD does not use CPU binding.
■ The value ANY indicates that HAD binds to any available logical CPU.
■ The value CPUNUM indicates that HAD binds to the logical CPU specified in
the CPUNumber attribute.
The variable number specifies the serial number of the logical CPU.
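A sketch of setting this attribute, assuming the standard hasys association-attribute syntax (system and number are placeholders):
# hasys -modify system CPUBinding BindTo CPUNUM CPUNumber number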
Warning: Veritas recommends that you modify CPUBinding on the local system.
If you specify an invalid processor number when you try to bind HAD to a
processor on a remote system, HAD does not bind to any CPU. The command
displays no error that the CPU does not exist.
Note: You cannot use the -add, -update, or -delete [-keys] options for the hasys
-modify command to modify the CPUBinding attribute.
In certain scenarios, when HAD is killed or is not running, the logical CPU interrupts
might remain disabled. However, you can enable the interrupts manually. To check
for the logical processor IDs that have interrupts disabled, run the following
command:
# cpuextintr_ctl -q disable
average by more than this threshold value, VCS regards this as a sign of gradual
increase or decrease in monitor cycle times and sends a notification about it for
the resource. Whenever such an event occurs, VCS resets the internally
maintained benchmark average to this new average. VCS sends notifications
regardless of whether the deviation is an increase or decrease in the monitor
cycle time.
For example, a value of 25 means that if the actual average monitor time is 25%
more than the benchmark monitor time average, VCS sends a notification.
MonitorTimeStats: Stores the average time taken by a number of monitor cycles
specified by the Frequency attribute, along with a timestamp value of when the
average was computed.
ComputeStats: A flag that specifies whether VCS keeps track of the monitor times
for the resource.
boolean ComputeStats = 0
The value 0 indicates that VCS will not keep track of the time taken by
the monitor routine for the resource. The value 1 indicates that VCS
keeps track of the monitor time for the resource.
The default value for this attribute is 0.
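For example, to enable monitor-time tracking for a resource, assuming the standard hares syntax (resource is a placeholder):
# hares -modify resource ComputeStats 1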
the user not change the tunable kernel parameters without assistance from Veritas
Technologies support personnel. Several of the tunable parameters preallocate
memory for critical data structures, and a change in their values could increase
memory use or degrade performance.
Warning: Do not adjust the VCS tunable parameters for kernel modules such as
VXFEN without assistance from Veritas Technologies support personnel.
peerinact
LLT marks a link of a peer node as "inactive" if it does not receive any packet on
that link for this timer interval. Once a link is marked as "inactive," LLT does not
send any data on that link.
Default: 3200
When to change:
■ Change this value to delay or speed up the node/link inactive notification
mechanism as per the client's notification processing logic.
■ Increase the value for planned replacement of a faulty network cable or switch.
■ In some circumstances, when the private network links are very slow or the
network traffic becomes very bursty, increase this value to avoid false
notifications of peer death. Set the value to a high value for planned replacement
of a faulty network cable or faulty switch.
Dependency: The timer value should always be higher than the peertrouble timer
value.
peertrouble
LLT marks a high-pri link of a peer node as "troubled" if it does not receive any
packet on that link for this timer interval. Once a link is marked as "troubled," LLT
does not send any data on that link until the link is up.
Default: 200
When to change:
■ In some circumstances, when the private network links are very slow or nodes
in the cluster are very busy, increase the value.
■ Increase the value for planned replacement of a faulty network cable or faulty
switch.
Dependency: This timer value should always be lower than the peerinact timer
value. It should also be close to its default value.
peertroublelo
LLT marks a low-pri link of a peer node as "troubled" if it does not receive any
packet on that link for this timer interval. Once a link is marked as "troubled," LLT
does not send any data on that link until the link is available.
Default: 400
When to change:
■ In some circumstances, when the private network links are very slow or nodes
in the cluster are very busy, increase the value.
■ Increase the value for planned replacement of a faulty network cable or faulty
switch.
Dependency: This timer value should always be lower than the peerinact timer
value. It should also be close to its default value.
heartbeat
LLT sends heartbeat packets repeatedly to peer nodes after every heartbeat timer
interval on each high-pri link.
Default: 50
When to change: In some circumstances, when the private network links are very
slow (or congested) or nodes in the cluster are very busy, increase the value.
Dependency: This timer value should be lower than the peertrouble timer value.
It should also not be close to the peertrouble timer value.
heartbeatlo
LLT sends heartbeat packets repeatedly to peer nodes after every heartbeatlo
timer interval on each low-pri link.
Default: 100
When to change: In some circumstances, when the network links are very slow
or nodes in the cluster are very busy, increase the value.
Dependency: This timer value should be lower than the peertroublelo timer value.
It should also not be close to the peertroublelo timer value.
timetoreqhb
If LLT does not receive any packet from the peer node on a particular link for the
timetoreqhb time period, it attempts to request heartbeats from the peer node (it
sends 5 special heartbeat requests, or hbreqs, to the peer node on the same link).
If the peer node does not respond to the special heartbeat requests, LLT marks
the link as "expired" for that peer node. The value can be set in the range of 0 to
(peerinact - 200). The value 0 disables the request heartbeat mechanism.
Default: 3000
When to change:
■ Decrease the value of this tunable to speed up the node/link inactive notification
mechanism as per the client's notification processing logic.
■ Disable the request heartbeat mechanism by setting the value of this timer to
0 for planned replacement of a faulty network cable or switch.
■ In some circumstances, when the private network links are very slow or the
network traffic becomes very bursty, do not change the value of this tunable.
Dependency: This timer is set to (peerinact - 200) automatically every time the
peerinact timer is changed.
reqhbtime
This value specifies the time interval between two successive special heartbeat
requests. See the timetoreqhb parameter for more information on special heartbeat
requests.
Default: 40
When to change: Veritas does not recommend changing this value.
Dependency: Not applicable.
timetosendhb
LLT sends out-of-timer-context heartbeats to keep the node alive when the LLT
timer does not run at regular intervals. This option specifies the amount of time to
wait before sending a heartbeat when the timer is not running. If this tunable is
set to 0, the out-of-timer-context heartbeating mechanism is disabled.
Default: 200
When to change:
■ Disable the out-of-timer-context heartbeating mechanism by setting the value
of this timer to 0 for planned replacement of a faulty network cable or switch.
■ In some circumstances, when the private network links are very slow or nodes
in the cluster are very busy, increase the value.
Dependency: This timer value should not be more than the peerinact timer value.
It should also not be close to the peerinact timer value.
sendhbcap
This value specifies the maximum time for which LLT sends contiguous
out-of-timer-context heartbeats.
Default: 18000
When to change: Veritas does not recommend changing this value.
Dependency: Not applicable.
oos
If the out-of-sequence timer has expired for a node, LLT sends an appropriate
NAK to that node. LLT does not send a NAK as soon as it receives an
out-of-sequence packet; it waits for the oos timer value before sending the NAK.
Default: 10
When to change: Do not change this value for performance reasons. Lowering
the value can result in unnecessary retransmissions and negative
acknowledgement traffic. You can increase the value of oos if the round-trip time
is large in the cluster (for example, in a campus cluster).
Dependency: Not applicable.
retrans
LLT retransmits a packet if it does not receive its acknowledgement within this
timer interval.
Default: 10
When to change: Do not change this value. Lowering the value can result in
unnecessary retransmissions.
Dependency: Not applicable.
service
LLT calls its service routine (which delivers messages to LLT clients) after every
service timer interval.
Default: 100
When to change: Do not change this value for performance reasons.
Dependency: Not applicable.
arp
LLT flushes the stored addresses of peer nodes when this timer expires and
relearns the addresses.
Default: 0
When to change: This feature is disabled by default.
Dependency: Not applicable.
arpreq
LLT sends an ARP request when this timer expires to detect other peer nodes in
the cluster.
Default: 3000
When to change: Do not change this value for performance reasons.
Dependency: Not applicable.
highwater
When the number of packets in the transmit queue for a node reaches highwater,
LLT is flow controlled.
Default: 200
When to change: If a client generates data in a bursty manner, increase this value
to match the incoming data rate. Note that increasing the value means more
memory consumption, so set an appropriate value to avoid wasting memory
unnecessarily.
Dependency: This flow control value should always be higher than the lowwater
flow control value.
lowwater
When LLT has flow controlled the client, it does not start accepting packets again
until the number of packets in the port transmit queue for a node drops to lowwater.
Default: 100
When to change: Veritas does not recommend changing this tunable.
Dependency: This flow control value should be lower than the highwater flow
control value. The value should not be close to the highwater flow control value.
rporthighwater
When the number of packets in the receive queue for a port reaches highwater,
LLT is flow controlled.
Default: 200
When to change: If a client generates data in a bursty manner, increase this value
to match the incoming data rate. Note that increasing the value means more
memory consumption, so set an appropriate value to avoid wasting memory
unnecessarily.
Dependency: This flow control value should always be higher than the
rportlowwater flow control value.
rportlowwater
When LLT has flow controlled the client on the peer node, it does not start
accepting packets for that client again until the number of packets in the port
receive queue for the port drops to rportlowwater.
Default: 100
When to change: Veritas does not recommend changing this tunable.
Dependency: This flow control value should be lower than the rporthighwater flow
control value. The value should not be close to the rporthighwater flow control
value.
window
This is the maximum number of un-ACKed packets LLT puts in flight.
Default: 50
When to change: Change the value as per the private network speed. Lowering
the value irrespective of network speed may result in unnecessary retransmission
of out-of-window sequence packets.
Dependency: This flow control value should not be higher than the difference
between the highwater flow control value and the lowwater flow control value. The
value of this parameter (window) should be aligned with the value of the
bandwidth-delay product.
linkburst
It represents the number of back-to-back packets that LLT sends on a link before the next link is chosen.
Default: 32
For performance reasons, its value should be either 0 or at least 32.
This flow control value should not be higher than the difference between the highwater flow control value and the lowwater flow control value.
ackval
LLT sends acknowledgement of a packet by piggybacking an ACK packet on the next outbound data packet to the sender node. If there are no data packets on which to piggyback the ACK packet, LLT waits for ackval number of packets before sending an explicit ACK to the sender.
Default: 10
Do not change this value for performance reasons. Increasing the value can result in unnecessary retransmissions.
Not applicable
sws
To avoid Silly Window Syndrome, LLT transmits more packets only when the count of un-ACKed packets drops below this tunable value.
Default: 40
For performance reasons, its value should be changed whenever the value of the window tunable is changed, as per the formula: sws = window * 4/5.
Its value should be lower than that of window, and close to the value of the window tunable.
largepktlen
When LLT has packets to deliver to multiple ports, LLT delivers one large packet or up to five small packets to a port at a time. This parameter specifies the size of the large packet.
Default: 1024
Veritas does not recommend changing this tunable.
Not applicable
# lltconfig -T query
# lltconfig -F query
gabconfig -c -n4
After adding the option, the /etc/gabtab file looks similar to the following:
gabconfig -c -n4 -k
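The edit to /etc/gabtab can be scripted. A minimal sketch that operates on a temporary copy for safety and assumes GNU sed for in-place editing (AIX sed may lack -i); verify the result before touching the real file:

```shell
# Hypothetical sketch: append the -k option to the gabconfig line in a
# copy of /etc/gabtab, only if it is not already present.
tmp_gabtab=$(mktemp)
echo "/sbin/gabconfig -c -n4" > "$tmp_gabtab"
grep -q ' -k' "$tmp_gabtab" || \
    sed -i 's/gabconfig -c -n4/gabconfig -c -n4 -k/' "$tmp_gabtab"
cat "$tmp_gabtab"   # prints /sbin/gabconfig -c -n4 -k
rm -f "$tmp_gabtab"
```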
Table 20-7 describes the GAB dynamic tunable parameters as seen with the
gabconfig -l command, and specifies the command to modify them.
Control port seed
This option defines the minimum number of nodes that can form the cluster, and thus controls cluster formation. If the number of nodes in the cluster is less than the number specified in the gabtab file, the cluster does not form. For example, if you type gabconfig -c -n4, the cluster does not form until all four nodes join. If this option is enabled using the gabconfig -x command, a node joins the cluster even if the other nodes in the cluster are not yet part of the membership.
Use the following command to set the number of nodes that can form the cluster:
gabconfig -n count
Use the following command to enable control port seed, so that a node can form the cluster without waiting for other nodes:
gabconfig -x
gabconfig -p
gabconfig -P
This GAB option controls whether GAB panics the node when the VCS engine or the vxconfigd daemon fails to heartbeat with GAB.
If the VCS engine experiences a hang and is unable to heartbeat with GAB, GAB does NOT panic the system immediately. GAB first tries to abort the process by sending SIGABRT kill_ntries times (default value 5) at intervals of iofence_timeout (default value 15 seconds). If this fails, GAB waits for the "isolate timeout" period, which is controlled by a global tunable called isolate_time (default value 2 minutes). If the process is still alive after that, GAB panics the system.
If this option is enabled, GAB immediately HALTS the system in case of a missed heartbeat from a client.
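The escalation sequence above can be sketched numerically. Illustrative arithmetic only, using the default tunable values quoted in the text:

```shell
# Worst-case time before GAB panics a node whose client has stopped
# heartbeating, with the default tunables.
kill_ntries=5        # SIGABRT attempts (default 5)
iofence_timeout=15   # seconds between attempts (default 15)
isolate_time=120     # isolate period (default 2 minutes)
total=$(( kill_ntries * iofence_timeout + isolate_time ))
echo "worst case: ${total}s"   # prints worst case: 195s
```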
gabconfig -b
gabconfig -B
This option allows the user to configure the behavior of the VCS engine or any other user process when one or more nodes rejoin a cluster after a network partition. By default, GAB does not PANIC the node running the VCS engine; instead, GAB kills the userland process (the VCS engine or the vxconfigd process). This recycles the user port (port h in the case of the VCS engine) and clears up messages with the old generation number programmatically. Restart of the process, if required, must be handled outside of GAB control; for example, the hashadow process restarts HAD.
When GAB has kernel clients (such as fencing, VxVM, or VxFS), the node always PANICS when it rejoins the cluster after a network partition. The PANIC is mandatory because this is the only way GAB can clear ports and remove old messages.
gabconfig -j
gabconfig -J
gabconfig -k
gabconfig -q
gabconfig -d
The GAB queue limit option controls the number of pending messages beyond which GAB applies flow control. The send queue limit controls the number of pending messages in the GAB send queue; once GAB reaches this limit, it sets flow control for the sender process of the GAB client. The receive queue limit controls the number of pending messages in the GAB receive queue before GAB applies flow control on the receive side.
gabconfig -Q sendq:value
gabconfig -Q recvq:value
This parameter specifies the timeout (in milliseconds) for which GAB waits for the clients to respond to an IOFENCE message before taking the next action. Based on the value of kill_ntries, GAB attempts to kill the client process by sending the SIGABRT signal. If the client process is still registered after GAB has attempted to kill it kill_ntries times, GAB halts the system after waiting for an additional isolate_timeout period.
gabconfig -f value
Specifies the time GAB waits to reconfigure membership after the last report from LLT of a change in the state of local node connections for a given port. Any change in the state of connections restarts GAB's waiting period.
gabconfig -t stable
This tunable specifies the timeout value for which GAB waits for a client process to unregister in response to GAB sending the SIGKILL signal. If the process still exists after the isolate timeout, GAB halts the system.
gabconfig -S isolate_time:value
Kill_ntries Default: 5
This tunable specifies the number of attempts GAB will make to kill the
process by sending SIGABRT signal.
gabconfig -S kill_ntries:value
Driver state This parameter shows whether GAB is configured. GAB may not have
seeded and formed any membership yet.
Partition arbitration This parameter shows whether GAB is asked to specifically ignore
jeopardy.
See the gabconfig (1M) manual page for details on the -s flag.
■ Values
Default: 524288(512KB)
Minimum: 65536(64KB)
Maximum: 1048576(1MB)
■ Values
Default: 60
Minimum: 1
Maximum: 600
■ Values
Default: 1
Minimum: 1
Maximum: 600
vxfen_vxfnd_tmt Specifies the time in seconds that the I/O fencing driver VxFEN
waits for the I/O fencing daemon VXFEND to return after
completing a given task.
■ Values
Default: 60
Minimum: 10
Maximum: 600
panic_timeout_offst
Specifies the time in seconds based on which the I/O fencing driver VxFEN computes the delay to pass to the GAB module, so that GAB waits until fencing completes its arbitration before implementing its decision in the event of a split brain. You can set this parameter in the vxfenmode file and use the vxfenadm command to check the value. Depending on the vxfen_mode, the GAB delay is calculated as follows:
In the event of a network partition, the smaller sub-cluster delays before racing for
the coordinator disks. The time delay allows a larger sub-cluster to win the race for
the coordinator disks. The vxfen_max_delay and vxfen_min_delay parameters
define the delay in seconds.
Note: You must restart the VXFEN module to put any parameter change into effect.
# /etc/methods/vxfenext -status
# /etc/init.d/vxfen.rc start
# /etc/init.d/vxfen.rc stop
5 Ensure that VCS is shut down. Unload and reload the driver after changing
the value of the tunable:
# /etc/methods/vxfenext -stop
# /etc/methods/vxfenext -start
6 Repeat the above steps on each cluster node to change the parameter.
7 Start VCS.
# hastart
# amfconfig -T tunable_name=tunable_value,
tunable_name=tunable_value...
Table 20-9 lists the possible tunable parameters for the AMF kernel:
The parameter values that you update are reflected after you reconfigure the AMF driver. Note that if you unload the module, the updated values are lost. You must unconfigure the module using the amfconfig -U or equivalent command, and then reconfigure it using the amfconfig -c command, for the updated tunables to take effect. If you want to set the tunables at module load time, you can write these amfconfig commands in the amftab file.
■ Troubleshooting resources
■ Troubleshooting sites
■ Troubleshooting notification
■ Troubleshooting licensing
Note that the logs on all nodes may not be identical because
■ VCS logs local events on the local nodes.
Troubleshooting and recovery for VCS 583
VCS message logging
Note: Irrespective of the value of LogViaHalog, the script entry point’s logs that are
executed in the container will go into the engine log file.
From VCS 6.2 onward, the capturing of FFDC information on unexpected events has been extended to the resource level and to cover VCS events. This means that if a resource exhibits unexpected behavior, FFDC information is generated. The current version enables the agent to log detailed debug information during unexpected events with respect to a resource, such as:
■ The monitor entry point of a resource reported OFFLINE or INTENTIONAL OFFLINE when the resource was in the ONLINE state.
■ The monitor entry point of a resource reported UNKNOWN.
■ Any entry point timed out.
■ Any entry point reported failure.
■ A resource is detected as ONLINE or OFFLINE for the first time.
Now, whenever an unexpected event occurs, FFDC information is automatically generated and logged in the respective agent's log file.
/opt/VRTSgab/gabread_ffdc binary_logs_files_location
You can change the values of the following environment variables that control the
GAB binary log files:
■ GAB_FFDC_MAX_INDX: Defines the maximum number of GAB binary log files.
The GAB logging daemon collects the defined number of log files, each 8 MB in size. The default value is 20, and the files are named gablog.1 through gablog.20. At any point in time, the most recent file is the gablog.1 file.
■ GAB_FFDC_LOGDIR: Defines the log directory location for GAB binary log files
The default location is:
/var/adm/gab_ffdc
Note that the gablog daemon writes its log to the glgd_A.log and glgd_B.log
files in the same directory.
You can either define these variables in the following GAB startup file or use the
export command. You must restart GAB for the changes to take effect.
/etc/default/gab
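The export lines below are a minimal sketch of what such additions to /etc/default/gab might look like; the values are examples, not recommendations. The disk-budget arithmetic follows from the 8 MB per-file size stated above:

```shell
# Hypothetical additions to /etc/default/gab.
export GAB_FFDC_MAX_INDX=30          # keep 30 gablog files instead of the default 20
export GAB_FFDC_LOGDIR=/var/adm/gab_ffdc
# Each binary log file is 8 MB, so the log directory needs roughly:
echo "$(( GAB_FFDC_MAX_INDX * 8 )) MB"   # prints 240 MB
```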
# haconf -makerw
2 Enable logging and set the desired log levels. Use the following command
syntax:
The following example shows the command line for the IPMultiNIC resource
type.
If DBG_AGDEBUG is set, the agent framework logs for an instance of the agent
appear in the agent log on the node on which the agent is running.
4 For CVMvxconfigd agent, you do not have to enable any additional debug logs.
5 For AMF driver in-memory trace buffer:
If you had enabled AMF driver in-memory trace buffer, you can view the
additional logs using the amfconfig -p dbglog command.
# export VCS_DEBUG_LOG_TAGS="DBG_IPM"
# hagrp -list
Note: Debug log messages are verbose. If you enable debug logs, log files might
fill up quickly.
DBG_AGDEBUG
DBG_AGINFO
DBG_HBFW_DEBUG
DBG_HBFW_INFO
# /opt/VRTSvcs/bin/hagetcf
The command prompts you to specify an output directory for the gzip file. You may save the gzip file to either the default /tmp directory or a different directory.
Troubleshoot and fix the issue.
See “Troubleshooting the VCS engine” on page 593.
See “Troubleshooting VCS startup” on page 600.
See “Troubleshooting service groups” on page 604.
See “Troubleshooting resources” on page 610.
See “Troubleshooting notification” on page 626.
See “Troubleshooting and recovery for global clusters” on page 626.
See “Troubleshooting the steward process” on page 630.
If the issue cannot be fixed, then contact Veritas technical support with the file
that the hagetcf command generates.
# /opt/VRTSvcs/bin/vcsstatlog
--dump /var/VRTSvcs/stats/copied_vcs_host_stats
■ To get the forecasted available capacity for CPU, Mem, and Swap for a
system in cluster, run the following command on the system on which you
copied the statlog database:
# /opt/VRTSgab/getcomms
The script uses rsh by default. Make sure that you have configured
passwordless rsh. If you have passwordless ssh between the cluster nodes,
you can use the -ssh option. To gather information on the node that you run
the command, use the -local option.
Troubleshoot and fix the issue.
See “Troubleshooting Low Latency Transport (LLT)” on page 595.
See “Troubleshooting Group Membership Services/Atomic Broadcast (GAB)”
on page 598.
If the issue cannot be fixed, then contact Veritas technical support with the file
/tmp/commslog.<time_stamp>.tar that the getcomms script generates.
# /opt/VRTSamf/bin/getimf
Message catalogs
VCS includes multilingual support for message catalogs. These binary message
catalogs (BMCs), are stored in the following default locations. The variable language
represents a two-letter abbreviation.
/opt/VRTS/messages/language/module_name
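Composing a concrete path from the pattern above is straightforward; the language code "ja" and module name "VRTSvcs" in this sketch are assumptions for illustration, not documented catalog names:

```shell
# Illustrative only: build a catalog path from the documented pattern
# /opt/VRTS/messages/language/module_name.
language=ja            # hypothetical two-letter language abbreviation
module_name=VRTSvcs    # hypothetical module name
echo "/opt/VRTS/messages/${language}/${module_name}"
# prints /opt/VRTS/messages/ja/VRTSvcs
```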
HAD diagnostics
When the VCS engine HAD dumps core, the core is written to the directory
$VCS_DIAG/diag/had. The default value for variable $VCS_DIAG is /var/VRTSvcs/.
When HAD core dumps, review the contents of the $VCS_DIAG/diag/had directory.
See the following logs for more information:
■ Operating system console log
■ Engine log
■ hashadow log
VCS runs the script /opt/VRTSvcs/bin/vcs_diag to collect diagnostic information
when HAD and GAB encounter heartbeat problems. The diagnostic information is
stored in the $VCS_DIAG/diag/had directory.
When HAD starts, it renames the directory to had.timestamp, where timestamp
represents the time at which the directory was renamed.
Preonline IP check
You can enable a preonline check of a failover IP address to protect against network
partitioning. The check pings a service group's configured IP address to verify that
it is not already in use. If it is, the service group is not brought online.
A second check verifies that the system is connected to its public network and
private network. If the system receives no response from a broadcast ping to the
public network and a check of the private networks, it determines the system is
isolated and does not bring the service group online.
To enable the preonline IP check, do one of the following:
■ If the preonline trigger script is not already present, copy the preonline trigger script from the sample triggers directory into the triggers directory:
# cp /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/preonline_ipc
/opt/VRTSvcs/bin/triggers/preonline
Troubleshooting and recovery for VCS 595
Troubleshooting Low Latency Transport (LLT)
Recommended action: Ensure that all systems on the network have a unique clusterid-nodeid pair. You can use the lltdump -f device -D command to get the list of unique clusterid-nodeid pairs connected to the network. This utility is available only for LLT over Ethernet.
LLT INFO V-14-1-10205 link 1 (link_name) node 1 in trouble
This message implies that LLT did not receive any heartbeats on the indicated link from the indicated peer node for LLT peertrouble time. The default LLT peertrouble time is 2s for hipri links and 4s for lo-pri links.
Recommended action: If these messages sporadically appear in the syslog, you can ignore them. If these messages flood the syslog, then perform one of the following:
lltconfig -T peertrouble:<value>
for hipri links
lltconfig -T peertroublelo:<value>
for lopri links
LLT INFO V-14-1-10024 link 0 (link_name) node 1 active
This message implies that LLT started seeing heartbeats on this link from that node.
LLT INFO V-14-1-10032 link 1 (link_name) node 1 inactive 5 sec (510)
This message implies that LLT did not receive any heartbeats on the indicated link from the indicated peer node for the indicated amount of time.
If the peer node has not actually gone down, check for the following:
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 4 more to go.
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 3 more to go.
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 2 more to go.
LLT INFO V-14-1-10032 link 1 (link_name) node 1 inactive 6 sec (510)
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 1 more to go.
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 0 more to go.
LLT INFO V-14-1-10032 link 1 (link_name) node 1 inactive 7 sec (510)
LLT INFO V-14-1-10509 link 1 (link_name) node 1 expired
This message implies that LLT did not receive any heartbeats on the indicated link from the indicated peer node for more than LLT peerinact time. LLT attempts to request heartbeats (sends 5 hbreqs to the peer node) and if the peer node does not respond, LLT marks this link as "expired" for that peer node.
Recommended action: If the peer node has not actually gone down, check for the following:
■ Check if the link has got physically disconnected from the system or switch.
■ Check for the link health and replace the link if necessary.
See "Adding and removing LLT links" on page 95.
LLT INFO V-14-1-10499 recvarpreq link 0 for node 1 addr change from 00:00:00:00:00:00 to 00:18:8B:E4:DE:27
This message is logged when LLT learns the peer node's address.
Recommended action: No action is required. This message is informational.
Troubleshooting and recovery for VCS 598
Troubleshooting Group Membership Services/Atomic Broadcast (GAB)
On peer nodes:
Recommended Action: If this issue occurs rarely at times of high load, and does
not cause any detrimental effects, the issue is benign. If the issue recurs, or if the
issue causes repeated VCS daemon restarts or node panics, you must increase
the priority of the thread (gab_timer_pri tunable parameter).
Recommended Action: If this issue occurs during a GAB reconfiguration, and does
not recur, the issue is benign. If the issue persists, collect commslog from each
node, and contact Veritas support.
GAB's attempt (five retries) to kill the VCS daemon fails if the VCS daemon is stuck in the kernel in an uninterruptible state or the system is so heavily loaded that the VCS daemon cannot die with a SIGKILL.
Recommended Action:
■ In case of performance issues, increase the value of the VCS_GAB_TIMEOUT
environment variable to allow VCS more time to heartbeat.
See “ VCS environment variables” on page 70.
■ In case of a kernel problem, configure GAB to not panic but continue to attempt
killing the VCS daemon.
Do the following:
■ Run the following command on each node:
gabconfig -k
Troubleshooting and recovery for VCS 600
Troubleshooting VCS startup
■ Add the “-k” option to the gabconfig command in the /etc/gabtab file:
gabconfig -c -k -n 6
■ In case the problem persists, collect sar or similar output, collect crash dumps,
run the Veritas Operations and Readiness Tools (SORT) data collector on all
nodes, and contact Veritas Technical Support.
# cd /etc/VRTSvcs/conf/config
# hacf -verify .
# cd /etc/VRTSvcs/conf/config
# hacf -verify .
Troubleshooting and recovery for VCS 601
Troubleshooting Intelligent Monitoring Framework (IMF)
GAB can become unregistered if LLT is set up incorrectly. Verify that the
configuration is correct in /etc/llttab. If the LLT configuration is incorrect, make the
appropriate changes and reboot.
Intelligent resource monitoring has not reduced system utilization
If the system is busy even after intelligent resource monitoring is enabled, troubleshoot as follows:
■ Check the agent log file to see whether the imf_init agent function has failed.
If the imf_init agent function has failed, then do the following:
■ Make sure that the AMF_START environment variable value is set to 1.
See "Environment variables to start and stop VCS modules" on page 75.
■ Make sure that the AMF module is loaded.
See "Administering the AMF kernel driver" on page 99.
■ Make sure that the IMF attribute values are set correctly for the following attribute keys:
■ The value of the Mode key of the IMF attribute must be set to 1, 2, or 3.
■ The value of the MonitorFreq key of the IMF attribute must be set to either 0 or a value greater than 0.
For example, the value of the MonitorFreq key can be set to 0 for the Process agent. Refer to the appropriate agent documentation for configuration recommendations corresponding to the IMF-aware agent.
Note that the IMF attribute can be overridden. So, if the attribute is set for an individual resource, then check the value for the individual resource.
Enabling the agent's intelligent monitoring does not provide immediate performance results
The actual intelligent monitoring for a resource starts only after a steady state is achieved. So, it takes some time before you can see a positive performance effect after you enable IMF. This behavior is expected.
For more information on when a steady state is reached, see the following topic:
See "How intelligent resource monitoring works" on page 39.
Agent does not perform intelligent monitoring despite setting the IMF mode to 3
For the agents that use the AMF driver for IMF notification, if intelligent resource monitoring has not taken effect, do the following:
■ Make sure that the IMF attribute's Mode key value is set to three (3).
See "Resource type attributes" on page 664.
■ Review the agent log to confirm that imf_init() agent registration with AMF has succeeded. The AMF driver must be loaded before the agent starts because the agent registers with AMF at agent startup time. If this was not the case, start the AMF module and restart the agent.
See "Administering the AMF kernel driver" on page 99.
AMF module fails to unload despite changing the IMF mode to 0
Even after you change the value of the Mode key to zero, the agent still continues to have a hold on the AMF driver until you kill the agent. To unload the AMF module, all holds on it must get released.
If the AMF module fails to unload after changing the IMF mode value to zero, do the following:
■ Run the amfconfig -Uof command. This command forcefully removes all holds on the module and unconfigures it.
■ Then, unload AMF.
See "Administering the AMF kernel driver" on page 99.
When you try to enable IMF for an agent, the haimfconfig -enable -agent <agent_name> command returns a message that IMF is enabled for the agent. However, when VCS and the respective agent are running, the haimfconfig -display command shows the status for agent_name as DISABLED.
A few possible reasons for this behavior are as follows:
■ The agent might require some manual steps to make it IMF-aware. Refer to the agent documentation for these manual steps.
■ The agent is a custom agent and is not IMF-aware. For information on how to make a custom agent IMF-aware, see the Cluster Server Agent Developer's Guide.
■ If the preceding steps do not resolve the issue, contact Veritas technical support.
Troubleshooting and recovery for VCS 604
Troubleshooting service groups
Warning: To bring a group online manually after VCS has autodisabled the group,
make sure that the group is not fully or partially active on any system that has the
AutoDisabled attribute set to 1 by VCS. Specifically, verify that all resources that
may be corrupted by being active on multiple systems are brought down on the
designated systems. Then, clear the AutoDisabled attribute for each system:
# hagrp -autoenable service_group -sys system
engine and agent logs in /var/VRTSvcs/log for information on why the resource is
unable to be brought online or be taken offline.
To clear this state, make sure all resources waiting to go online/offline do not bring
themselves online/offline. Use the hagrp -flush command or the hagrp -flush
-force command to clear the internal state of VCS. You can then bring the service
group online or take it offline on another system.
For more information, see the descriptions of the hagrp -flush and hagrp -flush -force commands.
Warning: Exercise caution when you use the -force option. It can lead to situations
where a resource status is unintentionally returned as FAULTED. In the time interval
that a resource transitions from ‘waiting to go offline’ to ‘not waiting’, if the agent
has not completed the offline agent function, the agent may return the state of the
resource as OFFLINE. VCS considers such unexpected offline of the resource as
FAULT and starts recovery action that was not intended.
Service group does not fail over to the BiggestAvailable system even
if FailOverPolicy is set to BiggestAvailable
Sometimes, a service group might not fail over to the biggest available system even
when FailOverPolicy is set to BiggestAvailable.
To troubleshoot this issue, check the engine log located in
/var/VRTSvcs/log/engine_A.log to find out the reasons for not failing over to the
biggest available system. This may be due to the following reasons:
■ If one or more of the systems in the service group's SystemList did not have forecasted available capacity, you see the following message in the engine log:
One of the systems in SystemList of group group_name, system
system_name does not have forecasted available capacity updated
■ If the hautil -sys command does not list forecasted available capacity for the systems, you see the following message in the engine log:
Failed to forecast due to insufficient data
This message is displayed due to insufficient recent data to be used for
forecasting the available capacity.
The default value for the MeterInterval key of the cluster attribute MeterControl
is 120 seconds. There will be enough recent data available for forecasting after
3 metering intervals (6 minutes) from time the VCS engine was started on the
system. After this, the forecasted values are updated every ForecastCycle *
MeterInterval seconds. The ForecastCycle and MeterInterval values are specified
in the cluster attribute MeterControl.
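The timing above can be sketched numerically. Illustrative arithmetic only; MeterInterval uses the documented default of 120 seconds, while the ForecastCycle value below is a placeholder, not the documented default:

```shell
# When forecast data first becomes available, and how often it refreshes.
meter_interval=120    # MeterInterval default, in seconds
forecast_cycle=3      # placeholder ForecastCycle value
first_forecast=$(( 3 * meter_interval ))               # 3 metering intervals
update_period=$(( forecast_cycle * meter_interval ))
echo "first forecast after ${first_forecast}s; updates every ${update_period}s"
# prints first forecast after 360s; updates every 360s
```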
■ If one or more of the systems in the service group’s SystemList have stale
forecasted available capacity, you can see the following message in the engine
log:
System system_name has not updated forecasted available capacity since
last 2 forecast cycles
This issue is caused when the HostMonitor agent stops functioning. Check if
HostMonitor agent process is running by issuing one of the following commands
on the system which has stale forecasted values:
■ # ps -aef | grep HostMonitor
Even if the HostMonitor agent is running and you see the above message in the engine log, it means that the HostMonitor agent is not able to forecast, and it logs error messages in the HostMonitor_A.log file in the /var/VRTSvcs/log/ directory.
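A quick way to locate the forecast-related messages described above is to grep the engine log. Sketched here against a temporary stand-in file so the commands are safe to try anywhere; on a cluster node you would point grep at /var/VRTSvcs/log/engine_A.log instead:

```shell
# Count forecast-related lines in a log file (stand-in file for illustration).
log=$(mktemp)
echo "System sys1 has not updated forecasted available capacity" > "$log"
grep -ci "forecast" "$log"   # prints 1
rm -f "$log"
```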
# rm /var/VRTSvcs/stats/.vcs_host_stats.data \
/var/VRTSvcs/stats/.vcs_host_stats.index
# cp /var/VRTSvcs/stats/.vcs_host_stats_bkup.data \
/var/VRTSvcs/stats/.vcs_host_stats.data
# cp /var/VRTSvcs/stats/.vcs_host_stats_bkup.index \
/var/VRTSvcs/stats/.vcs_host_stats.index
# /opt/VRTSvcs/bin/vcsstatlog --setprop \
/var/VRTSvcs/stats/.vcs_host_stats rate 120
# /opt/VRTSvcs/bin/vcsstatlog --setprop \
/var/VRTSvcs/stats/.vcs_host_stats compressto \
/var/VRTSvcs/stats/.vcs_host_stats_daily
# /opt/VRTSvcs/bin/vcsstatlog --setprop \
/var/VRTSvcs/stats/.vcs_host_stats compressmode avg
# /opt/VRTSvcs/bin/vcsstatlog --setprop \
/var/VRTSvcs/stats/.vcs_host_stats compressfreq 24h
# rm /var/VRTSvcs/stats/.vcs_host_stats.data \
/var/VRTSvcs/stats/.vcs_host_stats.index
VCS ERROR V-16-1-56034 ReservedCapacity for attribute dropped below zero for
system sys1 when unreserving for group service_group; setting to zero.
This error message appears only if the service group's load requirements change
during the transition period.
Recommended Action: No action is required. You can ignore this message.
Troubleshooting resources
This topic cites the most common problems associated with bringing resources
online and taking them offline. Bold text provides a description of the problem.
Recommended action is also included, where applicable.
The Monitor entry point of the disk group agent returns ONLINE even
if the disk group is disabled
This is expected agent behavior. VCS assumes that data is being read from or
written to the volumes and does not declare the resource as offline. This prevents
potential data corruption that could be caused by the disk group being imported on
two hosts.
You can deport a disabled disk group when all I/O operations are completed or
when all volumes are closed. You can then reimport the disk group to the same
system. Reimporting a disabled disk group may require a system reboot.
Note: A disk group is disabled if data including the kernel log, configuration copies,
or headers in the private region of a significant number of disks is invalid or
inaccessible. Volumes can perform read-write operations if no changes are required
to the private regions of the disks.
Troubleshooting sites
The following sections discuss troubleshooting the sites. Bold text provides a
description of the problem. Recommended action is also included, where applicable.
■ If the disk group does not appear under Server > <Hostname> > Disk Groups, right-click <Hostname> and run Rescan Disks.
■ For a campus cluster, tag all enclosures whose disks are part of disk groups used for the campus cluster setup.
Renaming a site
Recommended action: To rename the site, re-run the Stretch Cluster Configuration flow, changing the site names.
If you see these messages when the new node is booting, the vxfen startup script
on the node makes up to five attempts to join the cluster.
To manually join the node to the cluster when I/O fencing attempts fail
◆ If the vxfen script fails in the attempts to allow the node to join the cluster,
restart vxfen driver with the command:
# /etc/init.d/vxfen.rc stop
# /etc/init.d/vxfen.rc start
The vxfentsthdw utility fails when SCSI TEST UNIT READY command
fails
While running the vxfentsthdw utility, you may see a message that resembles the following:
Troubleshooting and recovery for VCS 614
Troubleshooting I/O fencing
The disk array does not support returning success for a SCSI TEST UNIT READY
command when another host has the disk reserved using SCSI-3 persistent
reservations. This happens with the Hitachi Data Systems 99XX arrays if bit 186
of the system mode option is not enabled.
Note: If you want to clear all the pre-existing keys, use the vxfenclearpre utility.
See “About the vxfenclearpre utility” on page 279.
# vi /tmp/disklist
For example:
/dev/rhdisk74
3 If you know on which node the key (say A1) was created, log in to that node
and enter the following command:
# vxfenadm -m -k A2 -f /tmp/disklist
6 Remove the first key from the disk by preempting it with the second key:
/dev/rhdisk74
A node experiences the split-brain condition when it loses the heartbeat with its
peer nodes due to failure of all private interconnects or node hang. Review the
behavior of I/O fencing under different scenarios and the corrective measures to
be taken.
See “How I/O fencing works in different event scenarios” on page 251.
Cluster ID on the I/O fencing key of coordinator disk does not match
the local cluster’s ID
If you accidentally assign coordinator disks of a cluster to another cluster, then the
fencing driver displays an error message similar to the following when you start I/O
fencing:
The warning implies that the local cluster, with cluster ID 57069, has keys on
the disk. However, the disk also has keys for the cluster with ID 48813, which
indicates that nodes from that cluster potentially use the same coordinator disk.
You can run the following commands to verify whether these disks are used by
another cluster. Run the following commands on one of the nodes in the local
cluster. For example, on sys1:
sys1> # lltstat -C
57069
Where disk_7, disk_8, and disk_9 represent the disk names in your setup.
Recommended action: You must use a unique set of coordinator disks for each
cluster. If the other cluster does not use these coordinator disks, then clear the keys
using the vxfenclearpre command before you use them as coordinator disks in the
local cluster.
See “About the vxfenclearpre utility” on page 279.
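As an illustrative cross-check (assuming the VF-prefixed key layout that VxFEN writes to coordinator disks, in which the cluster ID appears as four hexadecimal digits), you can convert the decimal ID reported by lltstat -C into the form that appears in the keys that vxfenadm reports:

```shell
# Illustrative only: a VxFEN coordinator-disk key begins with "VF" followed
# by the LLT cluster ID in hexadecimal, so cluster ID 57069 corresponds to
# keys that start with VFDEED. Comparing this prefix with the keys shown by
# 'vxfenadm -s <disk>' tells you which cluster owns them.
cluster_id=57069                    # in practice: cluster_id=$(lltstat -C)
prefix=$(printf 'VF%04X' "$cluster_id")
echo "$prefix"
```

For the second cluster in the example, ID 48813 corresponds to the prefix VFBEAD.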
However, the same error can occur when the private network links are working and
both systems go down, system 1 restarts, and system 2 fails to come back up. From
system 1's view of the cluster, system 2 may still have the registrations on
the coordination points.
Consider the following situations to understand preexisting split-brain in
server-based fencing:
■ There are three CP servers acting as coordination points. One of the three CP
servers then becomes inaccessible. While in this state, one client node leaves
the cluster, whose registration cannot be removed from the inaccessible CP
server. When the inaccessible CP server restarts, it has a stale registration from
the node which left the VCS cluster. In this case, no new nodes can join the
cluster. Each node that attempts to join the cluster gets a list of registrations
from the CP server. One CP server includes an extra registration (of the node
which left earlier). This makes the joiner node conclude that there exists a
preexisting split-brain between the joiner node and the node which is represented
by the stale registration.
■ All the client nodes have crashed simultaneously, due to which fencing keys
are not cleared from the CP servers. Consequently, when the nodes restart, the
vxfen configuration fails reporting preexisting split brain.
These situations are similar to the preexisting split-brain condition with
coordinator disks, where you can solve the problem by running the vxfenclearpre
command. A similar solution is required in server-based fencing, using the
cpsadm command.
See “Clearing preexisting split-brain condition” on page 618.
Scenario Solution
2 Clear the keys on the coordinator disks as well as the data disks in all shared disk
groups using the vxfenclearpre command. The command removes SCSI-3
registrations and reservations.
4 Restart system 2.
2 Clear the keys on the coordinator disks as well as the data disks in all shared disk
groups using the vxfenclearpre command. The command removes SCSI-3
registrations and reservations.
After removing all stale registrations, the joiner node will be able to join the cluster.
4 Restart system 2.
# hastop -all
Make sure that port h is closed on all the nodes. Run the following command
to verify that port h is closed:
# gabconfig -a
# /etc/init.d/vxfen.rc stop
4 Import the coordinator disk group. The file /etc/vxfendg includes the name of
the disk group (typically, vxfencoorddg) that contains the coordinator disks, so
use the command:
where:
-t specifies that the disk group is imported only until the node restarts.
-f specifies that the import is to be done forcibly, which is necessary if one or
more disks are not accessible.
-C specifies that any import locks are removed.
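As a self-contained sketch of this import step, the disk group name can be read from the file and combined with the flags described above. A temporary file stands in for /etc/vxfendg, and the vxdg command is only printed here, not executed:

```shell
# Read the coordinator disk group name (typically vxfencoorddg) and form
# the forced, temporary, lock-clearing import command. A temp file stands
# in for /etc/vxfendg so the sketch runs anywhere.
vxfendg_file=$(mktemp)
echo "vxfencoorddg" > "$vxfendg_file"
dg=$(cat "$vxfendg_file")
echo "vxdg -tfC import $dg"   # run this command on the node in the real procedure
rm -f "$vxfendg_file"
```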
5 To remove disks from the disk group, use the VxVM disk administrator utility,
vxdiskadm.
You may also destroy the existing coordinator disk group. For example:
■ Verify whether the coordinator attribute is set to on.
6 Add the new disk to the node and initialize it as a VxVM disk.
Then, add the new disk to the vxfencoorddg disk group:
■ If you destroyed the disk group in step 5, then create the disk group again
and add the new disk to it.
See the Cluster Server Installation Guide for detailed instructions.
■ If the disk group already exists, then add the new disk to it.
7 Test the recreated disk group for SCSI-3 persistent reservations compliance.
See “Testing the coordinator disk group using the -c option of vxfentsthdw”
on page 268.
8 After replacing disks in a coordinator disk group, deport the disk group:
# /etc/init.d/vxfen.rc start
10 Verify that the I/O fencing module has started and is enabled.
# gabconfig -a
Make sure that port b membership exists in the output for all nodes in the
cluster.
# vxfenadm -d
Make sure that I/O fencing mode is not disabled in the output.
11 If necessary, restart VCS on each node:
# hastart
The vxfenswap utility exits if rcp or scp commands are not functional
The vxfenswap utility displays an error message if rcp or scp commands are not
functional.
To recover the vxfenswap utility fault
◆ Verify that rcp or scp functions properly.
Make sure that you do not use echo or cat to print messages in the .bashrc
file for the nodes.
If the vxfenswap operation is unsuccessful, use the vxfenswap -a cancel
command if required to roll back any changes that the utility made.
See “About the vxfenswap utility” on page 282.
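A quick way to test for the .bashrc problem is to confirm that a non-interactive remote command prints nothing. In this self-contained sketch, sh -c stands in for ssh <node>; on a real cluster, substitute the actual remote shell invocation:

```shell
# A non-interactive login shell should produce no output; any bytes counted
# here come from echo/cat statements in the remote startup files, and such
# output breaks the scp/rcp transfers that vxfenswap performs.
remote="sh -c"                    # in practice: remote="ssh sys2"
extra=$($remote true 2>/dev/null | wc -c | tr -d ' ')
if [ "$extra" -eq 0 ]; then
    echo "clean login shell"
else
    echo "startup files print $extra bytes; clean them up before running vxfenswap"
fi
```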
Troubleshooting CP server
All CP server operations and messages are logged in the /var/VRTScps/log directory
in a detailed, easy-to-read format. The entries are sorted by date and time. The
logs can be used for troubleshooting or to review for any possible security
issue on the system that hosts the CP server.
The following files contain logs and text files that may be useful in understanding
and troubleshooting a CP server:
■ /var/VRTScps/log/cpserver_[ABC].log
■ /var/VRTSvcs/log/vcsauthserver.log (Security related)
■ If the vxcpserv process fails on the CP server, then review the following
diagnostic files:
■ /var/VRTScps/diag/FFDC_CPS_pid_vxcpserv.log
■ /var/VRTScps/diag/stack_pid_vxcpserv.txt
Note: If the vxcpserv process fails on the CP server, these files are present in
addition to a core file. VCS restarts the vxcpserv process automatically in such
situations.
cpsadm command on the VCS cluster gives connection error: If you receive a
connection error message after issuing the cpsadm command on the VCS cluster,
perform the following actions:
■ Ensure that the CP server is reachable from all the VCS cluster nodes.
■ Check the /etc/vxfenmode file and ensure that the VCS cluster nodes use the correct
CP server virtual IP or virtual hostname and the correct port number.
■ For HTTPS communication, ensure that the virtual IP and ports listed for the server can
listen to HTTPS requests.
Authorization failure: Authorization failure occurs when the nodes on the client
clusters or the users are not added in the CP server configuration. As a result,
fencing on the VCS cluster (client cluster) node is not allowed to access the CP
server and register itself on the CP server. Fencing fails to come up if it fails
to register with a majority of the coordination points.
To resolve this issue, add the client cluster node and user in the CP server configuration
and restart fencing.
Table 21-4 Fencing startup issues on VCS cluster (client cluster) nodes
(continued)
Authentication failure If you had configured secure communication between the CP server and the VCS cluster
(client cluster) nodes, authentication failure can occur due to the following causes:
■ The client cluster requires its own private key, a signed certificate, and a Certification
Authority's (CA) certificate to establish secure communication with the CP server. If any
of the files are missing or corrupt, communication fails.
■ If the client cluster certificate does not correspond to the client's private key,
communication fails.
■ If the CP server and client cluster do not have a common CA in their certificate chain of
trust, then communication fails.
See “About secure communication between the VCS cluster and CP server” on page 241.
Thus, during vxfenswap, when the user changes the vxfenmode file, the
Coordination Point agent does not move to the FAULTED state but continues
monitoring the old set of coordination points.
As long as the changes to the vxfenmode file are not committed, or the new set of
coordination points is not reflected in the vxfenconfig -l output, the Coordination
Point agent continues monitoring the old set of coordination points that it reads
from the vxfenconfig -l output in every monitor cycle.
The status of the Coordination Point agent (either ONLINE or FAULTED) depends
upon the accessibility of the coordination points, the registrations on these
coordination points, and the fault tolerance value.
When the changes to the vxfenmode file are committed and reflected in the
vxfenconfig -l output, the Coordination Point agent reads the new set of
coordination points and proceeds to monitor them in its next monitor cycle.
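This commit behavior can be pictured by comparing the two views of the coordination points. The disk names below are illustrative; on a live cluster, one list would come from the vxfenmode file and the other from vxfenconfig -l:

```shell
# 'current' stands in for the list from 'vxfenconfig -l' (what the agent is
# monitoring); 'pending' stands in for the list in the edited /etc/vxfenmode.
current=$(printf '%s\n' /dev/rhdisk74 /dev/rhdisk75 /dev/rhdisk76)
pending=$(printf '%s\n' /dev/rhdisk74 /dev/rhdisk75 /dev/rhdisk90)
if [ "$current" = "$pending" ]; then
    echo "change committed: agent monitors the new set"
else
    echo "change not committed: agent still monitors the old set"
fi
```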
Troubleshooting notification
Occasionally you may encounter problems when using VCS notification. This section
cites the most common problems and the recommended actions. Bold text provides
a description of the problem.
Disaster declaration
When a cluster in a global cluster transitions to the FAULTED state because it can
no longer be contacted, failover executions depend on whether the cause was due
to a split-brain, temporary outage, or a permanent disaster at the remote cluster.
If you choose to take action on the failure of a cluster in a global cluster, VCS
prompts you to declare the type of failure.
■ Disaster, implying permanent loss of the primary data center
■ Outage, implying that the primary may return to its current form after some time
■ Disconnect, implying a split-brain condition; both clusters are up, but the link
between them is broken
■ Replica, implying that data on the takeover target has been made consistent
from a backup source and that the RVGPrimary can initiate a takeover when
the service group is brought online. This option applies to VVR environments
only.
You can select the groups to be failed over to the local cluster, in which case VCS
brings the selected groups online on a node based on the group's FailOverPolicy
attribute. It also marks the groups as being offline in the other cluster. If you do not
select any service groups to fail over, VCS takes no action except implicitly marking
the service groups as offline on the downed cluster.
VCS alerts
VCS alerts are identified by the alert ID, which consists of the following
elements:
■ alert_type—The type of the alert
■ object—The name of the VCS object for which this alert was generated. This
could be a cluster or a service group.
Alerts are generated in the following format:
alert_type-cluster-system-object
For example:
GNOFAILA-Cluster1-oracle_grp
This is an alert of type GNOFAILA generated on cluster Cluster1 for the service
group oracle_grp.
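Because the elements are joined with hyphens, an alert ID can be split mechanically. This illustrative sketch parses the example above (a service group alert, which carries no system element):

```shell
# Split GNOFAILA-Cluster1-oracle_grp on '-' into its elements using IFS.
# Note: this simple split assumes the individual names contain no hyphens.
alert="GNOFAILA-Cluster1-oracle_grp"
IFS=- read -r alert_type cluster object <<EOF
$alert
EOF
echo "type=$alert_type cluster=$cluster object=$object"
```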
Types of alerts
VCS generates the following types of alerts.
■ CFAULT—Indicates that a cluster has faulted
■ GNOFAILA—Indicates that a global group is unable to fail over within the cluster
where it was online. This alert is displayed if the ClusterFailOverPolicy attribute
is set to Manual and the wide-area connector (wac) is properly configured and
running at the time of the fault.
■ GNOFAIL—Indicates that a global group is unable to fail over to any system
within the cluster or in a remote cluster.
Some reasons why a global group may not be able to fail over to a remote
cluster:
■ The ClusterFailOverPolicy is set to either Auto or Connected and VCS is
unable to determine a valid remote cluster to which to automatically fail the
group over.
■ The ClusterFailOverPolicy attribute is set to Connected and the cluster in
which the group has faulted cannot communicate with one or more remote
clusters in the group's ClusterList.
■ The wide-area connector (wac) is not online or is incorrectly configured in
the cluster in which the group has faulted.
Managing alerts
Alerts require user intervention. You can respond to an alert in the following ways:
■ If the reason for the alert can be ignored, use the Alerts dialog box in the Java
console or the haalert command to delete the alert. You must provide a
comment as to why you are deleting the alert; VCS logs the comment to the
engine log.
■ Take an action on administrative alerts that have actions associated with them.
■ VCS deletes or negates some alerts when a negating event for the alert occurs.
An administrative alert continues to exist if none of the above actions is performed
and the VCS engine (HAD) is running on at least one node in the cluster. If HAD
is not running on any node in the cluster, the administrative alert is lost.
Negating events
VCS deletes a CFAULT alert when the faulted cluster returns to the RUNNING
state.
VCS deletes the GNOFAILA and GNOFAIL alerts in response to the following
events:
■ The faulted group's state changes from FAULTED to ONLINE.
■ The group's fault is cleared.
■ The group is deleted from the cluster where the alert was generated.
Recommended Action: Verify the state of the service group in each cluster before
making the service group global.
Troubleshooting licensing
This section cites problems you may encounter with VCS licensing. It provides
instructions on how to validate license keys and lists the error messages associated
with licensing.
This command stops the cluster without affecting the virtual disks attached to the
VCS cluster nodes.
■ Verify that ports 14152, 14153, and 5634 are not blocked by a firewall.
■ Log out of the vSphere Client and then log in again. Then, verify that the Veritas
High Availability plugin is installed and enabled.
■ space
■ underscore
Use the following command to reset the display name:
# hagrp -modify <sg_name> UserAssoc -update Name "modified display
name without special characters"
In the Veritas High Availability tab, the Add Failover System link is
dimmed
If the system that you clicked in the inventory view of the vSphere Client GUI to
launch the Veritas High Availability tab is not part of the list of failover target systems
for that application, the Add Failover System link is dimmed. (2932281)
Workaround: In the vSphere Client GUI inventory view, launch the Veritas High
Availability tab by clicking a system from the existing list of failover target systems
for the application. The Add Failover System link, which appears in the drop-down
list when you click More, is then no longer dimmed.
Section 8
Appendixes
■ Administration matrices
Administration matrices
Review the matrices in the following topics to determine which command options
can be executed within a specific user role. A checkmark (✓) denotes that the
command and option can be executed; a dash (–) indicates that they cannot.
VCS user privileges—administration matrices 638
Administration matrices
Start agent – – – ✓ ✓
Stop agent – – – ✓ ✓
Display info ✓ ✓ ✓ ✓ ✓
List agents ✓ ✓ ✓ ✓ ✓
Add – – – – ✓
Change default value – – – – ✓
Delete – – – – ✓
Display ✓ ✓ ✓ ✓ ✓
Display ✓ ✓ ✓ ✓ ✓
Modify – – – – ✓
Add – – – – ✓
Delete – – – – ✓
Declare – – – ✓ ✓
View state or status ✓ ✓ ✓ ✓ ✓
Update license – – – – ✓
Make configuration read-write – – ✓ – ✓
Save configuration – – ✓ – ✓
Make configuration read-only – – ✓ – ✓
Clear – ✓ ✓ ✓ ✓
Bring online – ✓ ✓ ✓ ✓
Take offline – ✓ ✓ ✓ ✓
View state ✓ ✓ ✓ ✓ ✓
Switch – ✓ ✓ ✓ ✓
Freeze/unfreeze – ✓ ✓ ✓ ✓
Freeze/unfreeze persistent – – ✓ – ✓
Enable – – ✓ – ✓
Disable – – ✓ – ✓
Modify – – ✓ – ✓
Display ✓ ✓ ✓ ✓ ✓
View dependencies ✓ ✓ ✓ ✓ ✓
View resources ✓ ✓ ✓ ✓ ✓
List ✓ ✓ ✓ ✓ ✓
Enable resources – – ✓ – ✓
Disable resources – – ✓ – ✓
Flush – ✓ ✓ ✓ ✓
Autoenable – ✓ ✓ ✓ ✓
Ignore – ✓ ✓ ✓ ✓
Add – – – – ✓
Delete – – – – ✓
Make local – – – – ✓
Make global – – – – ✓
Display ✓ ✓ ✓ ✓ ✓
View state ✓ ✓ ✓ ✓ ✓
List ✓ ✓ ✓ ✓ ✓
Add messages to log file – – ✓ – ✓
Display ✓ ✓ ✓ ✓ ✓
Add – – ✓ – ✓
Delete – – ✓ – ✓
Make attribute local – – ✓ – ✓
Make attribute global – – ✓ – ✓
Clear – ✓ ✓ ✓ ✓
Bring online – ✓ ✓ ✓ ✓
Take offline – ✓ ✓ ✓ ✓
Modify – – ✓ – ✓
View state ✓ ✓ ✓ ✓ ✓
Display ✓ ✓ ✓ ✓ ✓
View dependencies ✓ ✓ ✓ ✓ ✓
List, Value ✓ ✓ ✓ ✓ ✓
Probe – ✓ ✓ ✓ ✓
Override attribute – – ✓ – ✓
Remove overrides – – ✓ – ✓
Run an action – ✓ ✓ ✓ ✓
Refresh info – ✓ ✓ ✓ ✓
Flush info – ✓ ✓ ✓ ✓
Add – – – – ✓
Delete – – – – ✓
Freeze and unfreeze – – – ✓ ✓
Freeze and unfreeze persistent – – – – ✓
Freeze and evacuate – – – – ✓
Display ✓ ✓ ✓ ✓ ✓
Start forcibly – – – – ✓
Modify – – – – ✓
View state ✓ ✓ ✓ ✓ ✓
List ✓ ✓ ✓ ✓ ✓
Update license – – – – ✓
Add – – – – ✓
Delete – – – – ✓
Display ✓ ✓ ✓ ✓ ✓
View resources ✓ ✓ ✓ ✓ ✓
Modify – – – – ✓
List ✓ ✓ ✓ ✓ ✓
Add – – – – ✓
Delete – – – – ✓
Update ✓ ✓ ✓ ✓ ✓ (Note: if the configuration is read/write)
Display ✓ ✓ ✓ ✓ ✓
List ✓ ✓ ✓ ✓ ✓
Modify privileges – – ✓ – ✓
Appendix B
VCS commands: Quick
reference
This appendix includes the following topics:
Table B-2 lists the VCS commands for service group, resource, and site operations.
Table B-2 VCS commands for service group, resource, and site operations
Table B-3 lists the VCS commands for status and verification.
lltconfig -a list
lltstat
lltstat -nvv
gabconfig -av
lltconfig -U
gabconfig -U
■ System states
State Definition
INIT The initial state of the cluster. This is the default state.
Cluster and system states 650
Remote cluster states
State Definition
BUILD The local cluster is receiving the initial snapshot from the remote cluster.
RUNNING Indicates the remote cluster is running and connected to the local
cluster.
LOST_HB The connector process on the local cluster is not receiving heartbeats
from the remote cluster.
LOST_CONN The connector process on the local cluster has lost the TCP/IP
connection to the remote cluster.
UNKNOWN The connector process on the local cluster determines the remote
cluster is down, but another remote cluster sends a response indicating
otherwise.
INQUIRY The connector process on the local cluster is querying other clusters
on which heartbeats were lost.
TRANSITIONING The connector process on the remote cluster is failing over to another
node in the cluster.
System states
Whenever the VCS engine is running on a system, it is in one of the states described
in the table below. States indicate a system’s current mode of operation. When the
engine is started on a new system, it identifies the other systems available in the
cluster and their states of operation. If a cluster system is in the state of RUNNING,
the new system retrieves the configuration information from that system. Changes
made to the configuration while it is being retrieved are applied to the new system
before it enters the RUNNING state.
If no other systems are up and in the state of RUNNING or ADMIN_WAIT, and the
new system has a configuration that is not invalid, the engine transitions to the state
LOCAL_BUILD, and builds the configuration from disk. If the configuration is invalid,
the system transitions to the state of STALE_ADMIN_WAIT.
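As an illustration only, the join decision described above can be sketched as a small function. This is a simplification; the real engine evaluates more states and conditions than shown:

```shell
# Decide the next engine state for a joining system, given the states of
# the systems already in the cluster and whether the local on-disk
# configuration is valid (simplified sketch, not VCS code).
next_state() {
    peer_states="$1"     # space-separated, e.g. "RUNNING INITING"
    config_valid="$2"    # yes|no
    case " $peer_states " in
        *" RUNNING "*)
            echo "REMOTE_BUILD" ;;      # retrieve configuration from a running peer
        *)
            if [ "$config_valid" = yes ]; then
                echo "LOCAL_BUILD"      # build the configuration from disk
            else
                echo "STALE_ADMIN_WAIT"
            fi ;;
    esac
}
next_state "RUNNING INITING" yes   # a running peer exists
next_state "" yes                  # no peers, valid configuration
next_state "" no                   # no peers, invalid configuration
```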
See “Examples of system state transitions” on page 653.
Table C-2 provides a list of VCS system states and their descriptions.
State Definition
ADMIN_WAIT The running configuration was lost. A system transitions into this state
for the following reasons:
CURRENT_DISCOVER_WAIT The system has joined the cluster and its configuration
file is valid. The system is waiting for information from other systems before it
determines how to transition to another state.
State Definition
CURRENT_PEER_WAIT The system has a valid configuration file and another system
is doing a build from disk (LOCAL_BUILD). When its peer finishes the build,
this system transitions to the state REMOTE_BUILD.
EXITING_FORCIBLY An hastop -force command has forced the system to leave the
cluster.
INITING The system has joined the cluster. This is the initial state for all
systems.
LEAVING The system is leaving the cluster gracefully. When the agents have
been stopped, and when the current configuration is written to disk,
the system transitions to EXITING.
LOCAL_BUILD The system is building the running configuration from the disk
configuration.
STALE_ADMIN_WAIT The system has an invalid configuration and there is no other system
in the state of RUNNING from which to retrieve a configuration. If a
system with a valid configuration is started, that system enters the
LOCAL_BUILD state.
STALE_DISCOVER_WAIT The system has joined the cluster with an invalid
configuration file. It is waiting for information from any of its peers before
determining how to transition to another state.
STALE_PEER_WAIT The system has an invalid configuration file and another system is
doing a build from disk (LOCAL_BUILD). When its peer finishes the
build, this system transitions to the state REMOTE_BUILD.
UNKNOWN The system has not joined the cluster because it does not have a
system entry in the configuration.
■ Resource attributes
■ System attributes
■ Cluster attributes
■ Site attributes
The values of attributes labeled system use only are set by VCS and are read-only.
They contain important information about the state of the cluster.
The values labeled agent-defined are set by the corresponding agent and are also
read-only.
Attribute values are case-sensitive.
See “About VCS attributes” on page 66.
Resource attributes
Table D-1 lists resource attributes.
Resource attribute Description
ArgListValues (agent-defined)
List of arguments passed to the resource's agent on each system. This attribute is
resource-specific and system-specific, meaning that the list of values passed to
the agent depends on which system and resource they are intended for.
The number of values in ArgListValues should not exceed 425. This requirement
becomes a consideration if an attribute in the ArgList is a keylist, a vector, or an
association. Such non-scalar attributes can typically take any number of values,
and when they appear in the ArgList, the agent has to compute ArgListValues from
their values. If a non-scalar attribute contains many values, it increases the size
of ArgListValues. Hence, when developing an agent, keep this consideration in mind
when adding a non-scalar attribute to the ArgList. Users of the agent need to be
notified that the attribute should not be configured to be so large that it pushes
the number of values in the ArgListValues attribute above 425.
AutoStart (user-defined)
Indicates whether a resource should be brought online as part of a service group
online operation, or whether it requires the hares -online command.
For example, suppose two resources, R1 and R2, are in group G1. R1 has an
AutoStart value of 0; R2 has an AutoStart value of 1.
Bringing G1 online brings only R2 to the ONLINE state. The group state is ONLINE,
not PARTIAL, while R1 remains OFFLINE.
Resources with an AutoStart value of zero contribute to the group's state only in
their ONLINE state, not in their OFFLINE state.
ComputeStats Indicates to agent framework whether or not to calculate the resource’s monitor statistics.
(user-defined) ■ Type and dimension: boolean-scalar
■ Default: 0
ConfidenceLevel (agent-defined)
Indicates the level of confidence in an online resource. Values range from 0–100.
Note that some VCS agents may not take advantage of this attribute and may
always set it to 0. Set the level to 100 if the attribute is not used.
Critical (user-defined)
Indicates whether a fault of this resource should trigger a failover of the entire
group. If Critical is 0 and no parent above it has Critical = 1, the resource fault
does not cause group failover.
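As an illustrative sketch (not VCS code), the Critical rule reduces to checking whether the faulted resource or any ancestor above it is Critical:

```shell
# Args: Critical values (0/1) of the faulted resource followed by its
# ancestors, bottom-up. Prints yes if the fault triggers group failover,
# no if every resource on the path has Critical=0.
fault_fails_over_group() {
    for critical in "$@"; do
        if [ "$critical" -eq 1 ]; then
            echo yes
            return 0
        fi
    done
    echo no
}
fault_fails_over_group 0 0 0    # nothing Critical on the path: no failover
fault_fails_over_group 0 1 0    # a parent is Critical: the group fails over
```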
(user-defined) If a resource is created dynamically while VCS is running, you must enable the resource before
VCS monitors it. For more information on how to add or enable resources, see the chapters on
administering VCS from the command line and graphical user interfaces.
Flags (system use only)
Provides additional information about the state of a resource. Primarily, this
attribute raises flags pertaining to the resource. Values:
ADMIN WAIT—The running configuration of a system is lost.
RESTARTING—The agent is attempting to restart the resource because the resource
was unexpectedly detected as offline in the latest monitor cycle. See the
RestartLimit attribute for more information.
STATE UNKNOWN—The latest monitor call by the agent could not determine if the
resource was online or offline.
MONITOR TIMEDOUT—The latest monitor call by the agent was terminated because
it exceeded the maximum time specified by the static attribute MonitorTimeout.
UNABLE TO OFFLINE—The agent attempted to offline the resource but the resource did not
go offline. This flag is also set when a resource faults and the clean function completes
successfully, but the subsequent monitor hangs or is unable to determine resource status.
Group String name of the service group to which the resource belongs.
IState (system use only)
The internal state of a resource. In addition to the State attribute, this attribute
shows the state to which the resource is transitioning. Values:
NOT WAITING—Resource is not in transition.
WAITING TO GO ONLINE—Agent notified to bring the resource online but procedure not yet
complete.
WAITING TO GO OFFLINE—Agent notified to take the resource offline but procedure not yet
complete.
WAITING TO GO OFFLINE (path)—Agent notified to take the resource offline but
procedure not yet complete. When the procedure completes, the resource's children
that are members of the path in the dependency tree will also be brought offline.
WAITING FOR PARENT OFFLINE—Resource waiting for the parent resource to go
offline. When the parent is offline, the resource is brought offline.
Note: Although this attribute accepts integer types, the command line indicates the text
representations.
VCS attributes 659
Resource attributes
■ WAITING FOR OFFLINE VALIDATION (migrate) – Applies to a resource on the
source system; indicates that the migration operation has been accepted and
VCS is validating whether migration is possible.
■ WAITING FOR MIGRATION OFFLINE – Applies to a resource on the source
system; indicates that the migration operation has passed the prerequisite
checks and validations on the source system.
■ WAITING TO COMPLETE MIGRATION – Applies to a resource on the source
system; indicates that the migration process is complete on the source system
and the VCS engine is waiting for the resource to come online on the target
system.
IStates on the target system for migration operations:
■ WAITING FOR ONLINE VALIDATION (migrate) – Applies to a resource on the
target system; indicates that the migration operation has been accepted and
VCS is validating whether migration is possible.
■ WAITING FOR MIGRATION ONLINE – Applies to a resource on the target
system; indicates that the migration operation has passed the prerequisite
checks and validations on the source system.
■ WAITING TO COMPLETE MIGRATION (online) – Applies to a resource on the
target system; indicates that the migration process is complete on the source
system and the VCS engine is waiting for the resource to come online on the
target system.
LastOnline Indicates the system name on which the resource was last online. This attribute is set by VCS.
ManageFaults (user-defined)
Specifies whether VCS responds to a resource fault by calling the Clean entry point.
Its value supersedes all the values assigned to the attribute at the service group
level.
This attribute can take the following values:
■ ACT: VCS invokes the Clean function with CleanReason set to Online Hung.
■ IGNORE: VCS changes the resource state to ONLINE|ADMIN_WAIT.
■ NULL (Blank): VCS takes action based on the values set for the attribute at the service group
level.
Default value: “”
MonitorMethod Specifies the monitoring method that the agent uses to monitor the resource:
Default: Traditional
MonitorOnly (system use only)
Indicates whether the resource can be brought online or taken offline. If set to 0,
the resource can be brought online or taken offline. If set to 1, the resource can
only be monitored.
Note: This attribute can only be affected by the command hagrp -freeze.
MonitorTimeStats (system use only)
Valid keys are Average and TS. Average is the average time taken by the monitor
function over the last Frequency number of monitor cycles. TS is the timestamp
indicating when the engine updated the resource's Average value.
Path (system use only)
Set to 1 to identify a resource as a member of a path in the dependency tree to be
taken offline on a specific system after a resource faults.
■ Type and dimension: boolean-scalar
■ Default: 0
Probed (system use only)
Indicates whether the state of the resource has been determined by the agent by
running the monitor function.
■ Type and dimension: boolean-scalar
■ Default: 0
ResourceInfo (system use only)
This attribute has three predefined keys. State: values are Valid, Invalid, or Stale.
Msg: output of the resource's info agent function on stdout, as captured by the
agent framework. TS: timestamp indicating when the agent framework updated
the ResourceInfo attribute.
ResourceOwner (user-defined)
This attribute is used for VCS email notification and logging. VCS sends email
notification to the person designated in this attribute when events related to the
resource occur. Note that while VCS logs most events, not all events trigger
notifications. VCS also logs the owner name when certain events occur.
Make sure to set the severity level at which you want notifications to be sent to
ResourceOwner, or to at least one recipient defined in the SmtpRecipients attribute
of the NotifierMngr agent.
ResourceRecipients (user-defined)
This attribute is used for VCS email notification. VCS sends email notification to
the persons designated in this attribute when events related to the resource occur
and the event's severity level is equal to or greater than the level specified in the
attribute.
Make sure to set the severity level at which you want notifications to be sent to
ResourceRecipients or to at least one recipient defined in the SmtpRecipients attribute of the
NotifierMngr agent.
Signaled (system use only)
Indicates whether a resource has been traversed. Used when bringing a service
group online or taking it offline.
■ Type and dimension: integer-association
■ Default: Not applicable.
Start Indicates whether a resource was started (the process of bringing it online was initiated) on a
system.
(system use only)
■ Type and dimension: integer-scalar
■ Default: 0
State
(system use only)
Resource state displays the state of the resource and the flags associated with the resource. (Flags are also captured by the Flags attribute.) This attribute and Flags present a comprehensive view of the resource's current state. Values:
ONLINE
OFFLINE
FAULTED
OFFLINE|MONITOR TIMEDOUT
OFFLINE|STATE UNKNOWN
OFFLINE|ADMIN WAIT
ONLINE|RESTARTING
ONLINE|MONITOR TIMEDOUT
ONLINE|STATE UNKNOWN
ONLINE|UNABLE TO OFFLINE
ONLINE|ADMIN WAIT
FAULTED|MONITOR TIMEDOUT
FAULTED|STATE UNKNOWN
Default: 0
VCS attributes 663
Resource attributes
TriggerPath
(user-defined)
Enables you to customize the trigger path. If a trigger is enabled but the trigger path at the service group level and at the resource level is "" (default), VCS invokes the trigger from the $VCS_HOME/bin/triggers directory.
The TriggerPath value is case-sensitive. VCS does not trim leading or trailing spaces in the TriggerPath value. If the path contains leading or trailing spaces, the trigger might fail to get executed. The path that you specify is relative to $VCS_HOME and the trigger path defined for the service group:
ServiceGroupTriggerPath/Resource/Trigger
If TriggerPath for service group sg1 is mytriggers/sg1 and TriggerPath for resource res1 is "",
you must store the trigger script in the $VCS_HOME/mytriggers/sg1/res1 directory. For example,
store the resstatechange trigger script in the $VCS_HOME/mytriggers/sg1/res1 directory. This lets you manage triggers for all the resources of a service group more easily.
If TriggerPath for resource res1 is mytriggers/sg1/vip1 in the preceding example, you must store
the trigger script in the $VCS_HOME/mytriggers/sg1/vip1 directory. For example, store the
resstatechange trigger script in the $VCS_HOME/mytriggers/sg1/vip1 directory.
Modification of TriggerPath value at the resource level does not change the TriggerPath value
at the service group level. Likewise, modification of TriggerPath value at the service group level
does not change the TriggerPath value at the resource level.
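A sketch of how these paths might be set (sg1, vip1, and the directory names are illustrative; verify syntax against the hagrp and hares man pages):

```
# Group-level trigger path; scripts then live under
# $VCS_HOME/mytriggers/sg1/<resource>/
hagrp -modify sg1 TriggerPath mytriggers/sg1
# Resource-level path; scripts for vip1 then live under
# $VCS_HOME/mytriggers/sg1/vip1/
hares -modify vip1 TriggerPath mytriggers/sg1/vip1
```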
TriggerResRestart
(user-defined)
Determines whether or not to invoke the resrestart trigger if a resource restarts.
If this attribute is enabled at the group level, the resrestart trigger is invoked irrespective of the value of this attribute at the resource level.
TriggerResStateChange
(user-defined)
Determines whether or not to invoke the resstatechange trigger if the resource changes state.
See “About the resstatechange event trigger” on page 431.
If this attribute is enabled at the group level, then the resstatechange trigger is invoked irrespective of the value of this attribute at the resource level.
TriggersEnabled
(user-defined)
Determines if a specific trigger is enabled on a node or not. Triggers are disabled by default. You can enable specific triggers on all nodes or only on selected nodes. Valid values are RESFAULT, RESNOTOFF, RESSTATECHANGE, RESRESTART, and RESADMINWAIT.
To enable triggers on a specific node, add trigger keys in the following format:
To enable triggers on all nodes in the cluster, add trigger keys in the following format:
The resadminwait trigger and resnotoff trigger are enabled on all nodes.
■ Type and dimension: string-keylist
■ Default: {}
AdvDbg
(user-defined)
Enables activation of advanced debugging. For information about the AdvDbg attribute, see the Cluster Server Agent Developer's Guide.
AgentClass Indicates the scheduling class for the VCS agent process.
(user-defined)
Use only one of the following sets of attributes to configure scheduling class and priority for VCS:
■ AgentClass, AgentPriority, ScriptClass, and ScriptPriority
■ OnlineClass, OnlinePriority, EPClass, and EPPriority
AgentDirectory Complete path of the directory in which the agent binary and scripts are located.
(user-defined)
Agents look for binaries and scripts in the directory specified by this attribute and in the agent's default installation directories. If none of these directories exists, the agent does not start.
Use this attribute in conjunction with the AgentFile attribute to specify a different location
or different binary for the agent.
AgentFailedOn A list of systems on which the agent for the resource type has failed.
AgentFile
(user-defined)
Complete name and path of the binary for an agent. If you do not specify a value for this attribute, VCS uses the agent binary at the path defined by the AgentDirectory attribute.
AgentPriority
(user-defined)
Indicates the priority in which the agent process runs. Use only one of the following sets of attributes to configure scheduling class and priority for VCS:
■ AgentClass, AgentPriority, ScriptClass, and ScriptPriority
■ OnlineClass, OnlinePriority, EPClass, and EPPriority
Default: 0
AgentReplyTimeout The number of seconds the engine waits to receive a heartbeat from the agent before
restarting the agent.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 130 seconds
AgentStartTimeout The number of seconds after starting the agent that the engine waits for the initial agent
"handshake" before restarting the agent.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 60 seconds
VCS attributes 667
Resource type attributes
AlertOnMonitorTimeouts
(user-defined)
Note: This attribute can be overridden.
When a monitor times out as many times as the value, or a multiple of the value, specified by this attribute, VCS sends an SNMP notification to the user. If this attribute is set to a value N, then after sending the notification at the first monitor timeout, VCS also sends an SNMP notification at each N consecutive monitor timeouts, including the first monitor timeout for the second-time notification.
ArgList An ordered list of attributes whose values are passed to the open, close, online, offline,
monitor, clean, info, and action functions.
(user-defined)
■ Type and dimension: string-vector
■ Default: Not applicable.
AttrChangedTimeout
(user-defined)
Note: This attribute can be overridden.
Maximum time (in seconds) within which the attr_changed function must complete or be terminated.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
CleanRetryLimit Number of times to retry the clean function before moving a resource to ADMIN_WAIT
state. If set to 0, clean is re-tried indefinitely.
(user-defined)
The valid values of this attribute are in the range of 0-1024.
CleanTimeout
(user-defined)
Note: This attribute can be overridden.
Maximum time (in seconds) within which the clean function must complete or else be terminated.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
CloseTimeout
(user-defined)
Note: This attribute can be overridden.
Maximum time (in seconds) within which the close function must complete or else be terminated.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
ConfInterval
(user-defined)
Note: This attribute can be overridden.
When a resource has remained online for the specified time (in seconds), previous faults and restart attempts are ignored by the agent. (See the ToleranceLimit and RestartLimit attributes for details.)
■ Type and dimension: integer-scalar
■ Default: 600 seconds
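As a hedged sketch of how ConfInterval and RestartLimit interact (the Process type and the values are illustrative), a resource could be allowed two quick restarts whose count is forgotten after five fault-free minutes online:

```
# Retry an unexpected offline twice before faulting the resource
hatype -modify Process RestartLimit 2
# Forget previous faults and restarts after 300 s of stable online time
hatype -modify Process ConfInterval 300
```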
ContainerOpts Specifies information that passes to the agent that controls the resources. These values
are only effective when you set the ContainerInfo service group attribute.
(system use only)
■ RunInContainer
When the value of the RunInContainer key is 1, the agent function (entry point) for
that resource runs inside of the local container.
When the value of the RunInContainer key is 0, the agent function (entry point) for
that resource runs outside the local container (in the global environment).
■ PassCInfo
When the value of the PassCInfo key is 1, the agent function receives the container
information that is defined in the service group’s ContainerInfo attribute. An example
use of this value is to pass the name of the container to the agent. When the value
of the PassCInfo key is 0, the agent function does not receive the container
information that is defined in the service group’s ContainerInfo attribute.
EPClass Enables you to control the scheduling class for the agent functions (entry points) other
than the online entry point whether the entry point is in C or scripts.
(user-defined)
The following values are valid for this attribute:
■ RT (Real Time)
■ TS (Time Sharing)
■ -1—indicates that VCS does not use this attribute to control the scheduling class
of entry points.
Use only one of the following sets of attributes to configure scheduling class and priority for VCS:
■ AgentClass, AgentPriority, ScriptClass, and ScriptPriority
■ OnlineClass, OnlinePriority, EPClass, and EPPriority
EPPriority
(user-defined)
Enables you to control the scheduling priority for the agent functions (entry points) other than the online entry point. The attribute controls the agent function priority whether the entry point is in C or scripts.
The following values are valid for this attribute:
■ 0—indicates the default priority value for the configured scheduling class as given
by the EPClass attribute for the operating system.
■ Greater than 0—indicates a value greater than the default priority for the operating
system. Veritas recommends a value of greater than 0 for this attribute. A system
that has a higher load requires a greater value.
■ -1—indicates that VCS does not use this attribute to control the scheduling priority
of entry points.
Use only one of the following sets of attributes to configure scheduling class and priority for VCS:
■ AgentClass, AgentPriority, ScriptClass, and ScriptPriority
■ OnlineClass, OnlinePriority, EPClass, and EPPriority
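For instance (an illustrative sketch; the Oracle type name is hypothetical here), the non-online entry points could be run in the time-sharing class at the default priority for that class:

```
# Schedule non-online entry points in the time-sharing class
hatype -modify Oracle EPClass TS
# 0 = default priority for the configured scheduling class
hatype -modify Oracle EPPriority 0
```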
ExternalStateChange
(user-defined)
Note: This attribute can be overridden.
Defines how VCS handles service group state when resources are intentionally brought online or taken offline outside of VCS control.
The attribute can take the following values:
OnlineGroup: If the configured application is started outside of VCS control, VCS brings the corresponding service group online.
FaultOnMonitorTimeouts
(user-defined)
Note: This attribute can be overridden.
When a monitor times out as many times as the value specified, the corresponding resource is brought down by calling the clean function. The resource is then marked FAULTED, or it is restarted, depending on the value set in the RestartLimit attribute.
When FaultOnMonitorTimeouts is set to 0, monitor failures are not considered indicative of a resource fault. A low value may lead to spurious resource faults, especially on heavily loaded systems.
FaultPropagation
(user-defined)
Note: This attribute can be overridden.
Specifies if VCS should propagate the fault up to parent resources and take the entire service group offline when a resource faults.
The value 1 indicates that when a resource faults, VCS fails over the service group, if the group’s AutoFailOver attribute is set to 1. The value 0 indicates that when a resource faults, VCS does not take other resources offline, regardless of the value of the Critical attribute. The service group does not fail over on resource fault.
FireDrill Specifies whether or not fire drill is enabled for the resource type.
IMF
(user-defined)
Note: This attribute can be overridden.
Determines whether the IMF-aware agent must perform intelligent resource monitoring. You can also override the value of this attribute at the resource level.
Type and dimension: integer-association
This attribute includes the following keys:
■ Mode
Define this attribute to enable or disable intelligent resource monitoring.
Valid values are as follows:
■ 0—Does not perform intelligent resource monitoring
■ 1—Performs intelligent resource monitoring for offline resources and performs
poll-based monitoring for online resources
■ 2—Performs intelligent resource monitoring for online resources and performs
poll-based monitoring for offline resources
■ 3—Performs intelligent resource monitoring for both online and for offline
resources
■ MonitorFreq
This key value specifies the frequency at which the agent invokes the monitor agent
function. The value of this key is an integer.
You can set this attribute to a non-zero value in some cases where the agent
requires to perform poll-based resource monitoring in addition to the intelligent
resource monitoring. See the Cluster Server Bundled Agents Reference Guide for
agent-specific recommendations.
After the resource registers with the IMF notification module, the agent calls the
monitor agent function as follows:
■ After every (MonitorFreq x MonitorInterval) number of seconds for online
resources
■ After every (MonitorFreq x OfflineMonitorInterval) number of seconds for offline
resources
■ RegisterRetryLimit
If you enable IMF, the agent invokes the imf_register agent function to register the
resource with the IMF notification module. The value of the RegisterRetryLimit key
determines the number of times the agent must retry registration for a resource. If
the agent cannot register the resource within the limit that is specified, then intelligent
monitoring is disabled until the resource state changes or the value of the Mode
key changes.
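A hedged sketch of enabling these keys for a type (Mount is illustrative; the -update keyword syntax for association keys should be verified against the hatype man page):

```
# Mode 3: intelligent monitoring for online and offline resources;
# MonitorFreq 5: a poll-based monitor every 5th interval as a backstop
hatype -modify Mount IMF -update Mode 3 MonitorFreq 5 RegisterRetryLimit 3
```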
IMFRegList An ordered list of attributes whose values are registered with the IMF notification
module.
(user-defined)
■ Type and dimension: string-vector
■ Default: Not applicable.
InfoInterval Duration (in seconds) after which the info function is invoked by the agent framework
for ONLINE resources of the particular resource type.
(user-defined)
If set to 0, the agent framework does not periodically invoke the info function. To
manually invoke the info function, use the command hares -refreshinfo. If the value
you designate is 30, for example, the function is invoked every 30 seconds for all
ONLINE resources of the particular resource type.
IntentionalOffline Defines how VCS reacts when a configured application is intentionally stopped outside
of VCS control.
(user-defined)
Add this attribute for agents that support detection of an intentional offline outside of
VCS control. Note that the intentional offline feature is available for agents registered
as V51 or later.
The value 0 instructs the agent to register a fault and initiate the failover of a service
group when the supported resource is taken offline outside of VCS control.
The value 1 instructs VCS to take the resource offline when the corresponding
application is stopped outside of VCS control.
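As an illustrative sketch (the Application type stands in for any agent registered as V51 or later that supports this feature), intentional-offline handling could be enabled like this:

```
# Treat an application stopped outside VCS control as an intentional
# offline rather than a fault (agent must support this feature)
hatype -modify Application IntentionalOffline 1
```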
InfoTimeout Timeout value for the info function. If the function does not complete by the designated time, the agent framework cancels the function’s thread.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 30 seconds
LevelTwoMonitorFreq Specifies the frequency at which the agent for this resource type must perform
second-level or detailed monitoring.
(user-defined)
Type and dimension: integer-scalar
Default: 0
LogDbg Indicates the debug severities enabled for the resource type or agent framework. Debug
severities used by the agent functions are in the range of DBG_1–DBG_21. The debug
(user-defined)
messages from the agent framework are logged with the severities DBG_AGINFO,
DBG_AGDEBUG and DBG_AGTRACE, representing the least to most verbose.
The LogDbg attribute can be overridden. Using the LogDbg attribute, you can set the DBG_AGINFO, DBG_AGTRACE, and DBG_AGDEBUG severities at the resource level, but this has no effect because these levels are agent-type specific. Veritas recommends setting values from DBG_1 to DBG_21 at the resource level using the LogDbg attribute.
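For example (a sketch; the Mount type and res1 are illustrative, and the -add keylist syntax should be checked against the hatype and hares man pages), debug severities can be enabled at the type level or, after an override, on a single resource:

```
# Enable a debug severity for all resources of the type
hatype -modify Mount LogDbg -add DBG_1
# Override the attribute on one resource and set a finer severity
hares -override res1 LogDbg
hares -modify res1 LogDbg -add DBG_5
```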
LogFileSize Specifies the size (in bytes) of the agent log file. Minimum value is 64 KB. Maximum
value is 134217728 bytes (128MB).
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 33554432 (32 MB)
LogViaHalog
(user-defined)
Determines whether the logs of the entry points go into the respective agent log file or into the engine log file, based on the configured value:
■ 0: The agent’s logs go into the respective agent log file.
■ 1: The C/C++ entry points’ logs go into the agent log file, and the script entry points’ logs go into the engine log file using the halog command.
Type: boolean-scalar
Default: 0
MigrateWaitLimit Number of monitor intervals to wait for a resource to migrate after the migrating
procedure is complete. MigrateWaitLimit is applicable for the source and target node
(user-defined)
because the migrate operation takes the resource offline on the source node and brings
the resource online on the target node. You can also define MigrateWaitLimit as the
number of monitor intervals to wait for the resource to go offline on the source node
after completing the migrate procedure and the number of monitor intervals to wait for
the resource to come online on the target node after resource is offline on the source
node.
Probes fired manually are counted when MigrateWaitLimit is set and the resource is
waiting to migrate. For example, if the MigrateWaitLimit of a resource is set to 5 and
the MonitorInterval is set to 60 (seconds), the resource waits for a maximum of five
monitor intervals (that is, 5 x 60), and if all five monitors within MigrateWaitLimit report
the resource as online on source node, it sets the ADMIN_WAIT flag. If you run another
probe, the resource waits for four monitor intervals (that is, 4 x 60), and if the fourth
monitor does not report the state as offline on source, it sets the ADMIN_WAIT flag.
This procedure is repeated for 5 complete cycles. Similarly, if the resource does not move to the online state on the target node within MigrateWaitLimit, the ADMIN_WAIT flag is set.
MigrateTimeout Maximum time (in seconds) within which the migrate procedure must complete or else
be terminated.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 600 seconds
MonitorInterval
(user-defined)
Note: This attribute can be overridden.
Duration (in seconds) between two consecutive monitor calls for an ONLINE or transitioning resource.
Note: The value of this attribute for the MultiNICB type must be less than its value for the IPMultiNICB type. See the Cluster Server Bundled Agents Reference Guide for more information.
A low value may impact performance if many resources of the same type exist. A high
value may delay detection of a faulted resource.
MonitorStatsParam Stores the required parameter values for calculating monitor time statistics.
Frequency: The number of monitor cycles after which the average monitor cycle time should be computed and sent to the engine. If configured, the value for this attribute must be between 1 and 30. The value 0 indicates that the monitor cycle time should not be computed. Default=0.
ExpectedValue: The expected monitor time in milliseconds for all resources of this
type. Default=100.
MonitorTimeout
(user-defined)
Note: This attribute can be overridden.
Maximum time (in seconds) within which the monitor function must complete or else be terminated.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
NumThreads Number of threads used within the agent process for managing resources. This number
does not include threads used for other internal purposes.
(user-defined)
If the number of resources being managed by the agent is less than or equal to the NumThreads value, only that many threads are created in the agent. Adding more resources does not create more service threads. Similarly, deleting resources causes service threads to exit. Thus, setting NumThreads to 1 forces the agent to use a single service thread no matter what the resource count is. The agent framework limits the value of this attribute to 30.
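For instance (the Mount type is illustrative), the thread pool might be reduced for a type that manages only a handful of resources:

```
# Limit the agent to 5 service threads (the framework caps this at 30)
hatype -modify Mount NumThreads 5
```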
OfflineMonitorInterval
(user-defined)
Note: This attribute can be overridden.
Duration (in seconds) between two consecutive monitor calls for an OFFLINE resource. If set to 0, OFFLINE resources are not monitored.
■ Type and dimension: integer-scalar
■ Default: 300 seconds
OfflineTimeout
(user-defined)
Note: This attribute can be overridden.
Maximum time (in seconds) within which the offline function must complete or else be terminated.
■ Type and dimension: integer-scalar
■ Default: 300 seconds
OfflineWaitLimit
(user-defined)
Note: This attribute can be overridden.
Number of monitor intervals to wait for the resource to go offline after completing the offline procedure. Increase the value of this attribute if the resource is likely to take a longer time to go offline.
Probes fired manually are counted when OfflineWaitLimit is set and the resource is waiting to go offline. For example, say the OfflineWaitLimit of a resource is set to 5 and
the MonitorInterval is set to 60. The resource waits for a maximum of five monitor
intervals (five times 60), and if all five monitors within OfflineWaitLimit report the resource
as online, it calls the clean agent function. If the user fires a probe, the resource waits
for four monitor intervals (four times 60), and if the fourth monitor does not report the
state as offline, it calls the clean agent function. If the user fires another probe, one
more monitor cycle is consumed and the resource waits for three monitor intervals
(three times 60), and if the third monitor does not report the state as offline, it calls the
clean agent function.
OnlineClass Enables you to control the scheduling class for the online agent function (entry point).
This attribute controls the class whether the entry point is in C or scripts.
(user-defined)
The following values are valid for this attribute:
■ RT (Real Time)
■ TS (Time Sharing)
■ -1—indicates that VCS does not use this attribute to control the scheduling class
of entry points.
Use only one of the following sets of attributes to configure scheduling class and priority for VCS:
■ AgentClass, AgentPriority, ScriptClass, and ScriptPriority
■ OnlineClass, OnlinePriority, EPClass, and EPPriority
OnlinePriority Enables you to control the scheduling priority for the online agent function (entry point).
This attribute controls the priority whether the entry point is in C or scripts.
(user-defined)
The following values are valid for this attribute:
■ 0—indicates the default priority value for the configured scheduling class as given
by the OnlineClass for the operating system.
Veritas recommends that you set the value of the OnlinePriority attribute to 0.
■ Greater than 0—indicates a value greater than the default priority for the operating
system.
■ -1—indicates that VCS does not use this attribute to control the scheduling priority
of entry points.
Use only one of the following sets of attributes to configure scheduling class and priority for VCS:
■ AgentClass, AgentPriority, ScriptClass, and ScriptPriority
■ OnlineClass, OnlinePriority, EPClass, and EPPriority
OnlineRetryLimit
(user-defined)
Note: This attribute can be overridden.
Number of times to retry the online operation if the attempt to bring a resource online is unsuccessful. This parameter is meaningful only if the clean operation is implemented.
■ Type and dimension: integer-scalar
■ Default: 0
OnlineTimeout
(user-defined)
Note: This attribute can be overridden.
Maximum time (in seconds) within which the online function must complete or else be terminated.
■ Type and dimension: integer-scalar
■ Default: 300 seconds
OnlineWaitLimit
(user-defined)
Note: This attribute can be overridden.
Number of monitor intervals to wait for the resource to come online after completing the online procedure. Increase the value of this attribute if the resource is likely to take a longer time to come online.
Each probe command fired by the user is considered as one monitor interval. For
example, say the OnlineWaitLimit of a resource is set to 5. This means that the resource
will be moved to a faulted state after five monitor intervals. If the user fires a probe,
then the resource will be faulted after four monitor cycles, if the fourth monitor does
not report the state as ONLINE. If the user again fires a probe, then one more monitor
cycle is consumed and the resource will be faulted if the third monitor does not report
the state as ONLINE.
OpenTimeout
(user-defined)
Note: This attribute can be overridden.
Maximum time (in seconds) within which the open function must complete or else be terminated.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
Operations Indicates valid operations for resources of the resource type. Values are OnOnly (can
online only), OnOff (can online and offline), None (cannot online or offline).
(user-defined)
■ Type and dimension: string-scalar
■ Default: OnOff
RestartLimit
(user-defined)
Note: This attribute can be overridden.
Number of times to retry bringing a resource online when it is taken offline unexpectedly and before VCS declares it FAULTED.
■ Type and dimension: integer-scalar
■ Default: 0
ScriptClass Indicates the scheduling class of the script processes (for example, online) created by
the agent.
(user-defined)
Use only one of the following sets of attributes to configure scheduling class and priority for VCS:
■ AgentClass, AgentPriority, ScriptClass, and ScriptPriority
■ OnlineClass, OnlinePriority, EPClass, and EPPriority
ScriptPriority Indicates the priority of the script processes created by the agent.
(user-defined)
Use only one of the following sets of attributes to configure scheduling class and priority for VCS:
■ AgentClass, AgentPriority, ScriptClass, and ScriptPriority
■ OnlineClass, OnlinePriority, EPClass, and EPPriority
SourceFile File from which the configuration is read. Do not configure this attribute in main.cf.
(user-defined) Make sure the path exists on all nodes before running a command that configures this
attribute.
SupportedOperations
(user-defined)
Indicates the additional operations for a resource type or an agent. Only the migrate keyword is supported.
■ Type and dimension: string-keylist
■ Default: {}
ToleranceLimit
(user-defined)
Note: This attribute can be overridden.
After a resource goes online, the number of times the monitor function should return OFFLINE before declaring the resource FAULTED.
A large value could delay detection of a genuinely faulted resource.
■ Type and dimension: integer-scalar
■ Default: 0
TypeOwner
(user-defined)
This attribute is used for VCS notification. VCS sends notifications to persons designated in this attribute when an event related to the agent's resource type occurs. If the agent of that type faults or restarts, VCS sends notification to the TypeOwner. Note that while VCS logs most events, not all events trigger notifications.
Make sure to set the severity level at which you want notifications to be sent to
TypeOwner or to at least one recipient defined in the SmtpRecipients attribute of the
NotifierMngr agent.
TypeRecipients
(user-defined)
The email IDs set in the TypeRecipients attribute receive email notification for events related to a specific agent. There are only two types of events related to an agent for which notifications are sent: the agent faults, or the agent restarts.
AdministratorGroups List of operating system user account groups that have administrative
privileges on the service group.
(user-defined)
This attribute applies to clusters running in secure mode.
Authority
(user-defined)
Indicates whether or not the local cluster is allowed to bring the service group online. If set to 0, it is not; if set to 1, it is. Only one cluster can have this attribute set to 1 for a specific global group.
AutoClearCount Indicates the number of attempts that the VCS engine made to clear
the state of the service group that has faulted and does not have a
(System use only)
failover target. This attribute is used only if the AutoClearLimit
attribute is set for the service group.
AutoClearInterval Indicates the interval in seconds after which a service group that has
faulted and has no failover target is cleared automatically. The state
(user-defined)
of the service group is cleared only if AutoClearLimit is set to a
non-zero value.
Default: 0
AutoClearLimit
(user-defined)
Defines the number of attempts to be made to clear the Faulted state of a service group. Disables the auto-clear feature when set to zero.
VCS attributes 682
Service group attributes
AutoDisabled
(system use only)
Indicates that VCS does not know the status of a service group (or specified system for parallel service groups). This can occur because the group is not probed (on the specified system for parallel groups) in the SystemList attribute, or because the VCS engine is not running on a node designated in the SystemList attribute while the node is visible.
When VCS does not know the status of a service group on a node but you want VCS to consider the service group enabled, use the hagrp -autoenable command to change the AutoDisabled value to 0.
AutoRestart
(user-defined)
Restarts a service group after a faulted persistent resource becomes online.
■ 0—Autorestart is disabled.
■ 1—Autorestart is enabled.
■ 2—When a faulted persistent resource recovers from a fault, the
VCS engine clears the faults on all non-persistent faulted
resources on the system. It then restarts the service group.
AutoStartList List of systems on which, under specific conditions, the service group
will be started with VCS (usually at system boot). For example, if a
(user-defined)
system is a member of a failover service group’s AutoStartList
attribute, and if the service group is not already running on another
system in the cluster, the group is brought online when the system
is started.
AutoStartPolicy Sets the policy VCS uses to determine the system on which a service
group is brought online during an autostart operation if multiple
(user-defined)
systems exist.
Possible values are Order, Load, and Priority.
CapacityReserved
(system use only)
Indicates whether capacity is reserved for the service group on a system. 1: Capacity is reserved.
To list this attribute, use the -all option with the hagrp -display command.
ClusterFailOverPolicy
(user-defined)
Determines how a global service group behaves when a cluster faults or when a global group faults. The attribute can take the values Manual, Auto, and Connected.
ClusterList Specifies the list of clusters on which the service group is configured
to run.
(user-defined)
■ Type and dimension: integer-association
■ Default: {} (none)
ContainerInfo Specifies if you can use the service group with the container. Assign
the following values to the ContainerInfo attribute:
(user-defined)
■ Name: The name of the container.
■ Type: The type of container. You can set this to WPAR.
■ Enabled: Specify the value as 1 to enable the container. Specify
the value as 0 to disable the container. Specify the value as 2 to
enable failovers from physical computers to virtual machines and
from virtual machines to physical computers. Refer to the Veritas
InfoScale 7.2 Virtualization Guide for more information on use of
value 2 for the Enabled key.
You can set a per-system value or a global value for this attribute.
You can change the attribute scope from local to global, and from global to local, by using the hagrp command. For more information about the -local option and the -global option, see the man pages associated with the hagrp command.
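An illustrative sketch (sg1 and wpar1 are hypothetical; association-key syntax should be checked against the hagrp man page):

```
# Associate the group with a WPAR container and enable it
hagrp -modify sg1 ContainerInfo Name wpar1 Type WPAR Enabled 1
```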
DisableFaultMessages Suppresses fault and failover messages, for a group and its
resources, from getting logged in the VCS engine log file. This
(user-defined)
attribute does not suppress the information messages getting logged
in the log file.
The attribute can take the following values:
■ 0 - Logs all the fault and failover messages for the service group
and its resources.
■ 1 - Disables the fault and failover messages of the service groups,
but continues to log resource messages.
■ 2 - Disables the fault and failover messages of the service group
resources, but continues to log service group messages.
■ 3 - Disables the fault and failover messages of both service
groups and its resources.
DeferAutoStart Indicates whether HAD defers the auto-start of a global group in the
local cluster in case the global cluster is not fully connected.
(system use only)
■ Type and dimension: boolean-scalar
■ Default: Not applicable
Enabled
(user-defined)
Indicates if a service group can be failed over or brought online.
The attribute can have global or local scope. If you define local (system-specific) scope for this attribute, VCS prevents the service group from coming online on specified systems that have a value of 0 for the attribute. You can use this attribute to prevent failovers on a system when performing maintenance on the system.
Evacuating Indicates the node ID from which the service group is being
evacuated.
(system use only)
■ Type and dimension: integer-scalar
■ Default: Not applicable
EvacList
(system use only)
Contains a list of pairs of low-priority service groups and the systems on which they will be evacuated.
EvacuatingForGroup Displays the name of the high priority service group for which
evacuation is in progress. The service group name is visible only as
(system use only)
long as the evacuation is in progress.
FailOverPolicy Defines the failover policy used by VCS to determine the system to
(user-defined) which a group fails over. It is also used to determine the system on
which a service group has been brought online through a manual
operation.
The policy is defined only for clusters that contain multiple systems.
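For example, a sketch with a hypothetical group name grp1; BiggestAvailable is one of the policy values this guide references elsewhere:

```
# hagrp -modify grp1 FailOverPolicy BiggestAvailable
```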
FromQ Indicates the system name from which the service group is failing
(system use only) over. This attribute is specified when service group failover is a direct
consequence of a group event, such as a resource fault within the
group or a group switch.
Frozen Disables all actions, including autostart, online and offline, and
(system use only) failover, except for monitor actions performed by agents. (This
convention is observed by all agents supplied with VCS.)
GroupOwner This attribute is used for VCS email notification and logging. VCS
(user-defined) sends email notification to the person designated in this attribute
when events occur that are related to the service group. Note that
while VCS logs most events, not all events trigger notifications.
Make sure to set the severity level at which you want notifications to
be sent to GroupOwner or to at least one recipient defined in the
SmtpRecipients attribute of the NotifierMngr agent.
GroupRecipients This attribute is used for VCS email notification. VCS sends email
(user-defined) notification to persons designated in this attribute when events related
to the service group occur and when the event's severity level is
equal to or greater than the level specified in the attribute.
Make sure to set the severity level at which you want notifications to
be sent to GroupRecipients or to at least one recipient defined in the
SmtpRecipients attribute of the NotifierMngr agent.
Guests List of operating system user accounts that have Guest privileges
(user-defined) on the service group.
This attribute applies to clusters running in secure mode.
(system use only) VCS sets this attribute to 1 if an attempt has been made to bring the
service group online.
For failover groups, VCS sets this attribute to 0 when the group is
taken offline.
For parallel groups, it is set to 0 for the system when the group is
taken offline or when the group faults and can fail over to another
system.
VCS sets this attribute to 2 for service groups if VCS attempts to
autostart a service group; for example, attempting to bring a service
group online on a system from AutoStartList.
IntentionalOnlineList Lists the nodes where a resource that can be intentionally brought
(system use only) online is found ONLINE at first probe. IntentionalOnlineList is used
along with AutoStartList to determine the node on which the service
group should go online when a cluster starts.
LastSuccess Indicates the time when the service group was last brought online.
Load When the cluster attribute Statistics is not enabled, the allowed key
(user-defined) value is Units.
■ You cannot change this attribute when the service group attribute
CapacityReserved is set to 1 in the cluster and when the
FailOverPolicy is set to BiggestAvailable. This is because the
VCS engine reserves system capacity based on the service group
attribute Load.
When the service group's online transition completes and after
the next forecast cycle, CapacityReserved is reset.
■ If the FailOverPolicy is set to BiggestAvailable for a service group,
the attribute Load must be specified with at least one of the
following keys:
■ CPU
■ Mem
■ Swap
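As a sketch in main.cf (hypothetical group, system names, and values; the units of the keys follow the cluster's metering configuration):

```
group grp1 (
    SystemList = { sysA = 0, sysB = 1 }
    FailOverPolicy = BiggestAvailable
    Load = { CPU = 2, Mem = 2048 }
    )
```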
ManageFaults Specifies whether VCS manages resource failures within the service
(user-defined) group by calling the Clean function for the resources. This attribute
can take the following values.
NONE—VCS does not call the Clean function for any resource in
the group. You must manually handle resource faults.
MeterWeight Represents the weight given to the keys of the cluster attribute
(user-defined) HostMeters to determine a target system for a service group when
more than one system meets the group attribute’s Load requirements.
MigrateQ Indicates the system from which the service group is migrating. This
(system use only) attribute is specified when group failover is an indirect consequence
of a group event (in situations such as a system shutdown, or another
group faulting while linked to this group).
OnlineClearParent When this attribute is enabled for a service group and the service
group comes online or is detected online, VCS clears the faults on
all online type parent groups, such as online local, online global, and
online remote.
For example, assume that both the parent group and the child group
faulted and both cannot failover. Later, when VCS tries again to bring
the child group online and the group is brought online or detected
online, the VCS engine clears the faults on the parent group, allowing
VCS to restart the parent group too.
OnlineRetryInterval Indicates the interval, in seconds, during which a service group that
(user-defined) has successfully restarted on the same system and faults again
should be failed over, even if the attribute OnlineRetryLimit is
non-zero. This prevents a group from continuously faulting and
restarting on the same system.
OnlineRetryLimit If non-zero, specifies the number of times the VCS engine tries to
(user-defined) restart a faulted service group on the same system on which the
group faulted before it gives up and tries to fail over the group to
another system.
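For instance (hypothetical group name and values), to allow two restarts on the same system within a 300-second window before failing over:

```
# hagrp -modify grp1 OnlineRetryLimit 2
# hagrp -modify grp1 OnlineRetryInterval 300
```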
OperatorGroups List of operating system user groups that have Operator privileges
(user-defined) on the service group. This attribute applies to clusters running in
secure mode.
Operators List of VCS users with privileges to operate the group. A Group
(user-defined) Operator can perform only online/offline and temporary
freeze/unfreeze operations pertaining to a specific group.
PathCount Number of resources in the path not yet taken offline. When this
(system use only) number drops to zero, the engine may take the entire service group
offline if a critical fault has occurred.
PolicyIntention
(system use only)
■ The value 0 indicates that the service group is not part of the
hagrp -online -propagate operation or the hagrp
-offline -propagate operation.
■ The value 1 indicates that the service group is part of the hagrp
-online -propagate operation.
■ The value 2 indicates that the service group is part of the hagrp
-offline -propagate operation.
PreOnline Indicates that the VCS engine should not bring online a service group
(user-defined) in response to a manual group online, group autostart, or group
failover. The engine should instead run the PreOnline trigger.
You can set a local (per-system) value or a global value for this
attribute. A per-system value enables you to control the firing of
PreOnline triggers on specific nodes in the cluster.
You can change the attribute scope from local to global, or from global
to local, by using the -local option and the -global option. For more
information, see the man pages associated with the hagrp command.
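As an illustration (hypothetical group and system names), a per-system value can be set with the -sys option of hagrp -modify:

```
# hagrp -modify grp1 PreOnline 1 -sys sysA
```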
PreOnlining Indicates that the VCS engine invoked the preonline script; however,
(system use only) the script has not yet returned with group online.
■ Type and dimension: integer-scalar
■ Default: Not applicable
PreonlineTimeout Defines the maximum amount of time, in seconds, that the preonline
(user-defined) script takes to run the command hagrp -online -nopre for the group.
Note that HAD uses this timeout during evacuation only; for example,
when a user runs the command hastop -local -evacuate and the
PreOnline trigger is invoked on the system from which the service
groups are being evacuated.
PreSwitch
(user-defined)
If you set the value as 1, the VCS engine looks for any resource in
the service group that supports PreSwitch action. If the action is not
defined for any resource, the VCS engine switches a service group
normally.
If the action is defined for one or more resources, then the VCS
engine invokes PreSwitch action for those resources. If all the actions
succeed, the engine switches the service group. If any of the actions
fail, the engine aborts the switch operation.
The engine invokes the PreSwitch action in parallel and waits for all
the actions to complete to decide whether to perform a switch
operation. The VCS engine reports the action’s output to the engine
log. The PreSwitch action does not change the configuration or the
cluster state.
PreSwitching Indicates that the VCS engine invoked the agent’s PreSwitch action;
(system use only) however, the action is not yet complete.
■ Type and dimension: integer-scalar
■ Default: Not applicable
Priority Enables users to designate and prioritize the service group. VCS
(user-defined) does not interpret the value; rather, this attribute enables the user
to configure the priority of a service group and the sequence of
actions required in response to a particular event.
VCS assigns a node weight based on the priority of the service group.
Probed Indicates whether all enabled resources in the group have been
(system use only) detected by their respective agents.
■ Type and dimension: boolean-scalar
■ Default: Not applicable
SourceFile File from which the configuration is read. Do not configure this
(user-defined) attribute in main.cf.
Make sure the path exists on all nodes before running a command
that configures this attribute.
SystemList List of systems on which the service group is configured to run, and
(user-defined) their priorities. Lower numbers indicate a preference for the system
as a failover target.
Note: You must define this attribute prior to setting the AutoStartList
attribute.
SystemZones Indicates the virtual sublists within the SystemList attribute that grant
(user-defined) priority in failing over. Values are string/integer pairs. The string key
is the name of a system in the SystemList attribute, and the integer
is the number of the zone. Systems with the same zone number are
members of the same zone. If a service group faults on one system
in a zone, it is granted priority to fail over to another system within
the same zone, despite the policy granted by the FailOverPolicy
attribute.
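As a sketch in main.cf (hypothetical group and system names), two zones of two systems each could be expressed as:

```
group grp1 (
    SystemList = { sysA = 0, sysB = 1, sysC = 2, sysD = 3 }
    SystemZones = { sysA = 0, sysB = 0, sysC = 1, sysD = 1 }
    )
```

With this configuration, if the group faults on sysA, VCS prefers sysB (the other system in zone 0) as the failover target.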
TargetCount Indicates the number of target systems on which the service group
(system use only) should be brought online.
■ Type and dimension: integer-scalar
■ Default: Not applicable
ToQ Indicates the node name to which the service group is failing over.
(system use only) This attribute is specified when service group failover is a direct
consequence of a group event, such as a resource fault within the
group or a group switch.
TriggerPath If a trigger is enabled but the trigger path is "" (the default), VCS
(user-defined) invokes the trigger from the $VCS_HOME/bin/triggers directory. If
you specify an alternate directory, VCS invokes the trigger from that
path. The value is case-sensitive. VCS does not trim leading or trailing
spaces in the TriggerPath value; if the path contains leading or trailing
spaces, the trigger might fail to get executed.
$VCS_HOME/TriggerPath/Trigger
TriggerResFault Defines whether VCS invokes the resfault trigger when a resource
(user-defined) faults. The value 0 indicates that VCS does not invoke the trigger.
■ Type and dimension: boolean-scalar
■ Default: 1
TriggersEnabled Triggers are disabled by default. You can enable specific triggers on
(user-defined) all nodes or on selected nodes. Valid values are VIOLATION,
NOFAILOVER, PREONLINE, POSTONLINE, POSTOFFLINE,
RESFAULT, RESSTATECHANGE, and RESRESTART.
To enable triggers on all nodes in the cluster, add trigger keys in the
following format:
TriggersEnabled = {POSTOFFLINE, POSTONLINE}
The postoffline trigger and postonline trigger are enabled on all nodes.
You can change the attribute scope from local to global, or from global
to local, by using the -local option and the -global option. For more
information, see the man pages associated with the hagrp command.
To list this attribute, use the -all option with the hagrp -display
command.
You can change the attribute scope from local to global, or from global
to local, by using the -local option and the -global option. For more
information, see the man pages associated with the hagrp command.
UserIntGlobal Use this attribute for any purpose. It is not used by VCS.
UserStrGlobal VCS uses this attribute in the ClusterService group. Do not modify
(user-defined) this attribute in the ClusterService group. Use the attribute for any
purpose in other service groups.
UserIntLocal Use this attribute for any purpose. It is not used by VCS.
UserStrLocal Use this attribute for any purpose. It is not used by VCS.
System attributes
Table D-4 lists the system attributes.
(system use only) The function of this attribute depends on the value of the
cluster-level attribute Statistics.
ConfigDiskState State of the configuration on the disk when the system joined the
(system use only) cluster.
■ Type and dimension: integer-scalar
■ Default: Not applicable
CurrentLimits CurrentLimits = Limits - (additive value of all service group
(system use only) Prerequisites).
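As an illustration with hypothetical values: if a system defines Limits = { Memory = 10 } and two online service groups each declare Prerequisites = { Memory = 3 }, then:

```
CurrentLimits = { Memory = 10 - (3 + 3) } = { Memory = 4 }
```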
FencingWeight Indicates the system priority for preferred fencing. This value
(user-defined) is relative to other systems in the cluster and does not reflect
any real value associated with a particular system.
LicenseType Indicates the license type of the base VCS key used by the
(system use only) system. Possible values are:
0—DEMO
1—PERMANENT
2—PERMANENT_NODE_LOCK
3—DEMO_NODE_LOCK
4—NFR
5—DEMO_EXTENSION
6—NFR_NODE_LOCK
7—DEMO_EXTENSION_NODE_LOCK
Where the value UP for nic1 means there is at least one peer
in the cluster that is visible on nic1.
Where the value DOWN for nic2 means no peer in the cluster
is visible on nic2.
SystemOwner Use this attribute for VCS email notification and logging. VCS
(user-defined) sends email notification to the person designated in this
attribute when an event occurs related to the system. Note
that while VCS logs most events, not all events trigger
notifications.
SystemRecipients This attribute is used for VCS email notification. VCS sends
(user-defined) email notification to persons designated in this attribute when
events related to the system occur and when the event's
severity level is equal to or greater than the level specified
in the attribute.
(system use only) Down (0): System is powered off, or GAB and LLT are not
running on the system.
UserInt Stores integer values you want to use. VCS does not interpret
(user-defined) the value of this attribute.
■ Type and dimension: integer-scalar
■ Default: 0
Cluster attributes
Table D-5 lists the cluster attributes.
AdministratorGroups List of operating system user account groups that have administrative privileges on
(user-defined) the cluster. This attribute applies to clusters running in secure mode.
■ Type and dimension: string-keylist
■ Default: ""
AutoClearQ Lists the service groups scheduled to be auto-cleared. It also indicates the time at
(system use only) which the auto-clear for the group will be performed.
AutoStartTimeout If the local cluster cannot communicate with one or more remote clusters, this attribute
(user-defined) specifies the number of seconds the VCS engine waits before initiating the AutoStart
process for an AutoStart global service group.
AutoAddSystemtoCSG Indicates whether newly joined or added systems in the cluster become part of the
(user-defined) SystemList of the ClusterService service group if the service group is configured. The
value 1 (default) indicates that new systems are added to the SystemList of
ClusterService. The value 0 indicates that new systems are not added to the SystemList
of ClusterService.
BackupInterval Time period, in minutes, after which VCS backs up the configuration files if the
(user-defined) configuration is in read-write mode.
The value 0 indicates VCS does not back up configuration files. Set this attribute to
at least 3.
See “Scheduling automatic backups for VCS configuration files” on page 109.
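For example (hypothetical value), to back up the configuration every three hours:

```
# haclus -modify BackupInterval 180
```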
CID VCS populates this attribute once the engine passes an hacf-generated snapshot to
(system defined) it. This happens when VCS is about to go to a RUNNING state from the LOCAL_BUILD
state.
Once VCS receives the snapshot from the engine, it reads the /etc/vx/.uuids/clusuuid
file. VCS uses the file’s contents as the value for the CID attribute. The clusuuid file’s
first line must not be empty. If the file does not exist or is empty, VCS exits gracefully
and throws an error.
A node that joins a cluster in the RUNNING state receives the CID attribute as part
of the REMOTE_BUILD snapshot. Once the node has joined completely, it receives
the snapshot. The node reads the file /etc/vx/.uuids/clusuuid to compare the value
that it received from the snapshot with the value that is present in the file. If the values
do not match or if the file does not exist, the joining node exits gracefully and does not
join the cluster.
See “Configuring and unconfiguring the cluster UUID value” on page 155.
You cannot change the value of this attribute with the haclus -modify command.
ClusterAddress Specifies the cluster’s virtual IP address (used by a remote cluster when connecting
(user-defined) to the local cluster).
■ Type and dimension: string-scalar
■ Default: ""
ClusterOwner This attribute is used for VCS notification. VCS sends notifications to persons
(user-defined) designated in this attribute when an event occurs related to the cluster. Note that
while VCS logs most events, not all events trigger notifications.
Make sure to set the severity level at which you want notifications to be sent to
ClusterOwner or to at least one recipient defined in the SmtpRecipients attribute of
the NotifierMngr agent.
ClusterRecipients This attribute is used for VCS email notification. VCS sends email notification to
(user-defined) persons designated in this attribute when events related to the cluster occur and when
the event's severity level is equal to or greater than the level specified in the attribute.
Make sure to set the severity level at which you want notifications to be sent to
ClusterRecipients or to at least one recipient defined in the SmtpRecipients attribute
of the NotifierMngr agent.
ClusterTime The number of seconds since January 1, 1970. This is defined by the lowest node in
(system use only) running state.
■ Type and dimension: string-scalar
■ Default: Not applicable
CompareRSM Indicates if the VCS engine is to verify that the replicated state machine is consistent.
(system use only) This can be set by running the hadebug command.
■ Type and dimension: integer-scalar
■ Default: 0
ConnectorState Indicates the state of the wide-area connector (wac). If 0, wac is not running. If 1, wac
(system use only) is running and communicating with the VCS engine.
■ Type and dimension: integer-scalar
■ Default: Not applicable
CounterInterval Intervals counted by the attribute GlobalCounter, indicating approximately how often
(user-defined) a broadcast occurs that causes the GlobalCounter attribute to increase.
The default value of the GlobalCounter increment can be modified by changing
CounterInterval. If you increase this attribute to exceed five seconds, consider
increasing the default value of the ShutdownTimeout attribute.
CounterMissAction Specifies the action that must be performed when the GlobalCounter is not updated
(user-defined) for CounterMissTolerance times the CounterInterval. Possible values are LogOnly
and Trigger. If you set CounterMissAction to LogOnly, the system logs the message
in the engine log and syslog. If you set CounterMissAction to Trigger, the system
invokes a trigger whose default action is to collect the comms tar file.
CounterMissTolerance Specifies the time interval that can lapse since the last update of GlobalCounter before
(user-defined) VCS reports an issue. If the GlobalCounter does not update within
CounterMissTolerance times CounterInterval, VCS reports the issue. Depending on
the CounterMissAction value, the appropriate action is performed.
CredRenewFrequency The number of days after which the VCS engine renews its credentials with the
(user-defined) authentication broker. For example, the value 5 indicates that credentials are renewed
every 5 days; the value 0 indicates that credentials are not renewed.
DeleteOnlineResource Defines whether you can delete online resources. Set this value to 1 to enable deletion
(user-defined) of online resources. Set this value to 0 to disable deletion of online resources.
You can override this behavior by using the -force option with the hares -delete
command.
DumpingMembership Indicates that the engine is writing or dumping the configuration to disk.
EnableVMAutoDiscovery Enables or disables auto discovery of virtual machines. By default, auto discovery of
(user-defined) virtual machines is disabled.
■ Type and dimension: integer-scalar
■ Default: 0
EnablePBF Enables or disables priority-based failover. When set to 1 (one), VCS gives priority to
(user-defined) bringing a high-priority service group online, by ensuring that its Load requirement is
met on the system.
EnginePriority The priority in which HAD runs. Generally, a greater priority value indicates higher
(user-defined) scheduling priority. A range of priority values is assigned to each scheduling class.
For more information on the range of priority values, see the operating system
documentation.
EngineShutdown Defines the options for the hastop command. The attribute can assume the following
(user-defined) values:
Enable—Process all hastop commands. This is the default behavior.
DisableClusStop—Do not process the hastop -all command; process all other hastop
commands.
PromptLocal—Prompt for user confirmation before running the hastop -local command;
reject all other hastop commands.
PromptAlways—Prompt for user confirmation before running any hastop command.
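For example, a sketch that blocks cluster-wide stops while still allowing all other hastop commands:

```
# haclus -modify EngineShutdown DisableClusStop
```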
FipsMode Indicates whether FIPS mode is enabled for the cluster. The value depends on the
(system use only) mode of the broker on the system. If FipsMode is set to 1, FIPS mode is enabled. If
FipsMode is set to 0, FIPS mode is disabled.
GlobalCounter This counter increases incrementally by one for each counter interval. It increases
(system use only) when the broadcast is received.
VCS uses the GlobalCounter attribute to measure the time it takes to shut down a
system. By default, the GlobalCounter attribute is updated every five seconds. This
default value, combined with the 600-second default value of the ShutdownTimeout
attribute, means that if a system goes down within 120 increments of GlobalCounter,
it is treated as a fault. Change the value of the CounterInterval attribute to modify the
default value of the GlobalCounter increment.
Guests List of operating system user accounts that have Guest privileges on the cluster.
GuestGroups List of operating system user groups that have Guest privilege on the cluster.
DefaultGuestAccess Indicates whether any authenticated user should have guest access to the cluster by
(user-defined) default. The default guest access can be:
■ 0: Guest access for privileged users only.
■ 1: Guest access for everyone.
HostAvailableMeters Lists the meters that are available for measuring system resources. You cannot
(system use only) configure this attribute in the main.cf file.
■ Type and dimension: string-association
Keys are the names of parameters and values are the names of meter libraries.
■ Default: HostAvailableMeters = { CPU = “libmeterhost_cpu.so”, Mem =
“libmeterhost_mem.so”, Swap = “libmeterhost_swap.so”}
HostMeters Indicates the parameters (CPU, Mem, or Swap) that are currently metered in the
(user-defined) cluster.
■ Type and dimension: string-keylist
■ Default: HostMeters = {“CPU”, “Mem”, “Swap”}
You can configure this attribute in the main.cf file. You cannot modify the value at
run time.
The keys must be one or more of CPU, Mem, or Swap.
LockMemory Controls the locking of VCS engine pages in memory. This attribute has the following
(user-defined) values. Values are case-sensitive:
ALL: Locks all current and future pages.
LogClusterUUID Enables or disables logging of the cluster UUID in each log message. By default, the
(user-defined) cluster UUID is not logged.
■ Type and dimension: boolean-scalar
■ Default: 0
MeterControl Indicates the intervals at which metering and forecasting for the system attribute
(user-defined) AvailableCapacity are done for the keys specified in HostMeters.
■ Type and dimension: integer-association
This attribute includes the following keys:
■ MeterInterval
Frequency in seconds at which metering is done by the HostMonitor agent. The
value for this key can equal or exceed 30. The default value is 120, indicating that
the HostMonitor agent meters available capacity and updates the System attribute
AvailableCapacity every 120 seconds. The HostMonitor agent checks for changes
in the available capacity in every monitoring cycle and, when there is a change,
updates the values in the same monitoring cycle. The MeterInterval value applies
only if Statistics is set to Enabled or MeterHostOnly.
■ ForecastCycle
The number of metering cycles after which forecasting of available capacity is
done. The value for this key can equal or exceed 1. The default value is 3, indicating
that forecasting of available capacity is done after every 3 metering cycles.
Assuming the default MeterInterval value of 120 seconds, forecasting is done after
360 seconds or 6 minutes. The ForecastCycle value applies only if Statistics is set
to Enabled.
You can configure this attribute in main.cf. You cannot modify the value at run time.
The values of MeterInterval and ForecastCycle apply to all keys of HostMeters.
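As a sketch in main.cf (hypothetical cluster name and values), metering every 60 seconds with forecasting every 5 metering cycles (that is, every 300 seconds):

```
cluster clus1 (
    MeterControl = { MeterInterval = 60, ForecastCycle = 5 }
    )
```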
MeterUnit You can configure this attribute in main.cf; if configured in main.cf, it must contain
(user-defined) units for all the keys as specified in HostMeters. You cannot modify the value at run
time.
When Statistics is set to Enabled then service group attribute Load, and the following
system attributes are represented in corresponding units for parameters such as CPU,
Mem, or Swap:
■ AvailableCapacity
■ HostAvailableForecast
■ Capacity
■ ReservedCapacity
The values of keys such as Mem and Swap can be represented in MB or GB, and
CPU can be represented in CPU, MHz or GHz.
MeterWeight Indicates the default meter weight for the service groups in the cluster. You can
(user-defined) configure this attribute in the main.cf file, but you cannot modify the value at run time.
If the attribute is defined in the main.cf file, it must have at least one key defined. The
weight for the key must be in the range of 0 to 10. Only keys from HostAvailableMeters
are allowed in this attribute.
Notifier State—Current state of notifier, such as whether or not it is connected to VCS.
(system use only) Host—The host on which notifier is currently running or was last running. Default =
None
Severity—The severity level of messages queued by VCS for notifier. Values include
Information, Warning, Error, and SevereError. Default = Warning
Queue—The size of the queue for messages queued by VCS for notifier.
OpenExternalCommunicationPort Indicates whether communication over the external communication port for VCS is
(user-defined) allowed or not. By default, the external communication port for VCS is 14141.
■ Type and dimension: string-scalar
■ Valid values: YES, NO
■ Default: YES
■ YES: The external communication port for VCS is open.
■ NO: The external communication port for VCS is not open.
Note: When the external communication port for VCS is not open, RemoteGroup
resources created by the RemoteGroup agent and users created by the hawparsetup
command cannot access VCS.
OperatorGroups List of operating system user groups that have Operator privileges on the cluster.
PanicOnNoMem Indicates the action that you want the VCS engine (HAD) to take if it cannot receive
(user-defined) messages from GAB due to low memory.
■ If the value is 0, VCS exits with warnings.
■ If the value is 1, VCS calls the GAB library routine to panic the system.
■ Default: 0
PreferredFencingPolicy The I/O fencing race policy to determine the surviving subcluster in the event of a
network partition. Valid values are Disabled, System, Group, or Site.
Disabled: Preferred fencing is disabled. The fencing driver favors the subcluster with
the maximum number of nodes during the race for coordination points.
System: The fencing driver gives preference to the system that is more powerful than
others in terms of architecture, number of CPUs, or memory during the race for
coordination points. VCS uses the system-level attribute FencingWeight to calculate
the node weight.
Group: The fencing driver gives preference to the node with higher priority service
groups during the race for coordination points. VCS uses the group-level attribute
Priority to determine the node weight.
Site: The fencing driver gives preference to the node with higher site priority during
the race for coordination points. VCS uses the site-level attribute Preference to
determine the node weight.
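For example, a sketch that makes the fencing driver favor nodes hosting higher-priority service groups:

```
# haclus -modify PreferredFencingPolicy Group
```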
ProcessClass Indicates the scheduling class for processes created by the VCS engine, for example,
(user-defined) triggers.
■ Type and dimension: string-scalar
■ Default = TS
ProcessPriority The priority of processes created by the VCS engine, for example, triggers.
SecInfo Enables creation of secure passwords when the SecInfo attribute is added to the
(user-defined) main.cf file with the security key as the value of the attribute.
■ Type and dimension: string-scalar
■ Default: ""
SecureClus Indicates whether the cluster runs in secure mode. The value 1 indicates the cluster
(user-defined) runs in secure mode. This attribute cannot be modified when VCS is running.
■ Type and dimension: boolean-scalar
■ Default: 0
You can configure a site from Veritas InfoScale Operations Manager. This attribute
will be automatically set to 1 when configured using Veritas InfoScale Operations
Manager. If site information is not configured for some nodes in the cluster, those
nodes are placed under a default site that has the lowest preference.
SourceFile File from which the configuration is read. Do not configure this attribute in main.cf.
(user-defined) Make sure the path exists on all nodes before running a command that configures
this attribute.
Statistics Indicates if statistics gathering is enabled and whether the FailOverPolicy can be set
(user-defined) to BiggestAvailable.
■ Type and dimension: string-scalar
■ Default: Enabled
You cannot modify the value at run time.
Possible values are:
■ Enabled: The HostMonitor agent meters host utilization and forecasts the
available capacity for the systems in the cluster. With this value set,
FailOverPolicy for any service group cannot be set to Load.
■ MeterHostOnly: The HostMonitor agent meters host utilization but it does not
forecast the available capacity for the systems in the cluster. The service group
attribute FailOverPolicy cannot be set to BiggestAvailable.
■ Disabled: The HostMonitor agent is not started. Both metering of host utilization
and forecasting of available capacity are disabled. The service group attribute
FailOverPolicy cannot be set to BiggestAvailable.
Stewards The IP address and hostname of systems running the steward process.
SystemRebootAction Determines whether frozen service groups are ignored on system reboot.
If the SystemRebootAction value is "", VCS tries to take all service groups offline.
Because VCS cannot be gracefully stopped on a node where a frozen service group
is online, applications on the node might get killed.
Note: The SystemRebootAction attribute applies only on system reboot and system
shutdown.
UseFence The value SCSI3 indicates that the cluster uses either disk-based or server-based
(user-defined) I/O fencing. The value NONE indicates it does not use either.
UserNames List of VCS users. The installer uses admin as the default user name.
VCSFeatures Indicates which VCS features are enabled. Possible values are:
1—L3+ is enabled
VCSMode Denotes the mode for which VCS is licensed. Even though VCSMode is an integer
(system use only) attribute, when you query the value with the haclus -value command or the
haclus -display command, it displays as the string UNKNOWN_MODE for value 0
and VCS for value 7.
WACPort The TCP port on which the wac (Wide-Area Connector) process on the local cluster
(user-defined) listens for connections from remote clusters.
■ Type and dimension: integer-scalar
■ Default: 14155
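For example, a non-default connector port can be set in the cluster definition (cluster name and address hypothetical):

```
cluster clus1 (
        ClusterAddress = "10.10.10.10"
        WACPort = 14156
        )
```

The wac process on this cluster then listens on port 14156, and the remote clusters must be configured to connect to that port.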
Heartbeat attributes (for global clusters)
Heartbeat Attribute Definition
Arguments List of arguments to be passed to the agent functions. For the Icmp
agent, this attribute can be the IP address of the remote cluster.
(user-defined)
■ Type and dimension: string-vector
■ Default: ""
AYARetryLimit The maximum number of lost heartbeats before the agent reports that
heartbeat to the cluster is down.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 3
AYATimeout The maximum time (in seconds) that the agent waits for the heartbeat AYA
("Are You Alive") function to return ALIVE or DOWN before the function is canceled.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 30
CleanTimeOut Number of seconds within which the Clean function must complete or
be canceled.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 300 seconds
InitTimeout Number of seconds within which the Initialize function must complete
or be canceled.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 300 seconds
StartTimeout Number of seconds within which the Start function must complete or
be canceled.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 300 seconds
StopTimeout Number of seconds within which the Stop function must complete or
be canceled without stopping the heartbeat.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 300 seconds
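Unlike cluster attributes, these heartbeat attributes are managed with the hahb command; a sketch for the Icmp heartbeat (the values are chosen for illustration only):

```
hahb -display Icmp
hahb -modify Icmp AYARetryLimit 5
hahb -modify Icmp AYATimeout 60
```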
Remote cluster attributes
DeclaredState Specifies the declared state of the remote cluster after its
cluster state is transitioned to FAULTED.
(user-defined)
See “Disaster declaration” on page 627.
Possible values are:
■ Disaster
■ Outage
■ Disconnect
■ Replica
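The declared state is typically set from the command line after the remote cluster transitions to FAULTED; a sketch (the remote cluster name is hypothetical):

```
haclus -declare outage -clus remote_clus
```

Substituting disaster, disconnect, or replica declares the corresponding state.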
VCSFeatures Indicates which VCS features are enabled. Possible values are:
(system use only) 1—L3+ is enabled
VCSMode Denotes the mode for which VCS is licensed. Even though VCSMode is an
(system use only) integer attribute, when you query the value with the haclus -value
command or the haclus -display command, it displays as the string
UNKNOWN_MODE for value 0 and VCS for value 7.
Site attributes
Table D-8 lists the site attributes.
Site Attribute Definition
Preference Value
1 10000
2 1000
3 100
4 10