Veritas Install Admin
Prepared by:
11/26/2017
Page 1
Confidential Strictly for Internal use
1.0 Before you start
Before starting the installation, the admin workstation and the
terminal concentrator must be functional.
1) Set up all the hardware (CPUs, memory, and so on) for the Admin
Workstation. Follow the safety rules for static-sensitive devices.
2) The private network requires two cross-over cables. Now connect one
private line to qfe0 and the other to qfe1.
3) For the public network, connect one cable to hme0 and the other to ce1.
4) Connect the public Ethernet ports to the switch/hub provided for
this purpose.
5) Install the latest recommended patches, including kernel patches,
for the OS. Enable remote root login by editing /etc/default/login.
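On Solaris, remote root login is typically enabled by commenting out the CONSOLE entry in /etc/default/login (a sketch; re-enable the line after installation if your security policy requires it):

```shell
# In /etc/default/login, comment out the CONSOLE line so that root
# may log in from the network, not only from the console:
#CONSOLE=/dev/console
```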
This completes the Admin Workstation configuration. The next few steps
configure the terminal concentrator (TC).
Connect one end of the TC cable (DB25) to the serial port of the Admin
Workstation and the other end (RJ45) to the TC.
Power on the TC.
On the Ultra 5 workstation, type:
#tip 9600 /dev/ttya
Press the test button on the TC; a monitor: prompt appears. Make the
entries as given below.
monitor: addr
Enter IP: 10.1.200.2 (IP address for the TC)
Subnet mask: 255.255.255.0
Load host internet address: 0.0.0.0 (not applicable)
Broadcast address: 0.0.0.0
Select type of IP encapsulation: ethernet
Load Broadcast [N]: (press Enter)
monitor: seq
Enter interface sequence: self
monitor: image
Image name: (press Enter)
TFTP Load Directory: (press Enter)
TFTP Dump path/filename: (press Enter)
monitor: ~ (to exit the monitor prompt)
Power the TC off and on again.
On the Admin Workstation:
#telnet 10.1.200.2 (TC)
Enter the following as shown below:
Enter Annex port: cli (type cli to enter the menu)
Annex: su
Password: same as the IP address, i.e., 10.1.200.2
annex # admin
admin: set port = all mode slave
admin: set port = all type dial-in
admin: set port = all imask_7bits y
admin: set port = all input_flow_control start/stop
admin: set port = all output_flow_control none
admin: reset annex all
admin: reset 1-8
admin: quit
annex# boot (reboots the TC; configuration of the TC is now complete)
Alternatively, specify the required port number after issuing
#telnet TC (telnet TC presents the TC prompt asking for the port to
which it should establish a connection).
Power on both the cluster nodes, and access their console by typing
#telnet TC <specific_port_no> on the AWS terminal. The system
controller then runs a POST of the system boards, other controller
boards, and I/O boards, and takes you to the OBP prompt (ok>).
2. Connect the VCS private Ethernet controllers on each system. Use
cross-over Ethernet cables (supported only between two systems), or
independent hubs, for each VCS communication network. Ensure hubs are
powered from separate sources. On each system, use two independent
network cards to provide redundancy.
During the process of setting up heartbeat connections, note that a
chance of data corruption exists if a failure removes all
communication between the systems yet leaves the systems running and
capable of accessing shared storage.
[Figure: Private network setups for a two-node cluster and a four-node
cluster, showing the public networks, private networks, and hubs]
3. Configure the Ethernet devices used for the private network so that
the auto-negotiation protocol is not used. This helps ensure a more
stable configuration
with cross-over cables.
You can do this in one of two ways: by editing the /etc/system file to
disable auto-negotiation on all Ethernet devices system-wide, or by
creating a qfe.conf file in the /kernel/drv directory to disable
auto-negotiation for the individual devices used for the private
network. Refer to the Sun Ethernet driver product documentation for
information on these methods of configuring device driver parameters.
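As a sketch of the system-wide approach, auto-negotiation on the qfe driver can typically be disabled with entries like the following in /etc/system (the parameter names are assumptions based on the qfe driver; verify them against the Sun Ethernet driver documentation for your driver version):

```shell
# /etc/system fragment: force 100 Mbps full duplex and disable
# auto-negotiation on all qfe interfaces
set qfe:adv_autoneg_cap=0
set qfe:adv_100fdx_cap=1
set qfe:adv_100hdx_cap=0
set qfe:adv_10fdx_cap=0
set qfe:adv_10hdx_cap=0
```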
LLT uses its own protocol and does not use TCP/IP. Therefore, to
ensure that the private network connections are used only for LLT
communication and not for TCP/IP traffic, unplumb and unconfigure any
temporary IP addresses after testing.
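For example, temporary test addresses on the private links can be removed as follows (a sketch; assumes the qfe0 and qfe1 links used above):

```shell
# Unplumb the temporary test addresses from the private interfaces
ifconfig qfe0 down unplumb
ifconfig qfe1 down unplumb
# Remove the hostname files, if any, so the addresses are not
# re-plumbed at the next boot
rm -f /etc/hostname.qfe0 /etc/hostname.qfe1
```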
Sun SPARC systems provide the following console-abort sequences that enable
you to halt
and continue the processor:
- press L1-A or STOP-A on the keyboard, or
- press BREAK on the serial console input device.
Each sequence can then be followed by a response of go at the ok
prompt to enable the system to continue.
VCS does not support continuing operations after the processor has
been stopped by the abort sequence, because data corruption may
result. Specifically, when a system is halted with the abort sequence
it stops producing heartbeats. The other systems in the cluster then
consider the system failed and take over its services. If the system
is later resumed with go, it continues writing to shared storage as
before, even though its applications have been restarted on other
systems.
In Solaris 2.6, Sun introduced support for disabling the abort
sequence. We recommend disabling the keyboard-abort sequence on
systems running Solaris 2.6 or greater. To do this:
1. Add the following line to the /etc/default/kbd file (create the
file if it does not exist):
KEYBOARD_ABORT=disable
2. Reboot.
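The two steps can be carried out from the shell as follows (a sketch; it assumes the KEYBOARD_ABORT line is not already present in the file):

```shell
# Disable the keyboard abort sequence; takes effect after the reboot
echo 'KEYBOARD_ABORT=disable' >> /etc/default/kbd
reboot
```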
VCS is a licensed software product. The installvcs utility prompts you
for a license key for each system. You cannot use your VERITAS
software product until you have completed the licensing process. Use
either method described in the following two sections to obtain a
valid license key.
Follow the appropriate instructions on the vLicense web site,
depending on whether you are a new or existing vLicense user, to
obtain your license key:
1. Access the web site at http://vlicense.veritas.com.
2. Log in or create a new login, as necessary.
3. Follow the instructions on the pages as they are displayed.
When you receive the generated license key, you can proceed with
installation.
1. Log in as root on a system connected by the network to the systems
where VCS is to be installed. The system from which VCS is installed
need not be part of the cluster.
2. Insert the CD with the VCS software into a drive connected to the
system.
- If you are running Solaris volume-management software, the software
automatically mounts the CD as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
- If you are not running Solaris volume-management software, you must
mount the CD manually. For example:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
where, in this example, /dev/dsk/c0t6d0s2 is the device name of the CD
drive. Then:
# cd /cdrom/cluster_server
5. The utility verifies that the systems you specify can communicate
via ssh or rsh. If ssh binaries are found, the program confirms that
ssh is set up to operate without requests for passwords or
passphrases.
Checking for ssh on north ........................ .. not found
Checking OS version on north ........................ SunOS 5.8
Verifying communication with south ............ ping successful
Attempting rsh with south ...................... rsh successful
Checking OS version on south ........................ SunOS 5.8
Creating /tmp subdirectory on south.. /tmp subdirectory created
Using /usr/bin/rsh and /usr/bin/rcp to communicate with south
Communication check completed successfully
Licensing VCS
6. The installation utility verifies the license status of each
system. If a VCS license is found on the system, you are prompted
about whether you want to use that license or enter a new one.
If no license is found, or you would like to enter a new license,
enter a license key
when prompted.
VCS licensing verification:
Checking INBLRPPS140 ........................ No license key found
Enter the license key for INBLRPPS140: XXXX-XXXX-XXXX-XXXX-XXXX-XXX
Valid VCS Single Node Permanent key entered for INBLRPPS140
Registering key XXXX-XXXX-XXXX-XXXX-XXXX-XXX on INBLRPPS140 .. Done
Checking INBLRPPS141 ........................ No license key found
Enter the license key for INBLRPPS141: XXXX-XXXX-XXXX-XXXX-XXXX-XXX
Valid VCS Single Node Permanent key entered for INBLRPPS141
Registering key XXXX-XXXX-XXXX-XXXX-XXXX-XXX on INBLRPPS141 .. Done
Proceed when VCS licensing completes successfully.
Starting Installation
8. When you are prompted, you can have the installation of the software begin.
Are you ready to start the Cluster installation now? (Y)
9. For each system, the installation program checks whether any of the
packages to be installed are already present.
Checking current installation on INBLRPPS140:
Checking VRTSvcsw ..........................Not installed
Checking VRTSweb ...........................Not installed
10. For each system, the installation utility checks for the necessary
file system space. If the required space is not available, the
installation terminates. The utility checks for the existence of
separate volumes or partitions for /opt and /usr. If they do not
exist, or if the space in them is insufficient, the utility checks for
the space in /, using the space there if it is available.
Checking /opt ................ No /opt volume or partition
Checking /usr ................ No /usr volume or partition
Checking /var ................... required space available
Checking / ...................... required space available
File system verification completed successfully
11. For each system, the installation utility checks that VCS
processes are not running on the systems. Any running VCS processes
are stopped or killed by installvcs.
Note: If you are running VxFS version 3.4 and have not installed VxFS
3.4 Patch 2, GAB or LLT may not stop properly and you may have to
reboot your system to install VCS successfully.
Stopping VCS processes on INBLRPPS140:
Checking VCS .................................... not running
Checking hashadow ............................... not running
Checking CmdServer .............................. not running
Checking Web GUI processes....................... not running
Checking notifier processes ..................... not running
Checking GAB .................................... not running
Checking LLT .................................... not running
Stopping VCS processes on INBLRPPS141:
Checking VCS .................................... not running
Checking hashadow ............................... not running
Checking CmdServer .............................. not running
Checking Web GUI processes....................... not running
Checking notifier processes ..................... not running
Checking GAB .................................... not running
Checking LLT .................................... not running
VCS processes are stopped
13. The installation program prompts you for the cluster name and identification:
Enter the unique Cluster Name: vcs_cluster2
Enter the unique Cluster ID number between 0-255: 7
14. After you enter the cluster ID number, the program discovers and
lists all NICs on the first system:
Discovering NICs on north: ..... discovered hme0 qfe0 qfe1
qfe2 qfe3
Enter the NIC for the first private network heartbeat link on
north: (hme0 qfe0 qfe1 qfe2 qfe3) qfe0
Enter the NIC for the second private network heartbeat link on
north: (hme0 qfe1 qfe2 qfe3) qfe1
Are you using the same NICs for private heartbeat links on all
systems? (Y)
If you answer N, the program prompts for the NICs of each system.
15. The installation program asks you to verify the cluster information:
Cluster information verification:
Cluster Name: vcs_cluster2
Cluster ID Number: 7
Private Network Heartbeat Links for north: link1=qfe0
link2=qfe1
Private Network Heartbeat Links for south: link1=qfe0
link2=qfe1
Is this information correct? (Y)
If you enter N, the program returns to the screen describing the installation
requirements (step 12).
Installing VCS
26. After you have verified that the information you have entered is correct, the
installation program begins installing the packages on the first system:
Installing VCS on INBLRPPS140:
Installing VRTSvlic package ........................... Done
Installing VRTSperl package ........................... Done
Installing VRTSllt package ............................ Done
Installing VRTSgab package ............................ Done
Installing VRTSvcs package ............................ Done
Installing VRTSvcsmg package .......................... Done
Installing VRTSvcsag package .......................... Done
Installing VRTSvcsdc package .......................... Done
Installing VRTSvcsmn package .......................... Done
Installing VRTSweb package ............................ Done
Installing VRTSvcsw package ........................... Done
The same packages are installed on each machine in the cluster:
Installing VCS on INBLRPPS141:
Copying VRTSvlic binaries ............................. Done
Installing VRTSvlic package ........................... Done
Copying VRTSperl binaries ............................. Done
Installing VRTSperl package ........................... Done
.
.
Package installation completed successfully
Configuring VCS
27. The installation program continues by creating configuration files and
copying them
to each system:
Configuring VCS ...................................... Done
Copying VCS configuration files to north.............. Done
Copying VCS configuration files to south.............. Done
Configuration files copied successfully
Starting VCS
28. You can now start VCS and its components on each system:
Do you want to start the cluster components now? (Y)
Starting VCS on INBLRPPS140
Starting LLT ...................................... Started
Starting GAB ...................................... Started
Starting VCS ...................................... Started
Starting VCS on INBLRPPS141
Starting LLT ...................................... Started
Starting GAB ...................................... Started
Starting VCS ...................................... Started
VCS Startup completed successfully
29. When VCS installation completes, the installation program reports on:
- The backup configuration files named with the extension init.name_of_cluster.
For example, the file /etc/llttab in the installation example has a backup file
named:
/etc/llttab.init.vcs_cluster2
- The URL and the login information for Cluster Manager (Web Console), if you
chose to configure it. Typically this resembles:
http://10.180.88.199:8181/vcs
You can access the Web Console using the User Name: admin and the
Password: password. Use the /opt/VRTSvcs/bin/hauser command to add
new users.
- The location of a VCS installation report, which is named
/var/VRTSvcs/installvcsReport.name_of_cluster. For example:
/var/VRTSvcs/installvcsReport.vcs_cluster2
- Additional instructions which may be necessary to complete startup or
installation on other systems.
When the installation program exits, it reports:
VCS installation completed successfully
Once the installation of Veritas Cluster Server is complete, the
service groups are created as per the requirement, as given below.
The utility halic installs a new permanent license, or updates a demo
license, while HAD is running. You must have root privileges to use
this utility. This utility must be run on each system in the cluster;
it cannot install or update a license on remote nodes.
The variable key represents the license key to be installed on the
local system.
Note The utility halic must be run on each system in the cluster.
configuration is valid, and if no other system is running VCS, it
builds its state from the local configuration file and enters the
RUNNING state.
Note that -stale and -force are optional. The option -stale instructs
the engine to treat the local configuration as stale even if it is
valid. The option -force instructs the engine to treat a stale, but
otherwise valid, local configuration as valid.
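These options are given to hastart; for example (a sketch):

```shell
# Treat the local configuration as stale even if it is valid
hastart -stale
# Treat a stale but otherwise valid local configuration as valid
hastart -force
```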
If all systems are in ADMIN_WAIT, enter the following command from any
system in the cluster to force VCS to use the configuration file from
the system specified by the variable system:
# hasys -force system
When VCS is stopped on a system without using the -force option to hastop, it
enters
the LEAVING state, and waits for all groups to go offline on the
system. Use the output of the command hasys -display system to verify
that the values of the SysState and OnGrpCnt attributes are non-zero.
VCS continues to wait for the service groups to go offline before it
shuts down. See Troubleshooting Resources on page 349 for more
information.
When VCS is stopped by options other than -force on a system with
online service groups, the groups running on the system are taken
offline and remain offline. This is indicated by VCS setting the
attribute IntentOnline to 0. Using the option -force enables service
groups to continue running while HAD is brought down and restarted
(IntentOnline remains unchanged).
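A sketch of the hastop variants discussed above:

```shell
# Stop VCS on the local system, taking its service groups offline
hastop -local
# Stop VCS but leave applications running, so HAD can be restarted
# without disturbing the service groups (IntentOnline is unchanged)
hastop -local -force
# Stop VCS on all systems in the cluster
hastop -all
```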
VCS enables you to query various cluster objects, including resources,
service groups, systems, resource types, agents, and clusters. You may
enter query commands from any system in the cluster. Commands that
display information on the VCS configuration or system states can be
executed by all users; you do not need root privileges.
_ For a list of a service group's dependencies
# hagrp -dep [service_group]
_ For information about a resource type
# hatype -display [resource_type]
Querying Systems
_ For a list of systems in the cluster
# hasys -list
Querying Clusters
_ For the value of a specific cluster attribute
# haclus -value attribute
Querying Status
_ For the status of all service groups in the cluster, including resources
# hastatus
_ To display the status in tabular format of all systems and a
specific group and its resources (or all service groups if no group is
specified)
# hastatus [-sound] [-group service_group]
The -sound option enables a bell to ring each time a resource faults.
_ To start a service group on a system (System 1) and bring online only the
resources
already online on another system (System 2)
# hagrp -online service_group -sys system -checkpartial
other_system
If the service group does not have resources online on the other
system, the service group is brought online on the original system and
the -checkpartial option is ignored.
Note that the -checkpartial option is used by the Preonline trigger
during failover. When a service group configured with PreOnline = 1 on
system 1 fails over to another system (system 2), the only resources
brought online on system 2 are those that were previously online on
system 1 prior to failover.
_ To stop a service group only if all resources are probed on the system
# hagrp -offline [-ifprobed] service_group -sys system
_ To freeze a service group (disable onlining, offlining, and failover)
# hagrp -freeze service_group [-persistent]
The option -persistent enables the freeze to be remembered when the
cluster is rebooted.
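The counterpart operation is hagrp -unfreeze; for example (a sketch using a hypothetical group name):

```shell
# Freeze the group across reboots
hagrp -freeze groupx -persistent
# Later, unfreeze it (use -persistent if it was frozen persistently)
hagrp -unfreeze groupx -persistent
```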
Administering Resources
_ To bring a resource online
# hares -online resource -sys system
_ To take a resource offline and propagate the command to its children
# hares -offprop [-ignoreparent] resource -sys system
Similar to the service group -offline command, this command signals
that its children should be taken offline. This action continues to
the leaves of the resource's subtree. The option -ignoreparent enables
a resource to be taken offline even if its parent resources in the
service group are online. This option does not work for resources in a
child group with firm dependencies when the parent group is online.
When a resource with online parent resources is taken offline using
the -ignoreparent option, the offlined resource is brought online
again if its service group fails over or is switched to another
system.
_ To clear a resource
Initiate a state change from RESOURCE_FAULTED to RESOURCE_OFFLINE:
# hares -clear resource [-sys system]
Clearing a resource automatically initiates the online process
previously blocked while waiting for the resource to become clear. If
system is not specified, the fault is cleared on each system in the
service group's SystemList attribute. (See also the service group
command to clear faulted, non-persistent resources, hagrp -clear.)
This command clears the resource's parents automatically. Persistent
resources whose static attribute Operations is defined as None cannot
be cleared with this command and must be physically attended to, such
as by replacing a raw disk. The agent then updates the status
automatically.
Administering Systems
_ To force a system to start while in ADMIN_WAIT
# hasys -force system
This command overwrites the configuration on systems running in the
cluster. Before using it, verify that the current VCS configuration is
valid.
Administering Clusters
_ To modify a cluster attribute
# haclus -modify attribute value
Basic Configuration Operations
Commands listed in the following sections permanently affect the
configuration of the cluster. If the cluster is brought down with the
command hastop -all, or the configuration is made read-only, the
main.cf file and other configuration files written to disk reflect the
updates.
list is an association of names and integers that represent priority values.)
You may also define a service group as parallel. To set the Parallel
attribute to 1, type the following command. (Note that the default for
this attribute is 0, which designates the service group as a failover
group.)
# hagrp -modify groupx Parallel 1
This attribute cannot be modified if resources have already been added
to the service group.
You can modify the attributes SystemList, AutoStartList, and Parallel only by
using the
command hagrp -modify. You cannot modify attributes created by the system,
such as
the state of the service group. If you are modifying a service group from the
command
line, the VCS server immediately updates the configuration of the
group's resources accordingly.
For example, suppose you originally defined the SystemList of service
group groupx as SystemA and SystemB. Then, after the cluster was
brought up, you added a new system to the list:
# hagrp -modify groupx SystemList -add SystemC 3
The SystemList for groupx changes to SystemA, SystemB, SystemC, and an
entry for SystemC is created in the group's resource attributes, which
are stored on a per-system basis. These attributes include information
regarding the state of the resource on a particular system.
Next, suppose you made the following modification:
# hagrp -modify groupx SystemList SystemA 1 SystemC 3 SystemD 4
Using the option -modify without other options erases the existing
data and replaces it with the new data. Therefore, after making the
change above, the new SystemList becomes SystemA=1, SystemC=3,
SystemD=4. SystemB is deleted from the system list, and each entry for
SystemB in local attributes is removed.
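To avoid dropping a system unintentionally, it can help to display the current value and use keyword edits instead of a full replacement (a sketch; the group and system names are the examples above):

```shell
# Inspect the current SystemList before changing it
hagrp -value groupx SystemList
# Add or remove individual entries rather than replacing the list
hagrp -modify groupx SystemList -add SystemD 4
hagrp -modify groupx SystemList -delete SystemB
```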
hasys -add, the system is not added, but adding other valid systems proceeds
normally.
Adding Resources
_ To add a resource
# hares -add resource resource_type service_group
This command creates a new resource, resource, which must have a
unique name throughout the cluster, regardless of where it resides
physically or in which service group it is placed. The resource type
is resource_type, which must be defined in the configuration language.
The resource belongs to the group service_group.
When new resources are created, all non-static attributes of the
resource's type, plus their default values, are copied to the new
resource. Three attributes are also created by the system and added to
the resource:
_ Critical (default = 1). If the resource or any of its children
faults while online, the entire service group is marked faulted and
failover occurs.
_ AutoStart (default = 1). If the resource is set to AutoStart, it is
brought online in response to a service group command. All resources
designated as AutoStart=1 must be online for the service group to be
considered online. (This attribute is unrelated to the AutoStart
attributes for service groups.)
_ Enabled. If the resource is set to Enabled, the agent for the
resource's type manages the resource. The default is 1 for resources
defined in the configuration file main.cf, and 0 for resources added
on the command line.
Note: Adding resources on the command line requires several steps, and
the agent must be prevented from managing the resource until the steps
are completed. For resources defined in the configuration file, the
steps are completed before the agent is started.
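As a sketch of that sequence, using a hypothetical IP resource (the resource name, group name, and attribute values are illustrative):

```shell
# Add the resource; Enabled defaults to 0 on the command line, so
# the agent does not manage it yet
hares -add IP1 IP groupx
# Set its attributes while it is still disabled
hares -modify IP1 Device hme0
hares -modify IP1 Address "10.1.200.10"
# Enable the resource so the agent begins managing it
hares -modify IP1 Enabled 1
```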
Modifying Attributes of a New Resource
_ To modify a new resource
# hares -modify resource attribute value
The variable value depends on the type of attribute being created.
Linking Resources
_ To specify a dependency relationship, or link, between two resources
# hares -link parent_resource child_resource
The variable parent_resource depends on child_resource being online
before going online itself. Conversely, parent_resource must take
itself offline before child_resource goes offline.
For example, before an IP address can be configured, its associated NIC must be
available, so for resources IP1 of type IP and NIC1 of type NIC, specify the
dependency as:
# hares -link IP1 NIC1
Deleting and Unlinking Service Groups and Resources
_ To delete a service group
# hagrp -delete service_group
_ To delete a resource
# hares -delete resource
Note that deleting a resource won't take offline the object being
monitored by the resource. The object remains online, outside the
control and monitoring of VCS.
_ To unlink resources
# hares -unlink parent_resource child_resource
Note: You can unlink service groups and resources at any time. You
cannot delete a service group until all of its resources are deleted.
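Putting these rules together, tearing down a hypothetical group groupx containing resources IP1 and NIC1 might look like (a sketch):

```shell
# Unlink the dependency, delete the resources, then delete the group
hares -unlink IP1 NIC1
hares -delete IP1
hares -delete NIC1
hagrp -delete groupx
```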
_ To add a resource attribute
# haattr -add resource_type attribute [value]
[dimension][default ...]
The variable value is -string (the default) or -integer.
The variable dimension is -scalar (the default), -keylist,
-association, or -vector.
The variable default is the default value of the attribute and must be
compatible with the value and dimension. Note that this may include
more than one item, as indicated by the ellipsis (...).
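For example, a hypothetical string-valued scalar attribute could be added to a resource type as follows (the type name, attribute name, and default value are illustrative):

```shell
# Add a scalar string attribute MyPath with a default value
haattr -add FileOnOff MyPath -string -scalar /tmp/placeholder
```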