Pivot3 VSTAC Setup Guide
August 7, 2015
vSTAC Appliance Setup Guide
Purpose
This edition applies to Version 6.5.x and above of the Pivot3 vSTAC® Operating System and to any
subsequent releases until otherwise indicated in new editions.
This document contains information proprietary to Pivot3, Inc. and shall not be reproduced or transferred to
other documents or used for any purpose other than that for which it was obtained without the express
written consent of Pivot3, Inc.
How to Contact Pivot3
Pivot3, Inc. General information: [email protected]
Table of Contents
Install Pivot3 vSTAC Appliances
Configure ESXi for VMware Management Access
Set up the vSTAC Management Station
Set the iSCSI IP Addresses for ESXi Hosts
Configure NIC Ports for Pivot3 Features
Create a Pivot3 vSTAC Protection Group
Create the VMware Datastore
Pivot3 Proactive Diagnostics (PPD)
Quick Diagnostics
Maintenance Mode
Upgrade vSTAC OS on Protection Groups
Management through SNMP
Shutdown Procedure
Appendix A Using vSMS’s Command Line Interface
Appendix B vSTAC Manager Status Icons and Definitions
Appendix C Configuring Pivot3 vSMS for IPv6 Access
Appendix D Configure NIC Ports for Pivot3 Features in v6.5
NOTE: 10GbE is required for SAN switch connections. vSTAC versions through 6.0 may leverage 1GbE in
small to medium deployments; however, vSTAC versions 6.5 or later require 10GbE.
NOTE: Pivot3 strongly recommends using two physically separate switches for the SAN Networks, each
dedicated to a different subnet. This protects the vSTAC Protection Group from a single switch failure
and provides the most predictable performance.
Connect the vSTAC Appliance to a KVM or connect a keyboard and monitor. The monitor shows the
VMware ESXi direct console.
Press <F2> to login to the ESXi console. The default user name is root and the default password is
vSphereP3. Press <Enter>.
ESXi displays the System Customization screen. (The password can be changed at any convenient time. Record the Administrator credentials when they are set: the user name and password can be reset, but lost or forgotten credentials cannot be recovered.)
Use the up and down arrow keys on the keyboard to highlight Configure Management Network. Press
<Enter>.
On the Configure Management Network screen, use the up and down arrow keys on the keyboard to
highlight IP Configuration. Press <Enter>.
If there is a DHCP server on the VMware Management Network, select the top option in the display
dialog, Use dynamic IP address & network configuration. The values for IP Address, Subnet Mask, and
Default Gateway will be set dynamically by the DHCP server. Make a note of the IP address for this host.
NOTE: If using Use dynamic IP address & network configuration, ensure that the IP address of the ESXi host
does not change on reboot. If the host IP changes on reboot, VMware management tools will not
be able to autoconnect to the ESXi host.
If there is not a DHCP server on the VMware Management Network, the values for IP Address, Subnet
Mask, and Default Gateway must all be manually entered. Select the second option on the display
console, Set static IP address and network configuration, and complete the manual entry of values for
IP Address, Subnet Mask, and Default Gateway. Make a note of this host’s IP address. Once completed,
press <Enter> to return to the Configure Management Network screen.
OPTIONAL: If the VMware Management Network is using DNS, on the Configure Management Network
screen, select DNS Configuration. Press <Enter>.
If Obtain DNS server addresses and a hostname automatically is selected, the settings are completed automatically; simply press <Enter> to move through this screen. The other option, Use the following DNS server addresses and hostname, requires entering the DNS server addresses manually, along with a hostname that is unique on the network. Once complete, press
<Enter> to return to the Configure Management Network screen.
NOTE: In order for vCenter to communicate with the host, the hostname must be resolvable on the network from vCenter, the ESXi hosts in the cluster, the client PC, and any 3rd-party tool that references the host by hostname.
Press <Esc> to return to the System Customization screen. If any changes were made to the
Management Network settings, a confirmation prompt will appear. Press <Y> to apply the changes and
restart the management network.
The vSwitch1 Properties dialog is displayed. On the Ports tab, select the entry for SAN iSCSI VMK 0 and
then click the Edit… button.
The SAN iSCSI VMK 0 Properties dialog is displayed. Select the IP Settings tab. The default setting is to obtain IP settings automatically.
To manually enter this information, click the Use the following IP settings: radio button and enter a
valid IP address and subnet mask that are useable for SAN Network 0.
WARNING: Do not edit the Default Gateway here. The iSCSI VMK will not need a unique value.
Click the OK button to exit the SAN iSCSI VMK 0 Properties dialog. Select the Close button to exit the
vSwitch1 Properties dialog.
Repeat Steps 1-9 for vSwitch2. Select SAN iSCSI VMK 1 in Step 6 and use valid IP settings for SAN
Network 1 in Step 8.
Repeat Steps 1-9 for Management NIC in vSwitch0. Select Management NIC in Step 6 and use valid IP
settings for Management NIC in Step 8.
IPv6 settings can be left as default or configured as desired for the local network.
Click the radio button next to Specify Static IP Addresses and enter Subnet Mask and IP Address.
Click Apply.
Alternatively, if there is a trusted DHCP server on the network, choose Obtain DHCP IP Address.
Click Apply.
While still under the Physical tab of the vSTAC Manager’s navigation pane, click on any desired
unassigned Appliance.
Click Create vSTAC Protection Group in the Quick Links section. (Alternatively this can be done by right-
clicking an available Appliance on the Physical tab of the navigation pane.)
Choose the desired member Appliances for the Protection Group from the available options on the next
screen. vSTAC Manager will offer as many unassigned Appliances as are available and manageable with
the current Administrator’s credentials.
NOTE: Protection Groups cannot be set up with two Appliances.
In vSTAC Manager, highlight the desired Protection Group from the Physical tab view.
NOTE: If the login credentials for any ESXi hypervisor are invalid, vSTAC Manager will report the Pivot3 VM
Failover status as Access Denied.
NOTE: Only enter vCenter credentials if vCenter is expected to be used; the checkbox must be selected in
order to enable this option. De-selecting this checkbox removes these credentials, and may affect
system functionality.
For vSTAC Manager to automatically generate IP addresses, enter Subnet Mask data and Default
Gateway if applicable. The option Auto Generate IP Addresses will activate in the Specify IP Addresses
dialog box. Click this link.
Another dialog box will open. Enter values in the Start Address box for each subnet. These values will be
assigned to the associated NICs for the first vSTAC Appliance. vSTAC Manager then increments the Host
portion of the Start Address for each subnet and sequentially assigns IP addresses to the remaining
Appliances.
Click Apply.
CAUTION: Automatic generation of IP addresses should be used ONLY when there is a contiguous block of
available IP addresses across the Protection Group. Ensure that all IP addresses are currently
unused on the subnet. Contact a network administrator for assistance.
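The increment-and-assign behavior described above can be sketched in Python using the standard ipaddress module. This is a hypothetical illustration of the logic, not Pivot3 code:

```python
import ipaddress

def auto_generate_ips(start_address: str, appliance_count: int) -> list:
    """Assign sequential host addresses starting at start_address,
    one per Appliance, mirroring the Auto Generate IP Addresses logic."""
    start = ipaddress.IPv4Address(start_address)
    return [str(start + i) for i in range(appliance_count)]

# Example: a 3-Appliance Protection Group on one SAN subnet
print(auto_generate_ips("10.3.15.145", 3))
# ['10.3.15.145', '10.3.15.146', '10.3.15.147']
```

As the CAUTION notes, this only works safely when the whole block of addresses is unused and contiguous; real tooling would also need to verify the generated addresses stay within the subnet.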
Once all IP addresses have been entered or generated, click Finish or Next as applicable.
Verify Administrator credentials (user name and password) if applicable. vSTAC Manager will begin the
Protection Group creation process. This may take several minutes to complete; the time varies
depending on the number of Appliances.
vSTAC Manager will display the Protection Group once it has been completed.
Best Practices: Using vSTAC Manager Software
Recommended best practices:
• If using Administrator login credentials, configure a second Protection Group Administrator as a backup.
After a Protection Group is created, use vSTAC Manager to set up an additional Protection Group
Administrator. Select Administrators from the Configuration menu to add a new administrator.
• When using any vSTAC Manager Wizard, to understand why the Next or Finish button is not enabled,
mouse over the button to see a pop-up explanation.
First, create the logical volume name (called MyVolume in the example below). The name must be unique within the Protection Group and consist of 1-15 alphanumeric characters.
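The naming rule stated above can be sketched as a small validator in Python. This is a hypothetical illustration of the rule, not Pivot3 code:

```python
import re

# Hypothetical validator: a volume name must be 1-15 alphanumeric
# characters and unique within the Protection Group.
def is_valid_volume_name(name, existing_names):
    return bool(re.fullmatch(r"[A-Za-z0-9]{1,15}", name)) and name not in existing_names

print(is_valid_volume_name("MyVolume", set()))         # True
print(is_valid_volume_name("My Volume", set()))        # False: contains a space
print(is_valid_volume_name("MyVolume", {"MyVolume"}))  # False: not unique
```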
*- RAID 1e and 5e are the only supported RAID levels available on blade servers.
Multi-Appliance Protection Group RAID Levels (supported in Maintenance Mode)
RAID 6e: Protects against up to 3 simultaneous disk failures, or 1 disk and 1 Appliance failure. Enhanced striping with disk parity. This is a valid choice for Protection Groups utilizing vSTAC Watch or vSTAC Data Appliances with three or more Appliances. The “e” indicates enhanced.
RAID 6p: Protects against up to 3 simultaneous disk failures, or 1 disk and 1 Appliance failure. Enhanced striping with dual disk parity. RAID 6p is a tradeoff between 1p and 6e, providing greater random write performance than 6e along with better capacity efficiency than 1p.
RAID 6x: Protects against up to 5 simultaneous disk failures, or 2 disks and 1 Appliance failure. Enhanced striping with dual disk parity. This is a valid choice for Protection Groups utilizing vSTAC Watch or vSTAC Data Appliances with three or more Appliances. The “x” indicates expanded RAID 6, since data is also protected against five simultaneous disk failures, or the failure of two drives and an entire Appliance.
The minimum capacity for a logical volume is 1GB, and capacity must be specified in 1GB increments. The maximum usable capacity is displayed under the Capacity text field and is updated whenever the RAID Level is changed. The Next button will be enabled or disabled based on remaining capacity.
NOTE: To dynamically expand capacity or change the RAID level of a logical volume, these options are
available under Quick Links when the volume is selected from the Logical tab view. Access Control for
a volume may also be modified from this area.
• If desired, enable CHAP (Challenge Handshake Authentication Protocol) authentication. CHAP is an
optional security mechanism that allows network entities to perform additional authentication. If this
box is checked, iSCSI initiators must provide the correct CHAP secret when accessing the logical volume.
If this box is not checked, no CHAP secret will be required.
After entering the name and settings, click Next.
The second step is to define the access control for the volume. This step allows the option of specifying
the initial Host Identifier and its access rights. Additionally, set its CHAP secret value if CHAP is being
used.
NOTE: The Host Identifier in vSMS is not case sensitive. All iSCSI names are converted to lowercase
internally. Therefore APPSERVER, appserver, and ApPsErVeR are all considered to be the same name.
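The case-insensitive comparison described in the note can be demonstrated with a short Python sketch (illustrative only, not Pivot3 code):

```python
# vSMS compares Host Identifiers case-insensitively by lowercasing them,
# so differently cased spellings all refer to the same host.
def same_host_identifier(a, b):
    return a.lower() == b.lower()

print(same_host_identifier("APPSERVER", "appserver"))  # True
print(same_host_identifier("APPSERVER", "ApPsErVeR"))  # True
```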
In the Host Identifier field, enter a valid identifier value; either a unique iSCSI name or the iSCSI Alias if
already configured. This value is required in a later step to configure the iSCSI initiator on each ESXi host
of the vSTAC Appliances in a Protection Group.
Next, ensure the Access field is set to Read/Write. Finally, set the CHAP secret value if CHAP is enabled.
Click Next.
NOTE: If applicable, this step is the only time that the CHAP Secret will be visible. Save this value for future
reference. The CHAP secret can be changed or deleted later.
The final step to create a logical volume is to confirm its settings. As shown in this example, the logical
volume’s capacity is 1.953 TB (3.455 TB total); additional storage will be allocated from the Protection
Group for RAID protection parity and sparing. Check the information in the Confirmation dialog, and if
editing is needed click Back and make corrections. If no editing is needed, click Finish to continue.
Storage capacity from the vSTAC Protection Group has been allocated to the logical volume and is now
ready to be configured as a VMware datastore.
The Alias field in the iSCSI Properties section of the General tab of the iSCSI Initiator Properties dialog
should now display the Host Identifier value. Select Close to close the dialog.
Click Rescan All… at the top right of the dialog. Leave both boxes checked as default. Click OK. Watch
that the scan completes in the bottom pane under Recent Tasks. When the Status shows “Completed,”
the new volume should become visible in the dialog.
Next, in the Hardware panel, click Storage. Click Add Storage. The Add Storage dialog displays. Under
Storage Type, either leave the default or select Disk/LUN and click Next. vSphere displays the Add
Storage wizard. The name of the new volume is now visible in Select Disk/LUN. Highlight it. Click Next to
navigate through setting the options in this wizard.
In Disk/LUN - Formatting, the size of the logical volume created at the beginning of this section will
dictate the maximum file size available for the Datastore. Depending on preference, choose Maximum
available space or set a Custom space setting by entering a value no greater than the presented
maximum available space. The two options are equivalent if not modified.
Click Next.
The last part of the Wizard displays the Ready to Complete dialog. Review and click Finish.
Monitor the Recent Tasks pane at the bottom of the vSphere Client dialog for “Completed” to display
beside Create VMFS datastore.
What is PPD?
Pivot3 Proactive Diagnostics is an optional service that allows vSMS to report diagnostic system metadata to
Pivot3 Support. Reported information includes Appliance health, Protection Group performance, logical
volume operational errors, and vSMS reported error diagnostics. No confidential or secure data is conveyed
through this feature, and using it is optional.
Once enabled, Pivot3 Proactive Diagnostics runs as a service that starts automatically when the
Management Station is powered on.
The Pivot3 Proactive Diagnostic Service monitors the local subnets and, optionally, client-specified subnets for vSTAC Domains accessible using the configured Administrator credentials.
When the PPD service is enabled, the status of the current vSTAC domain will be uploaded to Pivot3 Support
approximately once every 24 hours. Pivot3 will notify the registered contact when there are conditions that
may lead to potential data loss or reduced performance. The PPD service must be enabled on a computer
that will have continuous access to both the vSTAC Domain and the Internet.
A valid Proactive Diagnostic ID is required to receive notifications from Pivot3. A free PPD ID will be provided
to enable notifications on all appliances that have Pivot3 Premium Support (or better). Complete the
Contact Information tab and check the Enable box on this tab to receive a PPD ID. An email with the PPD ID
will be sent to the specified contact once eligibility has been verified. Pivot3 Support will not send out
notifications until the PPD ID has been registered in this wizard.
CAUTION: Pivot3 has no obligation to take action regarding any conditions that are detected until a valid
PPD ID associated with an active support agreement is included.
Pivot3 Proactive Diagnostics Service can initiate the customer support process to correct these issues before
they become critical.
The PPD version monitor will also provide notification when updates to the vSTAC Operating System are available and will identify critical updates to further protect network investment and critical data.
To provide this protection, the PPD Service needs to be configured with the administrator login credentials
of the vSTAC Domain.
The PPD Service can be configured to monitor vSTAC Domains on Subnets not directly accessible to the
client executing the Pivot3 Proactive Diagnostics Service.
NOTE: Pivot3 Proactive Diagnostics is disabled by default and must be enabled before monitoring will begin.
Pivot3 Proactive Diagnostics is an optional component during vSTAC Manager installation. It is highly
recommended that this box remain checked and PPD is installed with the rest of the Management
Station configuration in Section 3 of this guide. No data is sent to Pivot3 unless PPD is also enabled after
installation.
To activate PPD, complete the installation and setup of the Management Station as instructed in Section
3.
Click on the vSTAC Domain header in the tree to the left. Then click Configuration > Proactive
Diagnostics. A dialog box will appear.
Click the first tab, Enable Proactive Diagnostics. Click the checkbox in front of “Enable Proactive
Diagnostics” at the bottom. Enter the Proactive Diagnostics ID associated with a purchased support
agreement and click Apply.
To monitor subnets not connected to the PPD client, click on the Monitored Discovery Paths Tab.
NOTE: The Monitored Discovery Paths option is not the recommended configuration for PPD.
To add remote appliances to PPD monitoring, enter the IP address for one network from the appliance
and click “Add Path.” The IP address will then show up in the “Monitored Paths” area. Repeat for each
network on each remote appliance.
To remove an unneeded IP address from PPD monitoring, highlight the IP address to be deleted in the
“Monitored Paths” area and click the Remove Path button.
The last tab in the PPD dialog is the Contact Information Tab. Any information entered into these blanks
will accompany the report sent by vSTAC Manager and provided to the Pivot3 support team.
NOTE: Pivot3 is under no obligation to notify customers of any issues except as provided by a support agreement that includes Proactive Diagnostics and a correctly configured PPD ID.
Quick Diagnostics
Quick Diagnostics is separate from Pivot3 Proactive Diagnostics (explained in Section 8) in that the Quick
Diagnostics log file is essentially an integrated event log. These event logs are useful for troubleshooting and
diagnostics, especially under the guidance of Pivot3 Support.
This section describes how to:
• Retrieve and view a Quick Diagnostics File
• Open a previously saved Quick Diagnostics File
• Read and understand a Quick Diagnostics File
Required items for this section:
• Pivot3 vSTAC Manager installed as per Section 3.
To save a copy of the Quick Diagnostics log for further use or to send to support:
Highlight the Appliance (Assigned or Unassigned) from which to retrieve Quick Diagnostics.
Click Help > Support > Save Diagnostics > Quick Diagnostics.
Determine where to save the Quick Diagnostics file on the local Management Station. vSMS will
name the file according to its pre-configured naming convention and provide the option to open the
file.
To view a previously saved Quick Diagnostics file, highlight the active vSTAC Domain in vSTAC Manager.
Click View > Saved Quick Diagnostics Archive.
Navigate to where the Quick Diagnostics .tar file has been saved, and select the correct file.
Click Open.
Reading a Quick Diagnostics File
There are three degrees of event severity within two network scopes in the Quick Diagnostics log entries:
Major, Minor, and Informational for either Appliance or Protection Group. Each system event is logged and
categorized for the Quick Diagnostics Report and displayed in reverse chronological order (the most recent
event is listed first).
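The entry categories and display order described above can be modeled with a short Python sketch. The field and function names here are illustrative assumptions, not the actual Quick Diagnostics format:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical model of a Quick Diagnostics entry: three severities
# within two scopes, displayed most-recent-first.
@dataclass
class LogEntry:
    timestamp: datetime
    severity: str   # "Major", "Minor", or "Informational"
    scope: str      # "Appliance" or "Protection Group"
    message: str

def ordered_for_display(entries):
    # Reverse chronological order: the most recent event is listed first
    return sorted(entries, key=lambda e: e.timestamp, reverse=True)

log = [
    LogEntry(datetime(2015, 8, 1, 9, 0), "Informational", "Appliance", "Boot complete"),
    LogEntry(datetime(2015, 8, 3, 14, 5), "Major", "Protection Group", "Disk failure"),
]
print(ordered_for_display(log)[0].message)  # Disk failure
```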
The top half of the Quick Diagnostics window contains all log entries for the selected component. The
bottom half of the Quick Diagnostics window gives a short explanation of the log entry.
vSTAC Manager provides view options with the arrow icon on the right of the log entries. A Quick-Save
function is also available in this area.
Maintenance Mode
What is Maintenance Mode?
Maintenance Mode is an intermediary status into which vSTAC OS and the vSTAC Management Suite place a server while preparing the vSTAC environment for hardware or configuration changes, changes to ElastiCache™ settings, or many other situations concerning vSTAC OS and/or the ESXi hypervisor. To perform externally driven system changes, this status must be set manually through vSMS or through vSphere.
There are two functional “levels” of Maintenance Mode, and each provides access to different aspects of
the Pivot3 vSTAC Protection Group. vSTAC Manager’s Maintenance Mode provides access to vSTAC OS in
the Protection Group, and Maintenance Mode built into VMware vSphere works exclusively with ESXi
settings. vSTAC Manager and vSphere work behind the scenes to activate this status and provide maximum
transparency.
A dialog box will pop up to confirm this choice. To continue, click Yes.
NOTE: Depending on type and name of the host being sent into Maintenance Mode, this verbiage will differ
slightly.
Maintenance Mode status will be visible in both tabs of vSTAC Manager. While Maintenance Mode is active, affected elements will be highlighted yellow and marked with a small icon.
Now that the host is in Maintenance Mode, install or upgrade as necessary. Other hosts in the
Protection Group will take on extra work to provide a seamless experience for the end user and I/O will
continue as needed. Writes stored on the Protection Group during Maintenance Mode will then be
written to the host during synchronization.
NOTE: Depending on RAID Level chosen, additional failures within the Protection Group during this
procedure may reboot the Protection Group and cause the host in Maintenance Mode to fail out of
the Protection Group. This situation will require a rebuild, and is the main reason Pivot3
recommends choosing the strongest RAID level available during initial setup.
Before entering Maintenance Mode, power down any VMs other than the Pivot3 VM residing on the host.
Right-click the host’s IP address in vSphere and choose Enter Maintenance Mode.
A dialog box will warn of VMs that have not yet been powered off. Click Yes and wait until the Enter
maintenance mode task reads Completed in the Recent Tasks section of vSphere.
NOTE: Enter Maintenance Mode task may hang at 100% for a few moments. Continue waiting until the
Status changes to Completed (should take no longer than 1 hour). If the task continues to hang,
check Quick Diagnostics Viewer in vSTAC Manager to begin troubleshooting.
Now that the host has fully entered Maintenance Mode, perform maintenance as needed to ESXi or
physical hardware settings.
Exit Maintenance Mode
After all maintenance has been completed, the host will need to be taken out of Maintenance Mode
through vSphere. Ensure the hardware is powered on and ready.
Right-click the host IP in vSphere and choose Exit Maintenance Mode.
vSTAC Manager should recognize when the hypervisor has released from Maintenance Mode and will
update statuses accordingly.
Click Browse and navigate to the unzipped software package procured from Pivot3.
Choose the .upg file and click Select.
NOTE: Any upgrade from vSTAC OS version 5.x to 6.x requires Pivot3 support and will not follow this
documented procedure exactly.
Between versions 6.0 and 6.5, vSTAC Manager may require validation of the hypervisor information before upgrade (see Specific Issues by Upgrade). Verify the vSphere IP address and Administrator credentials for correctness. Once all IP addresses have been populated correctly, click Next.
vSTAC OS 6.5.x and above provides Online and Offline upgrade options, and vSTAC Manager will disable
choices that are not available. Choose the preferred method and click Start Upgrade.
Online Upgrades will sequentially upgrade each member Appliance within the Protection Group. Each Appliance will be placed into Maintenance Mode, and the vSTAC OS, the vSphere hypervisor, and supporting VIBs will be upgraded as required. Client I/O can continue during the upgrade; the level of client I/O affects the time an Online Upgrade requires to complete.
CRITICAL: For data protection purposes, Online Upgrades should not be interrupted once started.
Offline Upgrades typically require less overall time to complete, but are disruptive to client I/O. Offline Upgrades will result in all members being rebooted. If the hypervisor credentials have been configured, all guest virtual machines will be shut down as a result of this operation.
NOTE: Offline upgrades result in all data being inaccessible until the upgrade has completed. To prevent
application failure, ensure that all I/O has stopped before beginning upgrade.
Agents
SNMP is configured at the Protection Group level via vSMS. All agents are affected by the Protection Group
settings, so there is no need to do anything independently to each agent. Since Appliances (managed
devices) cooperate within a Protection Group, the agents are capable of providing information and traps for
events on all Appliances within a Protection Group. Protection Group-wide events like Appliance failures can
be reported from the agent in any available Appliance.
To launch the SNMP dialog, select the vSTAC Protection Group in the Logical view of vSTAC Manager and
then select Configuration > SNMP Settings.
When the network-management application uses SNMP for polling, be sure to set up polls for both of the
static IP addresses of each Appliance. Otherwise a failed switch may not be reported.
The Protection Group agents use default SNMP ports. The NMS must allow access to UDP port 162 to
receive traps and UDP port 161 to poll for information. Each of these ports may have to be opened on the
NMS if a firewall is enabled.
Many network-management applications use SMTP to forward traps as email. The network-management
application will push an SMTP message to the corporate email server, which will forward the message as an
email to a specified email address. The default SMTP port is TCP port 25. This port must be exempted from a
firewall at the corporate email server.
Table 12-1: Ports for SNMP network management
Port Transport Used For Must be open at
25 TCP SMTP email Email server receiving SMTP messages
161 UDP SNMP requests vSTAC Management Station
162 UDP SNMP traps vSTAC Management Station
Community Strings
Community Strings are configured through vSMS for a vSTAC Protection Group. There are separate
Community Strings for SNMP Clients and Trap Targets. The common default value for both community
strings is “public.”
The Community String for SNMP Clients is used to allow the agents to be in a group polled by an NMS.
Specifically, this is the Community String for SNMP GETs. SNMP SETs are not supported. The agents will only
answer an NMS retrieval request if the NMS knows the Community String.
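The agent behavior described here, answering GETs only when the community string matches and not supporting SETs, can be sketched as follows. This is hypothetical logic for illustration, not Pivot3 code:

```python
def handle_snmp_request(op, community, configured_community="public", values=None):
    """Answer a GET only when the community string matches; SETs are unsupported."""
    values = values or {}
    if community != configured_community:
        return None  # request with the wrong community string is ignored
    if op == "SET":
        raise ValueError("SNMP SETs are not supported by the Protection Group agents")
    if op == "GET":
        return values
    return None

print(handle_snmp_request("GET", "public", values={"sysName": "vstac-pg1"}))
# {'sysName': 'vstac-pg1'}
print(handle_snmp_request("GET", "wrong-string"))
# None
```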
The Community String for Trap Targets allows an NMS functioning as a trap receiver to filter for desired
traps. Trap receivers receive all traps sent and it is up to this NMS to filter on the Community String. A trap
receiver may choose to ignore the Community String.
Pivot3 MIB
SNMP uses management information bases (MIBs) to define the variables an SNMP managed system offers.
A Pivot3 MIB has been defined to describe each object identifier used for alerts and status for Pivot3
storage. The Pivot3 MIB is contained within the file PIVOT3SYS-MIB.txt which may be found on the “Pivot3
Software & Documentation Disc” in the SNMP Support folder.
The SNMP MIB hierarchy is a tree with levels defined by different organizations. When importing a MIB file
into a management software application, the MIB details will exist within that hierarchy. The Pivot3 MIB will
be imported into the enterprises class of the MIB tree. MIB-II is also supported.
Shutdown Procedure
The best way to shut down a Protection Group safely is to first ensure all I/O has stopped, and then shut down
each host. This prevents data loss and inaccurate failure status readings.
Application Server VM
Stop all I/O.
Shut down the virtual machine using the operating system method of performing a system shutdown.
Repeat the previous steps on all Application Server VMs.
vSTAC Manager
Launch vSTAC Manager and log in using the proper credentials for the vSTAC Protection Group to be
shut down.
Select the Physical view tab and select the Protection Group to shut down.
Navigate to File > Shutdown. A dialog box will require confirmation.
Click Yes.
Navigate to File > Exit to exit vSTAC Manager.
Verify the VMs have been shut down by connecting to VMware vSphere Client and logging in to all
vSTAC ESXi Hosts in the Protection Group. Verify all relevant VMs are powered off.
help all                                     Display the command name and a short description for all CLI commands.
help all -v                                  Display all CLI commands in verbose format to the console.
help all -v "c:\CLI Commands\syntax.txt"     Save all CLI commands in verbose format to the file c:\CLI Commands\syntax.txt.
Not Ready     The Protection Group is waiting for other Appliances to come online or recover from shutdown.     Power on more Appliances and wait, or address the process that is holding up the Protection Group.
After vSphere Client has connected, select the IP address in the left pane and press the plus (+) key to
display the virtual machines.
Select the Pivot3_VM and then select the Console tab. Click in the console display area to enable
keyboard input.
Press <Alt> and <F2> at the same time to display the login screen.
At the login prompt, type p3setip and at the password prompt, type pivot3.
NOTE: The login will not register as correct until the Appliance has reached a certain point in the boot
sequence. If the correct credentials are entered and still cause an incorrect login error message, wait
a few moments and repeat Steps 4 and 5 above.
The vSTAC OS IP Setup Utility displays. Use this utility to set IP addresses on all available NICs.
At the ipset> prompt, type nics to determine all available NICS visible to the network. Each of these
NICs will require IP addresses. A typical setup will include san0, san1, and mgmt0.
At the ipset> prompt, type nic san0 to begin the IP address configuration for the first SAN NIC port.
Example:
ipset>nic san0
Next, type ip and the IP address to be assigned to the SAN NIC port. Example:
ipset>ip 10.3.15.145
Next, type netmask followed by the subnet mask value. Example:
ipset>netmask 255.255.255.0
Optionally, to set a gateway, type gateway and the gateway IP address. Example:
ipset>gateway 10.3.15.1
NOTE: Do not set default gateway on SAN or Management NICs.
Repeat Steps 6 – 11 to set the static IP address configuration for the second SAN NIC port, replacing 0
with 1 in Step 8.
Repeat Steps 6 – 11 to prepare the IP address configuration for the Management NIC port, replacing
san0 with mgmt0 in Step 8.
If there is a DHCP server on the network, the IP address for the Management NIC can be quickly set
by typing ip dhcp. The Management NIC is the only port that can be set this way.
Type print at the ipset> prompt to review the values entered for all available NIC ports.
If all the information is correct, type set to retain all the information input in previous steps.
NOTE: The setip function cannot be performed on Appliances in Protection Groups. The IP address can be
modified later through vSTAC Manager as explained in Section 5.
In vSphere, select the appliance hosting ESXi and navigate to Configuration > Networking.
Click Properties… for the SAN Network 0 vSwitch, named vSwitch1.
Check the Enabled box next to Management Traffic and click OK.
To verify that the previous step has registered properly, close and relaunch the SAN Network 0 vSwitch
properties dialog. The SAN iSCSI VMK 0 now shows Management Traffic as “Enabled.”