VSP Install
R2
Installation Guide
Table of Contents
About This Guide
Audience
Audience
This manual is intended for system administrators who are responsible for installing and
configuring the HP DCN software.
HP DCN Overview
HP DCN Infrastructure Requirements and Recommendations
Data Center IP Network
NTP Infrastructure
Domain Name System
Certificate Authority
HP DCN Installation Overview
HP DCN Overview
HP DCN is a Software-Defined Networking (SDN) solution that enhances data center (DC)
network virtualization by automatically establishing connectivity between compute resources
upon their creation. Leveraging programmable business logic and a powerful policy engine,
HP DCN provides an open and highly responsive solution that scales to meet the stringent
needs of massive multi-tenant DCs. HP DCN is a software solution that can be deployed over
an existing DC IP network fabric. Figure 1 illustrates the logical architecture of the HP DCN
solution.
Figure 1: HP DCN Architecture and Components
There are three main components in the HP DCN solution: HP Virtualized Services Directory
(HP VSD), HP Virtualized Services Controller (HP VSC) and HP Virtual Routing and Switching
(HP VRS).
HP Virtualized Services Directory
HP VSD is a programmable policy and analytics engine that provides a flexible and
hierarchical network policy framework that enables IT administrators to define and enforce
resource policies.
HP VSD contains a multi-tenant service directory which supports role-based administration of
users, computers, and network resources. It also manages network resource assignments such
as IP and MAC addresses.
HP VSD enables the definition of sophisticated statistics rules such as:
collection frequencies
rolling averages and samples
threshold crossing alerts (TCAs).
When a TCA occurs, it triggers an event that can be exported to external systems
through a generic messaging bus.
Statistics are aggregated over hours, days and months and stored in a Hadoop analytics
cluster to facilitate data mining and performance reporting.
HP VSD is composed of many components and modules, but all required components can run
on a single Linux server or in a single Linux virtual machine. Redundancy requires multiple
servers or VMs.
To get a license key to activate your HP VSD, contact your HP Sales Representative.
HP Virtualized Services Controller
HP VSC functions as the robust network control plane for DCs, maintaining a full view of per-tenant network and service topologies. Through the HP VSC, virtual routing and switching
constructs are established to program the network forwarding plane, HP VRS, using the
OpenFlow protocol.
The HP VSC communicates with the VSD policy engine using Extensible Messaging and
Presence Protocol (XMPP). An ejabberd XMPP server/cluster is used to distribute messages
between the HP VSD and HP VSC entities.
Multiple HP VSC instances can be federated within and across DCs by leveraging MP-BGP.
The HP VSC is based on HP DCN Operating System (DCNOS) and runs in a virtual machine
environment.
HP Virtual Routing and Switching
HP VRS is an enhanced Open vSwitch (OVS) implementation that constitutes the network
forwarding plane. It encapsulates and de-encapsulates user traffic, enforcing L2-L4 traffic
policies as defined by the HP VSD. The HP VRS tracks VM creation, migration and deletion
events in order to dynamically adjust network connectivity.
HP VRS-G
For low-volume deployments, the software-based HP VRS Gateway (VRS-G) module
incorporates bare metal servers into the data center as virtualized extensions.
NTP Infrastructure
Because HP VSP is a distributed system, it is important that the different elements have a
reliable reference clock to ensure the messages exchanged between the elements have
meaningful timestamps. HP VSP relies on each of the elements having clocks synchronized with
NTP.
The HP VSD and HP VRS applications rely on the NTP facilities provided by the host operating
system. The HP VSC, which is based on HP DCN OS, has an NTP client.
HP recommends having at least three NTP reference clocks configured for each system.
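As a minimal sketch of this recommendation on a RHEL-based HP VSD or HP VRS host (the NTP server names below are placeholders; substitute the reference clocks used in your data center), /etc/ntp.conf would contain entries such as:

# three reference clocks (placeholder hostnames)
server ntp1.example.com iburst
server ntp2.example.com iburst
server ntp3.example.com iburst

After editing the file, restart the NTP daemon (service ntpd restart) and enable it at boot (chkconfig ntpd on).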
Certificate Authority
The northbound ReST API on HP VSD is accessed within an SSL session. The HP VSD is able to
use a self-signed certificate, but having a certificate from a certificate authority will enable
client applications to avoid processing security warnings about unrecognized certificate
authorities.
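To see which certificate the ReST API actually presents to clients, the SSL handshake can be inspected with openssl; the host name and port below are placeholders for your HP VSD node and its API port:

openssl s_client -connect myh1.myd.example.com:443 -showcerts </dev/null 2>/dev/null | \
  openssl x509 -noout -subject -issuer -dates

A self-signed certificate shows the same subject and issuer; a CA-issued certificate shows the issuing authority instead.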
Figure 2: Installation Setup
Figure 2 diagrams the installation of the HP VSP components and shows how they
communicate with each other. The labeled interfaces are referenced in the installation
instructions. The diagram could be used to map out the topology you plan to use for your own
installation.
The recommended installation order is the order presented in this guide, because each newly
installed component provides the infrastructure needed to communicate with the next
component on the list.
After installing HP DCN, configure policies in the HP VSD to derive full benefit from the system.
Installation Types
There are two types of installation: standalone and high availability.
High Availability
HP VSD High Availability is intended to guard against single-failure scenarios. High
availability for HP VSD is implemented as a 3 + 1 node cluster as shown in Figure 3.
For high availability of the HP VSD nodes, it is necessary to ensure each VSD node has
redundant network and power, so that no single failure can cause loss of connectivity to more
than one HP VSD node. Therefore, each HP VSD node should be installed on a different
hypervisor.
Each HP VSD instance and Name Node requires an individual network interface. All nodes must
be IP reachable.
Figure 3: HP VSD 3+1 HA Cluster
The cluster consists of three HP VSD nodes and one statistics master node (Name node). In
addition, an optional load balancer (not supplied) can be used to balance REST API requests
across the HP VSD nodes.
Installation Methods
The standard method of installation of HP VSD uses the pre-installed appliance. This appliance
is distributed in four formats.
a ready-to-use QCow2 VM image for KVM hypervisor deployment (see HP VSD Installation
Using QCow2 Image)
ISO
3. Copy the HP VSD qcow2 image to the KVM hypervisor image location /var/lib/libvirt/images/ on each hypervisor.
4. Create appliance VMs.
In the example below, a VM is created for each of four HP VSD nodes. If you are doing a
standalone installation, create only myh1.
Configure Networking
1. Do not use DHCP. Use static IP instead. To do this, modify the file
/etc/sysconfig/network-scripts/ifcfg-eth0 to use your static IP and gateway,
replacing BOOTPROTO value dhcp with static.
BOOTPROTO="static"
IPADDR=192.168.10.101
GATEWAY=192.168.100.1
NETMASK=255.255.255.0
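After saving the file, restart networking so the static address takes effect (assuming a RHEL 6-style init; the command differs on other distributions):

service network restart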
Note: If the Service Records (SRV) for the XMPP cluster are not in the Domain Name
Server (DNS), the script will generate them. An administrator must then load them
into the DNS server. The XMPP cluster name is typically the xmpp host in the domain,
for example, xmpp.example.com. To use a different host name, run install.sh
with the -x option.
The DNS server in this example is 10.10.10.100.
Test DNS and reverse DNS from each VSD node (VM).
1. Set up the fully qualified names for the nodes in the DNS server forward named file as per
the following example:
myh1.myd.example.com.   604800 IN A 192.168.10.101
myh2.myd.example.com.   604800 IN A 192.168.10.102
myh3.myd.example.com.   604800 IN A 192.168.10.103
myname.myd.example.com. 604800 IN A 192.168.10.104
The installation script verifies the DNS forward named file records.
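You can also spot-check a forward record yourself from a VSD node; the DNS server address and host name below follow the example values used in this guide:

server# dig +noall +an @10.10.10.100 A myh1.myd.example.com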
2. From the HP VSD node, verify the SRV record as follows:
server# dig +noall +an @10.10.10.100 SRV _xmpp-client._tcp.xmpp.example.com
_xmpp-client._tcp.xmpp.example.com. 604800 IN SRV 10 0 5222 myh1.myd.example.com.
_xmpp-client._tcp.xmpp.example.com. 604800 IN SRV 10 0 5222 myh2.myd.example.com.
_xmpp-client._tcp.xmpp.example.com. 604800 IN SRV 10 0 5222 myh3.myd.example.com.
_xmpp-client._tcp.xmpp.example.com. 604800 IN SRV 10 0 5222 myname.myd.example.com.
3. Set up the fully qualified names for the nodes in the DNS server reverse named file as per
the following example:
vsd# dig +noall +an @10.10.10.100 -x 192.168.10.101
101.10.168.192.in-addr.arpa. 604800 IN PTR myh1.myd.example.com.
vsd# dig +noall +an @10.10.10.100 -x 192.168.10.102
102.10.168.192.in-addr.arpa. 604800 IN PTR myh2.myd.example.com.
vsd# dig +noall +an @10.10.10.100 -x 192.168.10.103
103.10.168.192.in-addr.arpa. 604800 IN PTR myh3.myd.example.com.
vsd# dig +noall +an @10.10.10.100 -x 192.168.10.104
104.10.168.192.in-addr.arpa. 604800 IN PTR myname.myd.example.com.
Note: HP VSD consists of several components and providing high availability for each of
these components can be quite complex. It is imperative that the installation and
powering-on of each node be done in the order specified here.
1. Install HP VSD on Node 1.
The install script checks for the XMPP proxy entry in your DNS. Run /opt/vsd/install.sh -x xmpp.myd.example.com, substituting your own XMPP server name.
[root@myh1 ~]# /opt/vsd/install.sh -x xmpp.myd.example.com
-----------------------------------------------------
| V I R T U A L   S E R V I C E S   D I R E C T O R Y |
|              (c) 2014 HP Networks                    |
-----------------------------------------------------
VSD supports two configurations:
1) HA, consisting of 2 redundant installs of VSD with an optional statistics
server.
2) Standalone, where all services are installed on a single machine.
Is this a redundant (r) or standalone (s) installation [r|s]? (default=s): r
Is this install the first (1), second (2), third (3) or cluster name node (t)
[1|2|3|t]: 1
Please enter the fully qualified domain name (fqdn) for this node:
myh1.myd.example.com
Install VSD on the 1st HA node myh1.myd.example.com ...
What is the fully qualified domain name for the 2nd node of VSD:
myh2.myd.example.com
What is the fully qualified domain name for the 3rd node of VSD:
myh3.myd.example.com
What is the fully qualified domain name for the cluster name node of VSD:
myname.myd.example.com
What is the fully qualified domain name for the load balancer (if any)
(default=none):
Node 1:
myh1.myd.example.com
Node 2:
myh2.myd.example.com
Node 3:
myh3.myd.example.com
Name Node:
myname.myd.example.com
XMPP:
xmpp.myd.example.com
Continue [y|n]? (default=y): y
Starting VSD installation. This may take as long as 20 minutes in some
situations ...
A self-signed certificate has been generated to get you started using VSD.
You may import one from a certificate authority later.
VSD installed on this host and the services have started.
Please install VSD on myh2.myd.example.com to complete the installation.
myh1.myd.example.com
Install VSD on the 2nd HA node myh2.myd.example.com ...
Node 2:
myh2.myd.example.com
Continue [y|n]? (default=y):
Starting VSD installation. This may take as long as 20 minutes in some
situations ...
A self-signed certificate has been generated to get you started using VSD.
You may import one from a certificate authority later.
VSD installed on this host and the services have started.
1. Bring up a VM named myh1 using 24 GB RAM and 6 logical cores with the following
commands:
# vsd_name=myh1
# vsd_disk=/var/lib/libvirt/images/[xxx].qcow2
# virt-install --connect qemu:///system -n $vsd_name -r 24576 --os-type=linux \
--os-variant=rhel6 \
--disk path=$vsd_disk,device=disk,bus=virtio,format=qcow2 \
--vcpus=6 --graphics vnc,listen=0.0.0.0 --noautoconsole --import
2. Repeat this step for each additional hypervisor, naming the additional vsd instances myh2,
myh3, and myname.
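To confirm that the appliance VMs were created and are running on each hypervisor (a simple libvirt check):

# virsh list --all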
Note: Ensure that the ISO is mounted to the same location on each node.
myh1.myd.example.com
Install VSD on the 1st HA node myh1.myd.example.com ...
What is the fully qualified domain name for the 2nd node of VSD:
myh2.myd.example.com
What is the fully qualified domain name for the 3rd node of VSD:
myh3.myd.example.com
What is the fully qualified domain name for the cluster name node of VSD:
myname.myd.example.com
What is the fully qualified domain name for the load balancer (if any)
(default=none):
Node 1:
myh1.myd.example.com
Node 2:
myh2.myd.example.com
Node 3:
myh3.myd.example.com
Name Node:
myname.myd.example.com
XMPP:
xmpp.myd.example.com
Continue [y|n]? (default=y): y
Starting VSD installation. This may take as long as 20 minutes in some
situations ...
A self-signed certificate has been generated to get you started using VSD.
You may import one from a certificate authority later.
VSD installed on this host and the services have started.
Please install VSD on myh2.myd.example.com to complete the installation.
myh1.myd.example.com
Install VSD on the 2nd HA node myh2.myd.example.com ...
Node 2:
myh2.myd.example.com
Continue [y|n]? (default=y):
Starting VSD installation. This may take as long as 20 minutes in some
situations ...
A self-signed certificate has been generated to get you started using VSD.
You may import one from a certificate authority later.
VSD installed on this host and the services have started.
Select an option and generate or import the certificate to Node 1. If you are running HA VSD,
import it to Nodes 2 and 3 as well.
LDAP Store
If you are using an LDAP store, see Using an LDAP Store.
The Linux server is a clean installation with a minimum of configuration and applications.
The user has a means of copying the HP VSC software files to the server.
Two independent network interfaces for management and data traffic, connected to two
Linux Bridge interfaces.
Once these requirements have been met, install the required dependencies (the following lines
refer to RHEL; substitute the appropriate Ubuntu references):
yum install kvm libvirt bridge-utils
When you set up a server, you must set up an NTP server for all the components. When a VM
is defined, it receives a timestamp that cannot deviate by more than 10 seconds.
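A quick way to confirm that a host's clock is synchronized before defining VMs (assuming the ntpd service is in use) is:

ntpq -p

The currently selected peer is marked with an asterisk, and the offset column is reported in milliseconds.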
Note: Intel Extended Page Tables (EPT) must be disabled in the KVM kernel module.
If EPT is enabled, it can be disabled by updating modprobe.d and reloading the kernel module
with:
echo "options kvm_intel ept=0" > /etc/modprobe.d/HP_kvm_intel.conf
rmmod kvm_intel
rmmod kvm
modprobe kvm
modprobe kvm_intel
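To confirm that EPT is now disabled, read the module parameter back from sysfs (this path is standard for the kvm_intel module and should print N):

cat /sys/module/kvm_intel/parameters/ept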
These instructions assume bridges br0 for management and br1 for data have been created
and attached.
1. Start libvirtd and ensure it is set to start automatically.
Prerequisite: Make sure that libvirt and the bridge packages are installed.
For example, with Ubuntu: apt-get install kvm libvirt-bin bridge-utils
service libvirtd start
chkconfig libvirtd on
3. Enter:
cp vsc*disk.qcow2 /var/lib/libvirt/images/
chown qemu:qemu /var/lib/libvirt/images/*.qcow2
For Ubuntu:
4. (Optional) Modify the HP VSC XML configuration to rename the VM or the disk files.
5. Define VM:
virsh define vsc.xml
6. Configure VM to autostart:
virsh autostart vsc
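If you want to start the VM immediately and watch it boot (this assumes the VM is named vsc, as in the sample XML, and that a console device is defined there):

virsh start vsc
virsh console vsc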
Single disk configuration requires one QEMU emulated disk in the qcow2 format
(vsc_singledisk.qcow2) configured as IDE 0/1 (bus 0, master). This emulated disk is
accessible within the HP VSC as device CF1:
Two disk configuration requires two QEMU emulated disks in the qcow2 format:
IDE 0/1 (bus 0, master) must be configured as the user disk. The HP VSC
configuration, logs and other user data reside on this disk. This emulated disk is
accessible within the HP VSC as device CF1:. A minimum of 1GB is recommended
for this disk (a reference user disk is provided).
IDE 0/2 (bus 0, slave) must be configured as the image disk. This disk contains HP
VSC binaries and a default boot options file. This emulated disk is accessible within the
HP VSC as device CF2:. The user should treat this disk as read only and essentially
dedicated to use by the image file. After the user customizes the boot options file, the
modified file should be stored on the user disk CF1:.
5. In the Deploy OVF Template window that appears, click Browse and select the source
location of the OVF file, and then click Next.
6. Specify a name and location for the deployed template, and then click Next.
7. Select a resource pool within which to deploy the template, and then click Next.
8. Select the format in which to store the virtual disks, and then click Next.
9. Map the networks used in this OVF template to networks in your inventory (select the port
groups), and then click Next.
Note: You must enter the control IP addresses of the HP VSC peers in the BGP
peer fields.
HP VSC
Update HP VSC configuration and reboot
Update HP VSC configuration
If you do not make a choice within 20 seconds, the first option, HP VSC, is automatically
selected and the VM boots from the vApp properties that you gave initially.
To boot up the VSC VM implementing the new information, use the second option, Update
HP VSC configuration and reboot.
To make changes inside the VM before booting SROS, use the third option, Update HP
VSC configuration. Instructions for making such changes are beyond the scope of this
document. Do not make such changes unless you know what you are doing.
Table 5: BOF Parameters, Defaults and Descriptions

Parameter        Default Value        Description and Notes
primary-image    cf2:/timos/cpm.tim   The image file from which the system will attempt to boot.
primary-config   cf1:/config.cfg      The primary (first) configuration file the system will
                                      attempt to load on boot. There are additional parameters
                                      for secondary and tertiary configuration files should the
                                      system be unable to locate the specified configuration.
address          no default           The IP address of the Management IP interface (also called
                                      the out-of-band interface in the HP VSC, as this is
                                      normally on the data center's management network).
primary-dns      no default           The IP addresses of the primary, secondary and tertiary
secondary-dns                         DNS servers that the HP VSC will reference for DNS name
tertiary-dns                          resolution.
dns-domain       no default           The DNS domain of the HP VSC.
static-route     no default           Configures a static route for subnets reachable through
                                      the Management IP interface.
wait             3 seconds            Configures a pause in seconds at the start of the boot
                                      process which allows system initialization to be
                                      interrupted at the console. When system initialization is
                                      interrupted, the operator is allowed to manually override
                                      the parameters defined in the BOF.
persist          off                  Specifies whether the system will create a persistency
                                      file (.ndx) which will preserve system indexes (for
                                      example, the IP interface MIB object index) across a
                                      system reboot. This parameter is typically turned on when
                                      the HP VSC is managed with SNMP.
ip-address-dhcp  no default           This optional parameter should be configured in the HP VSC
                                      bof.cfg to trigger DHCP resolution at boot up. When this
                                      command is present, any configured address will be ignored
                                      on reboot, and the HP VSC will obtain its management IP
                                      via a DHCP exchange (assuming there is a properly
                                      configured DHCP server on the network).
The following procedure updates the BOF and saves the updated bof.cfg file on the user disk
CF1:.
Note: The image disk CF2: has a default bof.cfg file, but any user modified bof.cfg
should be stored on the user disk CF1.
This installation procedure assumes:
1. The HP VSC software has been successfully installed.
2. The user is at the HP VSC console and waiting to log in for the first time.
The information that is configured in the BOF is the following:
*A:VSC-1>bof#
The management IP address is configured using the address command which has a syntax
of:
[no] address ip-prefix/ip-prefix-length [active|standby]
where keywords are in bold, parameters are in italics, and optional elements are enclosed
in square brackets [ ]. Typically, the no form of the command removes the configured
parameter or returns it to its default value.
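For example, to set a management address of 192.0.2.7/24 as the active address (the address is a placeholder; use an address from your management subnet):

*A:VSC-1>bof# address 192.0.2.7/24 active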
The primary, secondary and tertiary DNS servers are configured to 10.0.0.1, 10.0.0.2 and
10.0.0.3, respectively, with the following commands:
*A:VSC-1>bof# primary-dns 10.0.0.1
*A:VSC-1>bof# secondary-dns 10.0.0.2
*A:VSC-1>bof# tertiary-dns 10.0.0.3
[no] static-route ip-prefix/ip-prefix-length next-hop ip-address
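For example, to reach a remote management prefix through the local gateway (both prefixes are placeholders for your own addressing plan):

*A:VSC-1>bof# static-route 10.0.0.0/8 next-hop 192.0.2.1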
To check connectivity:
ping router management <Gateway IP>
Note: The image disk CF2: has a default bof.cfg file, but any user modified bof.cfg
should be stored on the user disk CF1:.
The command to save the BOF to cf1: is:
*A:VSC-1>bof# save cf1:
The exit command returns the CLI to the root context so that the admin reboot command
can be issued to reboot the system. Answer in the affirmative to reboot.
After rebooting, the IP management interface for the HP VSC is configured along with
DNS.
Creating the in-band IP interface and assigning an IP address (interface name control and
IP address 10.9.0.7/24 in the example configuration below).
Creating a system loopback IP interface for use by network protocols (interface name
system and IP address 10.0.0.7/32 in the example configuration below).
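As a rough sketch only (the syntax may vary by HP VSC release, the binding of the control interface to a port is omitted, and the addresses are simply the example values used above), the two interfaces are created with configuration along these lines:

configure
    router
        interface "control"
            address 10.9.0.7/24
        exit
        interface "system"
            address 10.0.0.7/32
        exit
    exit all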
System Name
The config>system>name command is used to configure the system name. In the excerpt below,
the system name is set to NSC-vPE-1.
#--------------------------------------------------
echo "System Configuration"
#--------------------------------------------------
exit all
configure
    system
        name "NSC-vPE-1"
        snmp
            shutdown
        exit
exit all
BGP needs to be configured if there are multiple HP VSCs operating as a federation. The
following is just a sample configuration and should be adapted to the existing BGP
infrastructure (for example, the use of route reflectors, and the BGP group, neighbor IP
addresses, and family types must be specified to match your design).
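Purely to illustrate the shape such a configuration takes, a minimal SR OS-style BGP stanza on one HP VSC might look like the following. The autonomous system number, group name, neighbor address, and the evpn address family are assumptions rather than values from this guide; confirm them against your HP VSC configuration guide and peering design before applying anything similar:

configure
    router
        autonomous-system 65000
        bgp
            group "dc-federation"
                family evpn
                peer-as 65000
                neighbor 10.0.0.8
                exit
            exit
        exit
    exit all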
Port          UDP/TCP   Required/Optional   Protocol Notes
21/22         TCP       Optional            FTP
22            TCP       Optional            SSH
23            TCP       Optional            Telnet
123           UDP       Required            NTP
161/162       UDP       Optional            SNMP; required for SNMP management
179           TCP       Required            BGP; required for federated HP VSCs
6633          TCP       Required            OpenFlow
49152-65535   UDP       Optional            RADIUS for console user authentication; the HP VSC
                                            dynamically reserves ports in this range upon
                                            initialization for outgoing connections and the
                                            resulting responses. The ports used in this range
                                            can be viewed with "show system connections". If
                                            RADIUS is not used, no incoming packets will be
                                            forwarded or processed.
Table 7: HP VSC UDP/TCP Outbound/Remote Ports

Port      UDP/TCP   Required/Optional   Protocol Notes
21/22     TCP       Optional            FTP
22        TCP       Optional            SSH
23        TCP       Optional            Telnet
49        TCP       Optional            TACACS+
53        UDP/TCP   Required            DNS
69        UDP       Optional            TFTP
123       UDP       Required            NTP
161/162   UDP       Optional            SNMP; required for SNMP management
179       TCP       Required            BGP; required for federated HP VSCs
514       UDP       Optional            Syslog
6633      TCP       Required            OpenFlow
The Linux server must be a clean installation with a minimum of configuration and
applications.
The VRS software files must have been copied to the server.
VRS on RHEL
VRS on Ubuntu 12.04 LTS with Ubuntu 12.04 Cloud Packages
VRS-G on RHEL or Ubuntu 12.04
Installing the VRS Kernel Module for MPLS over GRE
Installing the VRS Kernel Module for MPLS over GRE
Installing VRS Kernel Module On RHEL
Installing VRS Kernel Module On Ubuntu 12.04
Note: For the currently supported software versions and hardware, consult the release
notes for the current version of HP DCN.
VRS on RHEL
The HP VRS .tar.gz file contains the additional HP-specific packages. Install them following the
process below.
Note: VRS must be installed from locally downloaded RPM files unless it has been added
to a custom repository (which is beyond the scope of this document).
Note: Since CentOS 6 is a community edition of Enterprise Linux which is binary
compatible with RHEL, VRS should also work on CentOS 6.
1. Update your system:
yum update
7. If you did not modify /etc/default/openvswitch, restart and verify that the VRS
processes restarted correctly:
# service openvswitch restart
Stopping HP monitor: Killing HPMon (6912)                  [  OK  ]
(remaining stop/start lines omitted; each should end with an [ OK ] status)
Starting ovsdb-server                                      [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Starting ovs-brcompatd                                     [  OK  ]
Note: VRS is supported on the Ubuntu 12.04 Precise Long Term Support operating
system, with additional packages coming from the Ubuntu 12.04 Cloud repository.
Note: The supported kernel version corresponds to the Trusty hardware enablement stack.
Any new install of Ubuntu 12.04 will contain this kernel. (For more information, see
https://wiki.ubuntu.com/Kernel/LTSEnablementStack.)
Note: VRS must be installed from locally downloaded .deb files unless it has been added
to a custom repository (which is beyond the scope of this document).
1. Enable the Ubuntu 12.04 cloud repository:
sudo add-apt-repository cloud-archive:grizzly
Note: More details on the cloud repositories can be found at https://wiki.ubuntu.com/ServerTeam/CloudArchive.
2. Update your system:
sudo apt-get update
sudo apt-get upgrade
4. If you do not have the correct kernel, activate the Trusty hardware enablement kernel:
sudo apt-get install --install-recommends linux-generic-lts-trusty
5. Reboot:
reboot
6. Install dependencies:
apt-get install qemu-kvm libvirt-bin libjson-perl python-twisted-core vlan
hp-openvswitch-common
hp-openvswitch-switch
hp-python-openvswitch
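A minimal sketch of installing these locally downloaded packages (the filenames are placeholders for the versions you actually downloaded):

sudo dpkg -i hp-openvswitch-common_<version>.deb hp-openvswitch-switch_<version>.deb hp-python-openvswitch_<version>.deb
sudo apt-get -f install    # resolve any missing dependencies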
VRS on RHEL.
VRS on Ubuntu 12.04 LTS with Ubuntu 12.04 Cloud Packages
2. Edit /etc/default/openvswitch-switch by setting PERSONALITY=vrs-g.
3. Restart the VRS service:
service openvswitch restart
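After the restart, you can confirm that the personality change took effect:

grep PERSONALITY /etc/default/openvswitch-switch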
Note: If the EPEL repository install fails, check https://fedoraproject.org/wiki/EPEL for the
latest epel-release package version and location.
3. Install dependencies for DKMS:
yum install dkms
yum install kernel-devel
Note: VRS install will fail if the installed version of kernel-devel is not the same as the
currently running kernel.
5. Verify that the installed version of kernel-devel is the same as the currently running
kernel:
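One simple way to compare the two (the output formats differ slightly, but the version strings should match):

rpm -q kernel-devel
uname -r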
Note: If you are unable to use the latest kernel, install the kernel-devel package for your
currently running kernel: yum install kernel-devel-`uname -r`
6. Do a yum localinstall of the hp-openvswitch-dkms package.
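For example (the package filename is a placeholder for the version you downloaded):

yum localinstall hp-openvswitch-dkms-<version>.rpm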
7. Verify that the VRS processes restarted correctly:
# service openvswitch restart
Stopping hp monitor: Killing hpMon (6912)                  [  OK  ]
(remaining stop/start lines omitted; each should end with an [ OK ] status)
Starting ovsdb-server                                      [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Starting ovs-brcompatd                                     [  OK  ]
Starting hp monitor: Starting hpMon                        [  OK  ]
Starting vm-monitor: Starting vm-monitor                   [  OK  ]
Note: Customization scripts must be rerun after every reboot. Because of the new ISO
image, changes are not persistent across reboots.
Introduction
Prerequisites
Creating the dVSwitch
Verifying the Creation of the dVSwitch
vSphere vSwitch Configurations
Deployment of dVRS
Information Needed
Verifying Deployment
Introduction
This chapter describes the integration of the Virtual Routing and Switching (VRS) VM with
VMware that is required for all VMware deployments with VMware vSphere Hypervisor (ESXi).
The integration requires creating the dVSwitch, configuring vSphere vSwitch, and deploying the
dVSwitch.
Note: Workflow and VSD must be NTP synced. Lack of synchronization could lead to
failure of operations on VSD.
Prerequisites
Procure the following packages:
CloudMgmt-vmware
VRS OVF Templates for VMware
For Multicast to work on ESXi:
VCENTER_USER
VCENTER_PASSWD
CLUSTER_NAME
From the CloudMgmt-vmware package, run the command cli.bash with the following
arguments, taking account of the note below.
bash# ./cli.bash create_dvswitch --add-hosts true --num-portgroups 1 \
    --provider-vdc-id 1 --url https://<VCENTER_IP>/sdk -u <VCENTER_USER> \
    -p <VCENTER_PASSWD> -r <CLUSTER_NAME>
Note: If you are using vCloud, ensure that the value passed to --num-portgroups is not
lower than the maximum number of tenant networks you expect to have on this
cluster/provider VDC.
vSwitch0
vSwitch1
dVswitch
vSwitch0
This is the default management vSwitch.
Note down the name of the 'Virtual Machine Port Group,' for example, "Lab Management."
vSwitch1
Create vSwitch1 on each hypervisor.
We will use this for the data path.
Hypervisor > Configuration > Networking > vSphere Standard vSwitch > Add Networking >
"Connection Types: Virtual Machine."
Add one physical adapter to the switch (this NIC should connect to the DC data network).
dVswitch
This is the dvSwitch we created in Creating the dVSwitch.
Note down the name of the port group ending with "-OVSPG", for example,
"<CLUSTER_NAME>-OVSPG."
"dataNetworkPortgroup":"DVRS Datapath",
"mgmtNetworkPortgroup":"Lab Management",
"vmNetworkPortgroup":"<CLUSTER_NAME>-OVSPG"
Deployment of dVRS
Note: If you have a small number of hypervisors, you can manually deploy the OVF
Template from the vSphere Client (File > Deploy OVF Template).
Information Needed
Fill in the metadata in the "vrs-metafile.config" file:
vCenter IP
Hypervisor(s) IP, login, pass
IPs to assign to the DVRS VM(s) (Management and Data network: IP, netmask, gateway)
HP controller IPs
Port Groups created in the previous step
Verifying Deployment
DRS Enablement
Verify that the cluster has DRS enabled (Cluster > right click > Edit settings > Cluster
Features > Turn ON vSphere DRS checked).
Verify that a resource group "HP System Resources" is created on each cluster.
Verify that there is one dVRS VM created for each hypervisor in the cluster.
Additional Verification
Log in to the DVRS VM (with username/password: root/UFXCr4733F) and execute the
command ovs-vsctl show.
Verify that DVRS controller connection state is UP.
Note: HP VRS cannot be installed on the following:
Introduction
Block 1
Installation
1. Remove the stock openvswitch packages.
Note: Use rpm -qa | grep openvswitch to list them. All rpms must be removed; 'yum remove' is recommended.
2. Have ready the hp xen dVRS, which consists of the following rpms:
hp-openvswitch-<version>
hp-openvswitch-modules-xen-2.6.32.43-0.4.1.xs1.8.0.835.170778-<version>
Verification
1. Ensure that all packages have been installed:
[root@ovs-2 images]# rpm -qa | grep openvswitch
hp-openvswitch-modules-xen-2.6.32.43-0.4.1.xs1.8.0.835.1707782.0-51
hp-openvswitch-2.0-51
2. Ensure that /etc/sysconfig/openvswitch has the correct PERSONALITY and PLATFORM:
[root@ovs-2 images]# cat /etc/sysconfig/openvswitch | grep
PERSONALITY
# PERSONALITY: vrs/vrs-g/cpe/none (default: vrs)
PERSONALITY=vrs
[root@ovs-2 images]# cat /etc/sysconfig/openvswitch | grep PLATFORM
# PLATFORM: kvm/xen/esx-i. Only apply when in VRS personality
PLATFORM=xen
3. Verify that hpManagedNetwork is created:
[root@acs-ovs-3 ~]# xe network-list name-label=hpManagedNetwork
uuid ( RO)              : 817ece89-4835-980c-a48f-0bf02bc4241a
        name-label ( RW): hpManagedNetwork
  name-description ( RW): hpManagedNetwork
            bridge ( RO): xapi0
Block 2
Installation
Reboot XenServer.
Verification
After the XenServer comes up, in addition to the usual verification such as interface status,
management network connectivity etc., perform the following verification checks:
1. Ensure that the bridge corresponding to HPManagedNetwork does not have any PIF
attached to it.
[root@acs-ovs-3 ~]# ovs-vsctl show
016cccd2-9b63-46e1-85d1-f27eb9cf5e90
~Snip~
Bridge "xapi0"
Controller "ctrl1"
target: "tcp:10.10.14.8:6633"
role: slave
fail_mode: standalone
Port "xapi0"
Interface "xapi0"
type: internal
Bridge "xenbr0"
fail_mode: standalone
Port "eth0"
Interface "eth0"
Port "xenbr0"
Interface "xenbr0"
type: internal
Bridge "xenbr2"
~Snip~
2. Ensure that the hpManagedBridge bridge has the 'HP-managed' flag set:
[root@acs-ovs-3 ~]# ovs-vsctl list br xapi0
_uuid               : 7572d9d6-3f96-43d5-b820-fd865158057e
controller          : [ad89f1f6-fe5f-4e4e-8832-9816176878e8]
datapath_id         : "0000000000000001"
datapath_type       : ""
external_ids        : {}
fail_mode           : standalone
flood_vlans         : []
flow_tables         : {}
mirrors             : []
name                : "xapi0"
netflow             : []
other_config        : {datapath-id="0000000000000001", HPmanaged="true"}
ports               : [8a9ff6ca-13cd-4036-b9e2-ca6b4e912d11]
sflow               : []
status              : {}
stp_enable          : false
Note: If you are running a pre-2.1.2 dVRS version and want to upgrade, do the
following before upgrading:
1. xe pif-scan host-uuid=<your host uuid>
2. xe-toolstack-restart
3. xe network-list name-label=hpManagedNetwork params=uuid
This returns hpManagedNetwork's UUID (referred to below as HPNetUUID).
4. xe network-param-set uuid=$HPNetUUID name-label="Pool-wide network
associated with ethX"
You should see only one HPManagedNetwork with bridge=xapiX (where X is a whole
number).
Block 1
Installation
1. Have ready the HP xen dVRS, which consists of the following rpms:
hp-openvswitch-<version>
hp-openvswitch-modules-xen-2.6.32.43-0.4.1.xs1.8.0.835.170778-<version>
b. rpm -U hp-openvswitch-<version>
Verification
1. Ensure that all packages are installed:
[root@ovs-2 images]# rpm -qa | grep openvswitch
hp-openvswitch-modules-xen-2.6.32.43-0.4.1.xs1.8.0.835.1707782.0-51
hp-openvswitch-2.0-51
2. Ensure that /etc/sysconfig/openvswitch has the correct PERSONALITY and PLATFORM:
[root@ovs-2 images]# cat /etc/sysconfig/openvswitch | grep
PERSONALITY
# PERSONALITY: vrs/vrs-g/cpe/none (default: vrs)
PERSONALITY=vrs
[root@ovs-2 images]# cat /etc/sysconfig/openvswitch | grep PLATFORM
# PLATFORM: kvm/xen/esx-i. Only apply when in VRS personality
PLATFORM=xen
3. Verify that hpManagedNetwork is created:
xe network-list name-label=hpManagedNetwork
uuid ( RO)              : 817ece89-4835-980c-a48f-0bf02bc4241a
        name-label ( RW): hpManagedNetwork
  name-description ( RW): hpManagedNetwork
            bridge ( RO): xapi0
Block 2
Installation
Reboot XenServer.
Verification
After the XenServer comes up, perform the following verification checks:
1. Ensure that the bridge corresponding to hpManagedNetwork does not have any PIF
attached to it.
2. Ensure that the hpManagedBridge bridge has the 'HP-managed' flag set:
[root@acs-ovs-3 ~]# ovs-vsctl list br xapi0
_uuid               : 7572d9d6-3f96-43d5-b820-fd865158057e
controller          : [ad89f1f6-fe5f-4e4e-8832-9816176878e8]
datapath_id         : "0000000000000001"
datapath_type       : ""
external_ids        : {}
fail_mode           : standalone
flood_vlans         : []
flow_tables         : {}
mirrors             : []
name                : "xapi0"
netflow             : []
other_config        : {datapath-id="0000000000000001", HPmanaged="true"}
ports               : [8a9ff6ca-13cd-4036-b9e2-ca6b4e912d11]
sflow               : []
status              : {}
stp_enable          : false
Editing the configuration file loaded by the OpenvSwitch script when it starts
Running the CLI command ovs-vsctl add-controller
The preferred method is the first, i.e., editing the configuration file. Specify the controllers by
means of IP addresses in dotted decimal notation (see Specifying the Active and Standby HP
VSCs).
Standby Controller:
If you have a Care Pack or other support contract, either your Service Agreement Identifier
(SAID) or other proof of purchase of support for the software
How to contact HP
See the Contact HP Worldwide website to obtain contact information for any country:
http://www8.hp.com/us/en/contact-hp/ww-contact-us.html
See the contact information provided on the HP Support Center website:
http://www8.hp.com/us/en/support.html
In the United States, call +1 800 334 5144 to contact HP by telephone. This service is
available 24 hours a day, 7 days a week. For continuous quality improvement,
conversations might be recorded or monitored.
For information about licenses for the controller, see the HP VAN SDN Controller
Administrator Guide.
For information about licenses for HP SDN applications, see the information about
licensing in the administrator guide for the application.
Care Packs
To supplement the technical support provided with the purchase of a license, HP offers a wide
variety of Care Packs that provide full technical support at 9x5 or 24x7 availability with annual
or multi-year options. To purchase a Care Pack for an HP SDN application, you must have a
license for that application and a license for the controller.
For a list of Care Packs available for the controller and HP SDN applications, see:
http://www.hp.com/go/cpc
Enter the SDN license product number to see a list of Care Packs offered. Once registered, you
receive a service contract in the mail containing the customer service phone number and your
Service Agreement Identifier (SAID). You need the SAID when you phone for technical support.
To obtain full technical support prior to receiving the service contract in the mail, please call
Technical Support with the proof of purchase of the Care Pack.
http://www8.hp.com/us/en/support.html
This website also provides links for manuals, electronic case submission, and other support
functions.
Warranty
For the software end user license agreement and warranty information for HP Networking
products, see http://www8.hp.com/us/en/drivers.html
Related information
Documentation
HP SDN information library
http://www.hp.com/go/sdn/infolib
Product websites
HP Software-Defined Networking website:
Primary website:
http://www.hp.com/go/sdn
Development center:
http://www.sdndevcenter.hp.com
8 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the
documentation, send any errors, suggestions, or comments to Documentation Feedback
([email protected]). Include the document title and part number, version number, or the
URL when submitting your feedback.
The interface names for the management and datapath interfaces on the hypervisor
The IP addresses and network information (including default route) for the management
and datapath interfaces on the hypervisor
The files that will be modified are:
/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
/etc/sysconfig/network-scripts/ifcfg-br0
/etc/sysconfig/network-scripts/ifcfg-br1
The procedures are:
Modify the eth0 configuration
Edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 to match the information below.
DEVICE="eth0"
BRIDGE="br0"
ONBOOT="yes"
BOOTPROTO="none"
TYPE="Ethernet"
Modify the eth1 configuration
Edit the file /etc/sysconfig/network-scripts/ifcfg-eth1 to match the information below:
DEVICE="eth1"
BRIDGE="br1"
ONBOOT="yes"
BOOTPROTO="none"
TYPE="Ethernet"
Edit (or create) the br0 configuration
Edit the file /etc/sysconfig/network-scripts/ifcfg-br0 to match the information
below, replacing the IP address and netmask as appropriate:
DEVICE="br0"
TYPE="Bridge"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR="192.0.2.10"
NETMASK="255.255.255.0"
GATEWAY="192.0.2.1"
Edit (or create) the br1 configuration
Edit the file /etc/sysconfig/network-scripts/ifcfg-br1 to match the information
below, replacing the IP address and netmask as appropriate:
DEVICE="br1"
TYPE="Bridge"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR="198.51.100.10"
NETMASK="255.255.255.0"
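After editing the four files, restart networking and confirm that both bridges came up (assuming RHEL-style tooling; brctl is provided by the bridge-utils package):

service network restart
brctl show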