CloudVision and NSX Integration
December 11, 2018
Under The Hood

What is CloudVision and what does it do for my data center?

Today let’s talk about software-defined data centers and how we can make our lives a little easier through automation, specifically by tying our underlay and overlay together.

Our software-defined data centers today typically consist of two main areas: the “underlay”, which is usually deployed in a spine-and-leaf layout, and the “overlay”, which in this case will be VMware’s NSX platform. In a lot of cases, people try to treat these as two separate entities, which we believe is a pitfall. Tightly integrating the underlay and overlay gives us end-to-end control and visibility into our data center environment, along with many other benefits.

CloudVision, delivered by Arista, can be thought of as the underlay automation component. It provides a single point of control for the entire underlay environment: end-to-end control, automated provisioning, centralized management, real-time event correlation, and real-time visibility of the underlay network, as well as simple integration with our overlay environment.

CloudVision is made up of two primary “components”: CloudVision Portal (CVP) and CloudVision Exchange (CVX). CVP provides a GUI-based method of controlling our physical Arista switches. CVX, at least for this topic, handles the back-end VXLAN provisioning.

Integration of CloudVision into NSX

One of the goals of integrating CloudVision with NSX is to provide management of hardware gateways, otherwise known as hardware VTEPs (VXLAN Tunnel Endpoints). This hardware gateway becomes an NSX gateway for VXLAN traffic moving between virtual and physical workloads. A typical use case is a pair of hardware VTEPs (ToR switches) providing an active-active top-of-rack deployment. NSX sees the two hardware VTEPs as one logical unit, which provides redundancy and performance benefits.

Below in Figure 1, you can see an example layout for the software layer 2 gateway that NSX provides.

Figure 1

In Figure 2, you see the integration of third-party components to achieve layer-2 gateway functionality in hardware. This provides VXLAN-to-VLAN communication, as well as a high-port-density, low-latency way to connect the physical world to the virtual one.
Figure 2

Another key function is the communication between the NSX Controller and the hardware gateway. The two use the OVSDB protocol (TCP port 6640) to exchange information. The NSX Controller is the client, and the HSC (Hardware Switch Controller) is the server. In the case of NSX and CloudVision, CloudVision is the HSC and acts as a single point of integration to NSX for all of the Arista gateways.

VXLAN Control Service

Since we will be using VXLAN, we need something to synchronize this information across our infrastructure. CloudVision uses the VXLAN Control Service (VCS) to aggregate network-wide VXLAN state for integration and orchestration with NSX. CloudVision Exchange (CVX) handles this part. The basics of getting this set up and running in your environment are as follows.

1. Enable the VXLAN Control Service (VCS) on the ToR switches and on CloudVision CVX.

2. Enable the HSC service on CloudVision CVX.

3. Configure NSX.

Let’s take a look at the commands needed on CVX and the ToR switches to accomplish the task above.

The first thing we do is tell the ToR switches that CloudVision will manage the VXLAN Control Service.

     ToR(config)#management cvx
     ToR(config-mgmt-cvx)#server host 10.10.10.1   --> This is the management IP address CloudVision uses to listen for client connections.

     ToR(config-mgmt-cvx)#no shutdown
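If you want to verify that the switch has actually registered with CVX before moving on, the client connection state can be checked from the ToR. The command below is a common verification step rather than part of the required configuration, and the exact output varies by EOS release.

     ToR#show management cvx   --> should list the CVX server (10.10.10.1 in this example) and show the connection as established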

Next, we need to tell the ToR switches that they are clients and that they should sync their VXLAN tables with CloudVision. To do this we use the vxlan controller-client command under
interface vxlan 1 as shown below.

     ToR(config)#interface vxlan 1

     ToR(config-if-Vx1)#vxlan controller-client 
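Note that the vxlan controller-client command assumes interface vxlan 1 already exists with a VTEP source interface defined. If it does not, a minimal sketch of the VTEP definition is shown below; the loopback number and address are illustrative assumptions, not values from this environment. In an MLAG active-active pair, both ToR switches would typically share the same VTEP address on this loopback so that NSX sees them as a single logical unit.

     ToR(config)#interface loopback 1
     ToR(config-if-Lo1)#ip address 10.0.0.11/32   --> illustrative VTEP IP, shared across an MLAG pair
     ToR(config-if-Lo1)#interface vxlan 1
     ToR(config-if-Vx1)#vxlan source-interface Loopback1
     ToR(config-if-Vx1)#vxlan udp-port 4789   --> 4789 is the default VXLAN UDP port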

We will then need to configure CloudVision CVX to be the VXLAN controller. To do this we use the following commands.

     cvx(config)#cvx

     cvx(config-cvx)#no shutdown

     cvx(config-cvx)#service vxlan

     cvx(config-cvx-vxlan)#no shutdown
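With VCS enabled on both the clients and CVX, it can be worth confirming that VXLAN state is actually being shared before going further. The commands below are a verification sketch; output details vary by EOS release, and remote entries will only appear once other VTEPs have registered.

     ToR#show vxlan vtep            --> remote VTEPs learned through the VXLAN Control Service
     ToR#show vxlan address-table   --> MAC addresses learned over VXLAN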

At this point, we have configured CloudVision Exchange to speak and interact with the ToR switches. Now we will need to enable HSC on CloudVision CVX. We use the following commands to
accomplish this.

     cvx(config)#cvx

     cvx(config-cvx)#service hsc

     cvx(config-cvx-hsc)#no shutdown

Lastly, we need to tell CloudVision Exchange where the NSX Controller lives. This is needed to establish the OVSDB communication between CloudVision and the NSX Controller.
     cvx(config)#cvx

     cvx(config-cvx)#service hsc

     cvx(config-cvx-hsc)#manager 10.114.221.234   --> IP address of the NSX Controller.

Note that with some installations you may need to change the default port to the port the OVSDB protocol uses (6640), as shown below.

     cvx(config-cvx-hsc)#manager 10.114.221.234 6640

Note: There are typically redundant NSX Controllers. Only one of them needs to be entered. The others will be automatically discovered.
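To confirm that the OVSDB session between CloudVision Exchange and the NSX Controller has come up, the HSC service can be inspected on CVX. The command below is a typical verification step rather than a required configuration item, and the exact output varies by release.

     cvx#show hsc status   --> should show the configured manager(s) and an established OVSDB connection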

At this point, we have done everything we need to do on the ToR and CloudVision side, with one exception: we need to grab the HSC certificate from CloudVision CVX. Below is the command to view the certificate so we can copy and paste it into NSX.

     cvx#show hsc certificate   --> copy the certificate output so we can paste it into NSX in the next few steps.

Now let’s move over to the NSX side and get it set up.

Once we are inside the Networking and Security area, click Service Definitions, then Hardware Devices. Click the green plus sign to add a new hardware device. Give the new hardware device a name and a description, and paste in the certificate you copied from the output of the show hsc certificate command earlier. You should see something similar to Figure 3 below.

Figure 3

Note: BFD is enabled by default. VMware only supports configurations running BFD.

Now we will select the nodes that we would like to participate in the replication cluster. This is done from the same screen as above (Service Definitions -> Hardware Devices). Select the hosts you want in the replication cluster from the left pane, then click the right-facing arrow to add them, as shown in Figure 4 below.
Figure 4

At this point, NSX and CloudVision have been mapped together, and we can map a Logical Switch (LS) to any physical port/VLAN advertised by this gateway, as shown below in Figure 5 and Figure 6.

Figure 5

Figure 6

Those are the basics of integrating CloudVision with VMware NSX. Hope you enjoyed the quick rundown!

Tags: Automation, CloudVision, NSX, Overlay, Underlay, VXLAN