DCNM 11.4 VXLAN EVPN v1
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
© 2020 Cisco Systems, Inc. All rights reserved.
CONTENTS
CHAPTER 1 About
About This Demonstration
Requirements
About This Solution
Use cases
Topology
Before Presenting
Get Started
CHAPTER 2 Scenarios
Configure Multi-Site
Create Fabric
Move Fabrics
Confirm Connectivity
VMM integration
Enabling vCenter Compute Visualization
Configure Endpoint Locator
Network Deployment via REST API (Swagger)
DCNM Restful API Documentation
Data Center Network Manager
CHAPTER 3 Appendix
Appendix A. Troubleshooting
About This Demonstration
This demonstration includes the following use cases:
• Greenfield: This use case shows how to provision new VXLAN EVPN fabrics.
• Import and Deploy VXLAN EVPN fabric on Greenfield
• Import and Deploy Core Fabric
• Configure Multi-Site: A multi-fabric container that is created to manage multiple member fabrics. It
provides a single point of control for the definition of overlay networks and Virtual Routing and Forwarding
(VRF) instances that are shared across member fabrics.
• VMM integration: The Virtual Machine Manager (VMM) plug-in stores information about all the compute
hosts and virtual machines that connect to the fabric or to the switch groups loaded into Cisco DCNM.
VMM gathers compute repository information and displays the VMs, vSwitches/DVSs, and hosts in the
topology view.
• Configure Endpoint Locator: The Endpoint Locator (EPL) feature allows real-time tracking of endpoints
within a data center. The tracking includes tracing the network life history of an endpoint and getting
insights into the trends associated with endpoint additions, removals, moves, etc. An endpoint is anything
with an IP and MAC address. In that sense, an endpoint can be a virtual machine (VM), container,
bare-metal server, service appliance etc.
• Network Deployment via REST API (Swagger): In addition to provisioning, monitoring, and
troubleshooting the data center network infrastructure, Cisco DCNM provides a comprehensive feature-set
that meets the routing, switching, and storage administration needs of the data center. It streamlines the
provisioning of the Programmable Fabric and the monitoring of the SAN and LAN components. The
Cisco Fabric Automation REST APIs for third-party applications enable you to programmatically control
Cisco Fabric Automation. The REST API supports the Power On Auto Provisioning (POAP), Auto Config,
and Cable Plan features. All REST API operations can also be performed using the DCNM GUI, because
DCNM uses these REST APIs to render the GUI.
Requirements
The table below outlines the requirements for this preconfigured demonstration.
Required: Laptop
Optional: Cisco AnyConnect
About This Solution
Virtual Extensible LAN (VXLAN) is an overlay technology for network virtualization. It provides a Layer 2
extension over a shared Layer 3 underlay infrastructure network by using MAC address in IP User Datagram
Protocol (MAC in IP/UDP) tunneling encapsulation. The purpose of obtaining a Layer 2 extension in the
overlay network is to overcome the limitations of physical server racks and geographical location boundaries
to achieve flexibility for workload placement within a data center or between different data centers.
The initial IETF VXLAN standards (RFC 7348) defined a multicast-based flood-and-learn VXLAN without
a control plane. It relies on data-based flood-and-learn behavior for remote VXLAN tunnel endpoint (VTEP)
peer discovery and remote end-host learning. The overlay broadcast, unknown unicast, and multicast traffic
is encapsulated into multicast VXLAN packets and transported to remote VTEP switches through the underlay
multicast forwarding. Flooding in such a deployment can present a challenge for the scalability of the solution.
The requirement to enable multicast capabilities in the underlay network also presents a challenge because
some organizations do not want to enable multicast in their data centers or WAN networks.
To overcome the limitations of the flood-and-learn VXLAN as defined in RFC 7348, organizations can use
Multiprotocol Border Gateway Protocol (MP-BGP) Ethernet Virtual Private Network (EVPN) as the control
plane for VXLAN. MP-BGP EVPN has been defined by IETF as the standards-based control plane for VXLAN
overlays. The MP-BGP EVPN control plane provides protocol-based VTEP peer discovery and end-host
reachability information distribution that allows more scalable VXLAN overlay network designs suitable for
private and public clouds. The MP-BGP EVPN control plane introduces a set of features that reduces or
eliminates traffic flooding in the overlay network and enables optimal forwarding for both east-west and
north-south traffic.
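To make the show run output reviewed later in this demonstration easier to read, the following is a minimal NX-OS sketch of the kind of VXLAN EVPN leaf configuration that this control plane implies. It is illustrative only; the VLAN, VNI, loopback, and neighbor values are placeholders, not the exact values that DCNM pushes in this demonstration.

nv overlay evpn
feature bgp
feature vn-segment-vlan-based
feature nv overlay
!
vlan 10
  vn-segment 30000                     ! map VLAN 10 to Layer 2 VNI 30000
!
interface nve1
  no shutdown
  host-reachability protocol bgp       ! learn remote hosts from MP-BGP EVPN instead of flood-and-learn
  source-interface loopback1
  member vni 30000
    ingress-replication protocol bgp   ! handle BUM traffic without underlay multicast
!
router bgp 65002
  neighbor 10.2.0.1 remote-as 65002    ! spine acting as route reflector (placeholder address)
    address-family l2vpn evpn
      send-community both
!
evpn
  vni 30000 l2
    rd auto
    route-target import auto
    route-target export auto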
The VXLAN EVPN Multi-Site feature is a solution for interconnecting two or more BGP-based Ethernet VPN
(EVPN) site fabrics in a scalable fashion over an IP-only network.
The Border Gateway (BG) is the node that interacts with nodes within a site and with nodes that are external
to the site. For example, in a leaf-spine data center fabric, it can be a leaf, a spine, or a separate device acting
as a gateway to interconnect the sites.
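On NX-OS, the Border Gateway role adds a small amount of Multi-Site configuration on top of a normal VTEP. The sketch below is a hedged illustration; the site ID, interfaces, loopback, and VNI are placeholder values rather than the configuration DCNM generates later in this demonstration.

evpn multisite border-gateway 2              ! site ID of this fabric
!
interface loopback100                        ! shared Multi-Site virtual IP
  ip address 10.100.100.2/32
!
interface nve1
  multisite border-gateway interface loopback100
  member vni 30000
    multisite ingress-replication            ! extend this Layer 2 VNI between sites
!
interface Ethernet1/1                        ! link toward the DCI router
  evpn multisite dci-tracking
!
interface Ethernet1/2                        ! link toward the fabric spine
  evpn multisite fabric-tracking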
Use cases
VXLAN EVPN Multi-Site architecture is a design for VXLAN BGP EVPN–based overlay networks. It allows
interconnection of multiple distinct VXLAN BGP EVPN fabrics or overlay domains, and it allows new
approaches to fabric scaling, compartmentalization, and DCI.
When you build one large data center fabric per location, various challenges related to operation and failure
containment exist. By building smaller compartments of fabrics, you improve the individual failure and
operation domains. Nevertheless, the complexity of interconnecting these various compartments precludes
the pervasive rollout of such concepts, specifically when Layer 2 and Layer 3 extension is required.
Figure 1: Compartmentalization Example
VXLAN EVPN Multi-Site architecture provides integrated interconnectivity that doesn’t require additional
technology for Layer 2 and Layer 3 extension. It thus offers the possibility of seamless extension between
compartments and fabrics. It also allows you to control what can be extended. In addition to defining which
VLAN or Virtual Routing and Forwarding (VRF) instance is extended, within the Layer 2 extensions you can
also control broadcast, unknown unicast, and multicast (BUM) traffic to limit the ripple effect of a failure in
one data center fabric.
When you build networks using the scale-up model, one device or component typically reaches the scale limit
before the overall network does. The scale-out approach offers an improvement for data center fabrics.
Nevertheless, a single data center fabric also has scale limits, and thus the need exists to scale out beyond a
single large data center fabric.
In addition to the option to scale out within a single fabric, with EVPN Multi-Site architecture you can scale
out in the next level of the hierarchy. Similarly, as you add more leaf nodes for capacity within a data center
fabric, in EVPN Multi-Site architecture you can add fabrics (sites) to horizontally scale the overall environment.
With this scale-out approach in EVPN Multi-Site architecture, in addition to increasing the scale, you can
contain the full-mesh adjacencies of VXLAN between the VXLAN tunnel endpoints (VTEPs) in a fabric.
Figure 2: Scale Example
EVPN Multi-Site architecture can also be used for DCI scenarios. As with the compartmentalization and
scale-out within a data center, EVPN Multi-Site architecture was built with DCI in mind. The overall
architecture allows single or multiple sites per data center to be positioned and interconnected with single or
multiple sites in a remote data center. With seamless and controlled Layer 2 and Layer 3 extension through
the use of VXLAN BGP EVPN within and between sites, the capabilities of VXLAN BGP EVPN itself have
been increased. The new functions related to network control, VTEP masking, and BUM traffic enforcement
are only some of the features that help make EVPN Multi-Site architecture the most efficient DCI technology.
Figure 3: Data Center Interconnect Example
Topology
This content includes preconfigured users and components to illustrate the scripted scenarios and features of
the solution. Most components are fully configurable with predefined administrative user accounts. You can
see the IP address and user account credentials to use to access a component by clicking the component icon
in the Topology menu of your active session and in the scenario steps that require their use.
Figure 4: dCloud Topology
Before Presenting
Cisco dCloud strongly recommends that you perform the tasks in this document before presenting in front of
a live audience. This will allow you to become familiar with the structure of the document and content.
Get Started
Follow the steps to schedule a session of the content and configure your presentation environment.
Procedure
Step 2 For best performance, connect to the workstation with Cisco AnyConnect VPN [Show Me How] and the local
RDP client on your laptop [Show Me How].
Workstation 1: 198.18.133.36, Username: dcloud\demouser, Password: C1sco12345
Note When the dCloud demo is first marked as available in the dCloud UI, scripts may continue to run
in the background on the demo Workstation configuring the demo components. This is indicated
by the presence of the Demo Initializing icon on the desktop. Allow these scripts to complete
before using the session.
Alternatively, you can use the Search Catalog box to search for the Instant Demo name.
Example:
Step 4 Double-click server-2 then, enter ifconfig and then, note the IP Address.
Example:
Step 5 Double-click server-3 then, enter ifconfig and then, note the IP Address.
Example:
Step 7 Get the addresses for the local server and the remote server.
a) Return to Leaf-2.
b) Enter show bgp l2vpn evpn.
Example:
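Besides the BGP table, a few other standard NX-OS show commands (not part of the scripted steps, but available on these switches) can help confirm that the overlay is working:

show nve peers                ! remote VTEPs discovered through BGP EVPN
show nve vni                  ! VNIs, their state, and replication mode
show l2route evpn mac all     ! MAC addresses learned via the EVPN control plane
show bgp l2vpn evpn summary   ! state of the EVPN BGP sessions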
Create Fabric
Procedure
Step 1 Double-click the Data Center Network Manager desktop shortcut and sign in with these credentials.
• Username = admin
• Password = C1sco12345
Example:
Step 3 In the Fabric Name field, enter VXLAN-EVPN-Brownfield-Site-2 and then, in the Fabric Template
drop-down, select Easy_Fabric_11_1.
Step 4 In the General tab, in the BGP ASN field, enter 65002.
Example:
Step 7 In the Resources tab, scroll down and change the first octet of the VRF Lite Subnet IP Range field from 10
to 20.
Example:
Step 8 In the Configuration Backup tab, check the Hourly Fabric Backup checkbox and then, click Save.
Example:
Add Switches
Procedure
Note If fabric discovery is not successful for all switches, go back, re-enter the Seed IP and
credentials, and run the discovery again.
Step 3 Check the select all checkbox and then, click Import into fabric.
Example:
Step 5 Right-click BGW-2 and then, select Set role > Border Gateway.
Example:
Step 6 Right-click Spine-2 and then, select Set role > Spine.
Example:
Another option is to reboot the switch by running the reload command. The switch takes about 5
minutes to restart; you can monitor its progress in the MTPuTTY session.
Note If the switches are still blue, refresh the page until they are all green. If the layout is different from
above, drag the switch to match the layout and click Save layout.
Confirm Connectivity
Procedure
Procedure
Step 7 In the Policy drop-down, select int_trunk_host_11_1 then click Save and then, click Deploy.
Example:
Step 9 Verify that the Mode of Leaf-2 and Leaf-3 is now trunk.
a) In the Device Name field, enter Leaf.
b) In the Name field, enter Ethernet1/6.
c) In the Mode field, enter trunk.
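If you prefer to confirm the change from the switch CLI instead of the DCNM Interfaces table, you can run the standard NX-OS commands below on Leaf-2 or Leaf-3 in the MTPuTTY session:

show interface ethernet 1/6 switchport        ! the operational mode should now be trunk
show running-config interface ethernet 1/6    ! shows the trunk configuration pushed by DCNM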
Importing VRFs
Procedure
Importing Networks
Procedure
Step 5 Click the multi-select checkbox and then, click-and-drag a rectangle around Leaf-2 and Leaf-3.
Example:
Create Fabric
Procedure
Step 1 Return to Data Center Network Manager > Control > Fabric Builder and then, click Create Fabric.
Step 2 In the Fabric Name field, enter VXLAN-EVPN-Greenfield-Site-1 and then, in the Fabric Template
drop-down, select Easy_Fabric_11_1.
Step 3 In the General tab, in the BGP ASN field, enter 65001.
Example:
Step 6 In the Configuration Backup tab, check the Hourly Fabric Backup checkbox and then, click Save.
Example:
Add Switches
Procedure
Step 3 Click the select all checkbox and then, click Import into fabric.
Example:
Step 6 Right-click BGW-1 and then, select Set role > Border Gateway.
Example:
Step 7 Right-click Spine-1 and then, select Set role > Spine.
Example:
Step 10 Click Side-by-side Comparison to see the difference between the Running and Expected configurations.
Step 11 Close the Preview Config window and then, click Deploy Config.
Step 12 When the deployment completes, click Close.
Step 3 Enter show run to review the configuration that has been pushed via DCNM.
Example:
Step 1 Return to Data Center Network Manager and then, click Control > Interfaces.
Example:
Step 3 In the Policy drop-down, select int_access_host_11_1 then, click Save and then, click Preview.
Step 4 Click Expected Config and then, when done viewing, close the Preview Configuration dialog.
Step 5 Click Deploy and then, click OK.
Example:
Create Fabric
Procedure
Add Switches
Procedure
Step 4 Click Save and Deploy and then, on completion, click the Preview Config link.
Example:
The topology screen shows that Site-1, Site-2, and DCI-Router are deployed.
Configure Multi-Site
In this section, we create a multi-site fabric using the multi-site (MSD) template in DCNM and then import all
three member fabrics using the DCNM GUI. A ping between servers in two different sites confirms that Multi-Site
is configured successfully.
Create Fabric
Procedure
Step 1 Return to Data Center Network Manager and then, click Fabric Builder > Create Fabric.
a) In the Fabric Name field, enter MSD-1.
b) In the Fabric Template drop-down, select MSD_Fabric_11_1.
Step 2 Configure DCI.
a) In the DCI tab, in the Multi-Site Overlay IFC Deployment Method drop-down, select
Centralized_To_Route_Server.
b) In the Multi-Site Route Server List field, enter 100.100.100.100.
c) In the Multi-Site Route Server BGP ASN List field, enter 65003.
d) Click the Multi-Site Underlay IFC Auto Deployment Flag checkbox and then, click Save.
Example:
Move Fabrics
Procedure
Step 6 Click Save and Deploy and then, on completion, click the Preview Config link for BGW-1.
Example:
Step 7 Click Side-by-side Comparison to see the difference between the configurations and then, when done viewing,
close the Preview Configuration dialog.
Step 8 Click Deploy Config and then, when complete, click Close.
Step 9 Click Control > Networks.
Example:
Step 14 In the Network Attachment – Attach networks for given switch(es) dialog, click the Leaf-1 checkbox.
a) In the Interfaces column, for Leaf-1, click
Step 20 Close the Preview Configuration dialog and then, click Deploy.
When the topology chart items turn green, the configuration is deployed.
Confirm Connectivity
Procedure
VMM integration
In virtualized environments, any kind of troubleshooting starts with identifying the network attachment point
for the virtual machines. This means that a quick determination of the server, virtual switch, port group,
VLAN, associated network switch, and physical port is critical. This requires multiple touch points and
interactions between the server and the network administrator as well as reference to multiple tools (compute
orchestrator, compute manager, network manager, network controller, etc.).
In this scenario we will configure VMM integration within DCNM.
This allows you to visualize the vCenter-managed hosts and their leaf switch connections on the Topology
window. The visualization options include viewing only the attached physical hosts, only the VMs, or both.
When you select both, the topology all the way from the leaf switches to the VMs, including the virtual
switches, is displayed.
Cisco DCNM supports hosts running on UCS B-Series (chassis-based UCS) servers that sit behind a Fabric Interconnect.
You must enable CDP on the vNIC in Cisco UCSM to use this feature.
Procedure
Procedure
The Control > Management > Virtual Machine Manager window displays.
Step 2 Click the + icon to add a new VMware vSphere vCenter.
a) Enter Virtual Center Server: 198.18.133.30 with these credentials.
Username: [email protected]
Password: C1sco12345!
b) Click Add.
Example:
After initial discovery, the information that is received from the vCenter is appropriately organized and
displayed in the main Topology window.
Step 4 In the Show list, select Compute to enable the compute visibility.
By default, the Host checkbox is selected. This means that the topology shows the VMware vSphere ESXi
hosts (servers) that are attached to the network switches.
The following options are available in the Compute Visualization feature.
• Host
• All
• VM Only
Step 6 Hover the mouse over the connection line between Leaf-1, Leaf-2, Leaf-3 and the ESXi host.
You can see the port interface to which the leaf switches are connected and the ESXi host information.
Example:
When changing from the Host suboption to the All suboption, all the compute resources are expanded.
When All is selected, an expanded view of all the hosts, virtual switches, and virtual machines that are part
of the topology is displayed. If a VM is powered off, it is shown in red; otherwise, it is shown in green.
Step 8 Hover your mouse over the ESXi host, vSwitch, and virtual machine to see more information.
Example:
Instead of browsing through the large set of available information, to focus on the VM only, you can change
the All suboption to VM Only. Server-2 is connected with Leaf-2, Server-3 is connected with Leaf-3, and
Server-1 is connected with Leaf-1.
Step 9 Hover over the VM to see network adapter and MAC address information.
Example:
The Virtual Machine List allows you to view the complete list of virtual machines.
Step 10 Change the VM Only suboption back to Host and then, click VM List at the bottom.
Example:
Step 11 In the list of the VMs, click on the name of a VM to view additional information about that virtual machine.
You can also see the VLAN, vSwitch, Physical NIC, and Switch Interface information in the list.
Example:
Configure Endpoint Locator
A third interface is required when in-band management is used for a fabric via the eth1 interface. This ensures
that the management interface, used by Cisco DCNM for managing the devices, does not have any dependency
on the interface through which EPL BGP peering occurs.
After physical connectivity is established between Cisco DCNM and the fabric through a switch’s front-panel
interface, the configurations should be performed on the respective switches and Cisco DCNM.
After the BGP connectivity to the fabric is established via the BGP RR, DCNM receives BGP updates. These
are fed into a BigData DB.
For a VXLAN BGP EVPN based data center fabric, Endpoint Locator provides near real-time tracking of
every endpoint. Events such as an endpoint coming up, an endpoint going down, or an endpoint move are
now visible with a few simple clicks.
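Under the covers, enabling EPL results in DCNM establishing an additional iBGP peering, in the L2VPN EVPN address family, between its eth2 in-band address and the route reflector. The sketch below is only an illustration of what that peering looks like on the route reflector; the neighbor address is a placeholder, and in this demonstration DCNM provisions this configuration for you.

router bgp 65002
  neighbor 10.33.0.10 remote-as 65002   ! DCNM eth2 in-band address (placeholder)
    address-family l2vpn evpn
      send-community both
      route-reflector-client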
Note DCNM should still be open from previous scenarios. If it is not, double-click the DCNM shortcut on the
remote desktop and login with username admin and password C1sco12345.
Procedure
Step 1 Configure the Leaf-2 Eth1/3 interface, which is connected to eth2 of DCNM (the DCNM in-band connection
to the fabric).
a) Click Control > Interfaces.
b) In the SCOPE drop-down, make sure that VXLAN-EVPN-Brownfield-Site-2 is selected.
c) In the Device Name field, enter Leaf-2.
d) Click the Leaf-2 checkbox where Name = Ethernet1/3 and then, click Edit.
Example:
h) Click OK.
Example:
Note After a couple of minutes, the Endpoint Activity screen is populated by the recently configured
Endpoint Locator. The two endpoints are server-2 and server-3, which are located behind Leaf-2 and
Leaf-3. If you do not see both endpoints, ping the gateway 10.10.10.1 from server-2 and from server-3.
You may see only one endpoint because both hosts are in the same subnet and only one of them has
issued an ARP request; enabling suppress-arp or pinging the gateway from each server forces an ARP
request from both endpoints (see the configuration sketch after this note).
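For reference, the suppress-arp option mentioned in the note is a per-VNI setting under the NVE interface on NX-OS. A minimal sketch is shown below; the VNI number is a placeholder, and in this demonstration the setting is normally driven by the DCNM network template rather than typed at the CLI.

interface nve1
  member vni 30000
    suppress-arp          ! the leaf answers ARP locally from the EVPN host table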
Note If you have done the Scenario: Bulk Creation of Networks and VRFs, skip the following steps and continue
with the DCNM Restful API Documentation section.
Procedure
Step 7 In the Policy drop-down, select int_trunk_host_11_1 then, click Save and then, click Deploy.
Example:
Step 9 Verify that the Mode of Leaf-2 and Leaf-3 is now trunk.
a) In the Device Name field, enter Leaf.
Step 1 On the desktop of the workstation, right-click the DCNM APIs.txt file and then, select Edit with Notepad++.
Example:
Note This file contains the JSON configuration key-value pairs that you copy and paste in the following
steps.
Example:
Appendix A. Troubleshooting
Perform the procedure below if the steps in Import and Deploy Brownfield into DCNM fail (particularly the
ping between server-2 and server-3), indicating that the switches are either not responding or powered off.
Procedure
Step 1 On the demonstration workstation desktop, open a Chrome browser and a new tab.
Step 2 In the address bar, type in the IP address 198.18.133.33.
Step 3 Log in as root with the password C1sco12345.
Example:
Step 5 You can restart any of the VMs manually if you find an issue with that VM, or use the following to
restart all VMs.
Step 6 Monitor server status in vCenter until all of the switches and servers are back online.
Example:
Note In this demo we use virtual switches (not physical hardware). After the Fix My Demo process has
finished, it will take approximately 10 minutes more to load the virtual switch operating systems.
You can monitor progress by opening an MTPuTTY session to Leaf-2 and watching the load process.