3 Virtual Private Routed Network Service: 3.1 In This Chapter
Each Virtual Private Routed Network (VPRN) consists of a set of customer sites
connected to one or more PE routers. Each associated PE router maintains a
separate IP forwarding table for each VPRN. Additionally, the PE routers exchange
the routing information configured or learned from all customer sites via MP-BGP
peering. Each route exchanged via the MP-BGP protocol includes a Route
Distinguisher (RD), which identifies the VPRN association and handles the possibility
of IP address overlap.
The service provider uses BGP to exchange the routes of a particular VPN among
the PE routers that are attached to that VPN. This is done in a way which ensures
that routes from different VPNs remain distinct and separate, even if two VPNs have
an overlapping address space. The PE routers peer with locally connected CE
routers and exchange routes with other PE routers in order to provide end-to-end
connectivity between CEs belonging to a given VPN. Since the CE routers do not
peer with each other there is no overlay visible to the CEs.
When BGP distributes a VPN route it also distributes an MPLS label for that route.
On an SR series router, the method of allocating a label to a VPN route depends on
the VPRN label mode and the configuration of the VRF export policy. SR series
routers support three label allocation methods: label per VRF, label per next hop, and
label per prefix.
Before a customer data packet travels across the service provider's backbone, it is
encapsulated with the MPLS label that corresponds, in the customer's VPN, to the
route which best matches the packet's destination address. The MPLS packet is
further encapsulated with one or additional MPLS labels or GRE tunnel header so
that it gets tunneled across the backbone to the proper PE router.
[Figure: VPRN sites — CE routers in VPN Red and VPN Green attached to PE routers across an IP/MPLS cloud with P routers]
BGP was initially designed to distribute IPv4 routing information. Therefore, multi-
protocol extensions and the use of a VPN-IP address were created to extend BGP’s
ability to carry overlapping routing information. A VPN-IPv4 address is a 12-byte
value consisting of the 8-byte route distinguisher (RD) and the 4-byte IPv4 IP
address prefix. A VPN-IPv6 address is a 24-byte value consisting of the 8-byte RD
and 16-byte IPv6 address prefix. Service providers typically assign one or a small
number of RDs per VPN service network-wide.
The administrator field must contain an IP address (using private IP address space
is discouraged). The Assigned field contains a number assigned by the service
provider.
The administrator field must contain a 4-byte AS number (using private AS numbers
is discouraged). The Assigned field contains a number assigned by the service
provider.
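As a hedged illustration (the service ID, customer ID, and AS number 64496 are placeholder values, not taken from this guide), an RD and route target are typically assigned to a VPRN as follows:

configure
    service
        vprn 100 customer 1 create
            route-distinguisher 64496:100
            vrf-target target:64496:100
            no shutdown
        exit
    exit

With this RD, an IPv4 prefix such as 10.1.1.0/24 learned in the VPRN is carried in MP-BGP as the VPN-IPv4 prefix 64496:100:10.1.1.0/24.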
eiBGP load balancing allows a route to have multiple nexthops of different types,
using both IPv4 nexthops and MPLS LSPs simultaneously.
Figure 11 displays a basic topology that could use eiBGP load balancing. In this
topology, CE1 is dual-homed and thus reachable by two separate PE routers. CE2
(a site in the same VPRN) is also attached to PE1. With eiBGP load balancing, PE1
uses its own local IPv4 nexthop as well as the route advertised via MP-BGP by PE2.
[Figure 11: Basic eiBGP load balancing topology — CE1 dual-homed to PE1 and PE2 across an IP/MPLS network]
Another example, displayed in Figure 12, shows an extranet VPRN (VRF). The traffic
ingressing the PE that should be load balanced is part of a second VPRN, and the
route over which the load balancing is to occur is part of a separate VPRN instance
whose routes are leaked into the second VPRN by route policies.
Here, both routes can have a source protocol of VPN-IPv4 but one will still have an
IPv4 nexthop and the other can have a VPN-IPv4 nexthop pointing out a network
interface. Traffic will still be load balanced (if eiBGP is enabled) as if only a single
VRF was involved.
[Figure 12: eiBGP load balancing with an extranet VPRN (VRF) — CE1 dual-homed to PE1 and PE2 across an IP/MPLS network]
Traffic will be load balanced across both the IPv4 and VPN-IPv4 next hops. This
helps to use all available bandwidth to reach a dual-homed VPRN.
• Static Routes
• E-BGP
• RIP
• OSPF
• OSPF3
Each protocol provides controls to limit the number of routes learned from each CE
router.
Routing information learned from the CE-to-PE routing protocols and configured
static routes is injected into the associated local VPN routing/forwarding instance
(VRF). In the case of dynamic routing protocols, there may be protocol-specific route
policies that modify or reject certain routes before they are injected into the local
VRF.
VPN-IP routes imported into a VPRN have the protocol type bgp-vpn to denote that
they are VPRN routes. This can be used within the route policy match criteria.
Static routes are used within many IES and VPRN services. Unlike dynamic routing
protocols, there is no way to change the state of routes based on availability
information for the associated CPE. CPE connectivity check adds flexibility so that
unavailable destinations are removed from the VPRN routing tables dynamically,
minimizing wasted bandwidth. Figure 13 shows a setup with a directly connected
IP target and Figure 14 shows a setup with multiple hops to an IP target.
[Figure 13: CPE connectivity check with a directly connected IP target (CPE on the 10.1.1.0/31 link to VPRN A / Service A)]
[Figure 14: CPE connectivity check with multiple hops to the IP target (CPE on 10.2.2.0/24 reached across a management network)]
The availability of the far-end static route is monitored through periodic polling. The
polling period is configurable. If a specified number of sequential polls fails, the
static route is marked as inactive.
Either an ICMP ping or a unicast ARP mechanism can be used to test the connectivity.
ICMP ping is preferred.
If the connectivity check fails and the static route is deactivated, the router will
continue to send polls and re-activate any routes that are restored.
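The following sketch shows a static route protected by a CPE connectivity check; the prefix, next hop, target address, and timer values are placeholders, and the exact cpe-check parameters should be confirmed against the CLI reference:

configure
    service
        vprn 1
            static-route-entry 192.0.2.0/24
                next-hop 10.1.1.2
                    cpe-check 10.2.2.254 interval 5 drop-count 3 log
                    no shutdown
                exit
            exit
        exit
    exit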
The Route Target membership information is carried using MP-BGP, using an AFI
value of 1 and SAFI value of 132. In order for two routers to exchange RT
membership NLRI they must advertise the corresponding AFI/SAFI to each other
during capability negotiation. The use of MP-BGP means RT membership NLRI are
propagated, loop-free, within an AS and between ASes using well-known BGP route
selection and advertisement rules.
ORF can also be used for RT-based route filtering, but ORF messages have a limited
scope of distribution (to direct peers) and therefore do not automatically create
pruned inter-cluster and inter-AS route distribution trees.
RT Constraint is supported only by the base router BGP instance. When the family
command at the BGP router group or neighbor CLI context includes the route-target
keyword, the RT Constraint capability is negotiated with the associated set of EBGP
and IBGP peers.
ORF is mutually exclusive with RT Constraint on a particular BGP session. The CLI
will not attempt to block this configuration, but if both capabilities are enabled on a
session, the ORF capability will not be included in the OPEN message sent to the
peer.
When the base router has one or more RTC peers (BGP peers with which the RT
Constraint capability has been successfully negotiated), one RTC route is created for
each RT extended community imported into a locally-configured L2 VPN or L3 VPN
service. These imported route targets are configured in the following contexts:
• config>service>vprn
• config>service>vprn>mvpn
By default, these RTC routes are automatically advertised to all RTC peers, without
the need for an export policy to explicitly “accept” them. Each RTC route has a prefix,
a prefix length and path attributes. The prefix value is the concatenation of the origin
AS (a 4 byte value representing the 2- or 4-octet AS of the originating router, as
configured using the config>router>autonomous-system command) and 0 or 16-
64 bits of a route target extended community encoded in one of the following formats:
2-octet AS specific extended community, IPv4 address specific extended
community, or 4-octet AS specific extended community.
A router may be configured to send the default RTC route to any RTC peer. This is
done using the new default-route-target group/neighbor CLI command. The default
RTC route is a special type of RTC route that has zero prefix length. Sending the
default RTC route to a peer conveys a request to receive all VPN routes (regardless
of route target extended community) from that peer. The default RTC route is
typically advertised by a route reflector to its clients. The advertisement of the default
RTC route to a peer does not suppress other more specific RTC routes from being
sent to that peer.
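As a sketch (the group name is a placeholder), RT Constraint is enabled by adding the route-target keyword to the family command; a route reflector can additionally send the default RTC route to its clients with the default-route-target command described above:

configure
    router
        bgp
            group "rr-clients"
                family vpn-ipv4 vpn-ipv6 route-target
                default-route-target
            exit
        exit
    exit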
All received RTC routes that are deemed valid are stored in the RIB-IN. An RTC route
is considered invalid and treated as withdrawn, if any of the following applies:
If multiple RTC routes are received for the same prefix value then standard BGP best
path selection procedures are used to determine the best of these routes.
The best RTC route per prefix is re-advertised to RTC peers based on the following
rules:
• The best path for a default RTC route (prefix-length 0, origin AS only with prefix-
length 32, or origin AS plus 16 bits of an RT type with prefix-length 48) is never
propagated to another peer.
• A PE with only IBGP RTC peers that is neither a route reflector or an ASBR does
not re-advertise the best RTC route to any RTC peer due to standard IBGP split
horizon rules.
• A route reflector that receives its best RTC route for a prefix from a client peer
re-advertises that route (subject to export policies) to all of its client and non-
client IBGP peers (including the originator), per standard RR operation. When
the route is re-advertised to client peers, the RR (i) sets the ORIGINATOR_ID
to its own router ID and (ii) modifies the NEXT_HOP to be its local address for
the sessions (for example, system IP).
• A route reflector that receives its best RTC route for a prefix from a non-client
peer re-advertises that route (subject to export policies) to all of its client peers,
per standard RR operation. If the RR has a non-best path for the prefix from any
of its clients, it advertises the best of the client-advertised paths to all non-client
peers.
• An ASBR that is neither a PE nor a route reflector that receives its best RTC
route for a prefix from an IBGP peer re-advertises that route (subject to export
policies) to its EBGP peers. It modifies the NEXT_HOP and AS_PATH of the re-
advertised route per standard BGP rules. No aggregation of RTC routes is
supported.
• An ASBR that is neither a PE nor a route reflector that receives its best RTC
route for a prefix from an EBGP peer re-advertises that route (subject to export
policies) to its EBGP and IBGP peers. When re-advertised routes are sent to
EBGP peers, the ABSR modifies the NEXT_HOP and AS_PATH per standard
BGP rules. No aggregation of RTC routes is supported.
Note: These advertisement rules do not handle hierarchical RR topologies properly. This is
a limitation of the current RT constraint standard.
• When the session comes up, the advertisement of the VPN routes is delayed for
a short while to allow RTC routes to be received from the peer.
• After the initial delay, the received RTC routes are analyzed and acted upon. If
S1 is the set of routes previously advertised to the peer and S2 is the set of
routes that should be advertised based on the most recent received RTC routes
then:
− The set of routes in S1 but not in S2 should be withdrawn immediately (subject to MRAI).
− The set of routes in S2 but not in S1 should be advertised immediately (subject to MRAI).
• If a default RTC route is received from a peer P1, the VPN routes that are
advertised to P1 are those that:
a. are eligible for advertisement to P1 per BGP route advertisement rules AND
b. have not been rejected by manually configured export policies AND
c. have not been advertised to the peer
Note: This applies whether or not P1 advertised the best route for the default RTC prefix.
If the best RTC route for a prefix A (origin-AS = A1, RT = A2/n, n > 48) is
received from an IBGP peer I1 in autonomous system B, the VPN routes
that are advertised to I1 are those that:
a. are eligible for advertisement to I1 per BGP route advertisement rules AND
b. have not been rejected by manually configured export policies AND
c. carry at least one route target extended community with value A2 in the n most significant bits AND
d. have not been advertised to the peer
Note: This applies whether or not I1 advertised the best route for A.
If the best RTC route for a prefix A (origin-AS = A1, RT = A2/n, n > 48) is
received from an EBGP peer E1, the VPN routes that are advertised to E1
are those that:
a. are eligible for advertisement to E1 per BGP route advertisement rules
AND
b. have not been rejected by manually configured export policies AND
c. carry at least one route target extended community with value A2 in the
n most significant bits AND
d. have not been advertised to the peer
BGP fast reroute information specific to the base router BGP context is described in
the BGP Fast Reroute section of the Unicast Routing Protocols Guide.
Packet type | Primary route | Backup route | Supported
IPv4 (ingress PE) | IPv4 route with next-hop A resolved by an IPv4 route | IPv4 route with next-hop B resolved by an IPv4 route | Yes
IPv4 (ingress PE) | VPN-IPv4 route with next-hop A resolved by a GRE, LDP, RSVP or BGP tunnel | VPN-IPv4 route with next-hop B resolved by a GRE, LDP, RSVP or BGP tunnel | Yes
MPLS (egress PE) | IPv4 route with next-hop A resolved by an IPv4 route | IPv4 route with next-hop B resolved by an IPv4 route | Yes
MPLS (egress PE) | IPv4 route with next-hop A resolved by an IPv4 route | VPN-IPv4 route* with next-hop B resolved by a GRE, LDP, RSVP or BGP tunnel | Yes
IPv6 (ingress PE) | IPv6 route with next-hop A resolved by an IPv6 route | IPv6 route with next-hop B resolved by an IPv6 route | Yes
IPv6 (ingress PE) | VPN-IPv6 route with next-hop A resolved by a GRE, LDP, RSVP or BGP tunnel | VPN-IPv6 route with next-hop B resolved by a GRE, LDP, RSVP or BGP tunnel | Yes
MPLS (egress PE) | IPv6 route with next-hop A resolved by an IPv6 route | IPv6 route with next-hop B resolved by an IPv6 route | Yes
In a VPRN context, BGP fast reroute is optional and must be enabled. Fast reroute
can be applied to all IPv4 prefixes, all IPv6 prefixes, all IPv4 and IPv6 prefixes, or to
a specific set of IPv4 and IPv6 prefixes.
If all IP prefixes require backup path protection, use a combination of the BGP
instance-level backup-path and VPRN-level enable-bgp-vpn-backup commands.
The VPRN BGP backup-path command enables BGP fast reroute for all IPv4
prefixes and/or all IPv6 prefixes that have a best path through a VPRN BGP peer.
The VPRN-level enable-bgp-vpn-backup command enables BGP fast reroute for
all IPv4 prefixes and/or all IPv6 prefixes that have a best path through a remote PE
peer.
If only some IP prefixes require backup path protection, use route policies to apply
the install-backup-path action to the best paths of the IP prefixes requiring
protection. See BGP Fast Reroute section of the Unicast Routing Protocols Guide for
more information.
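A hedged sketch combining the commands named above (the service ID and address-family keyword are illustrative):

configure
    service
        vprn 100
            enable-bgp-vpn-backup ipv4
            bgp
                backup-path ipv4
            exit
        exit
    exit

To protect only selected prefixes, the install-backup-path action is instead applied in a route policy entry matching those prefixes, as described above.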
By default, a PE router will not advertise, to other PE routers, a BGP route that it has
learned from a connected CE device if that route is not its active route for the
destination in the route table. If the multi-homing scenario calls for all traffic destined
for an IP prefix to be carried over a preferred primary path (passing through PE1-CE1
for example), then all other PE routers (PE2, PE3, and so on) will have that VPN route
as their active route for the destination, and they will not be able to advertise their own
routes for the same IP prefix.
• IP Interfaces
• Subscriber Interfaces
• SAPs
  − Encapsulations
  − QoS Policies
  − Filter Policies
• CE to PE Routing Protocols
  − PE to PE Tunneling Mechanisms
  − Per VRF Route Limiting
  − Using OSPF in IP-VPNs
• Spoke SDPs
  − Carrier Supporting Carrier (CsC)
• Multicast in IP-VPN Applications
3.3.1 IP Interfaces
VPRN customer IP interfaces can be configured with most of the same options found
on the core IP interfaces. The advanced configuration options supported are:
• VRRP
• Cflowd
• Secondary IP addresses
• ICMP Options
This section discusses QPPB as it applies to VPRN, IES, and router interfaces. Refer
to the QoS Policy Propagation Using BGP (QPPB) section and the IP Router
Configuration section in the Router Configuration Guide.
The QoS Policy Propagation Using BGP (QPPB) feature applies only to the
7450 ESS and 7750 SR.
QoS policy propagation using BGP (QPPB) is a feature that allows a route to be
installed in the routing table with a forwarding-class and priority so that packets
matching the route can receive the associated QoS. The forwarding-class and
priority associated with a BGP route are set using BGP import route policies. In the
industry this feature is known as QPPB, even though the feature name refers to BGP
specifically; on SR OS, QPPB is supported for BGP (IPv4, IPv6, VPN-IPv4,
VPN-IPv6), RIP, and static routes.
While SAP ingress and network QoS policies can achieve the same end result as
QPPB (assigning a packet arriving on a particular IP interface to a specific
forwarding-class and priority/profile based on the source or destination IP address
of the packet), the effort involved in creating the QoS policies, keeping them
up to date, and applying them across many nodes is much greater than with QPPB.
In a typical application of QPPB, a BGP route is advertised with a BGP community
attribute that conveys a particular QoS. Routers that receive the advertisement
accept the route into their routing table and set the forwarding-class and priority of
the route from the community attribute.
QPPB may also be used to request that traffic sourced from certain networks receive
appropriate QoS handling in downstream nodes that may span different
administrative domains. This can be achieved by advertising the source prefix with a
BGP community, as discussed above. However, in this case other approaches are
equally valid, such as marking the DSCP or other CoS fields based on source IP
address so that downstream domains can take action based on a common
understanding of the QoS treatment implied by different DSCP values.
In the above examples, coordination of QoS policies using QPPB could be between
a business customer and its IP VPN service provider, or between one service
provider and another.
There may be times when a network operator wants to provide differentiated service
to certain traffic flows within its network, and these traffic flows can be identified with
known routes. For example, the operator of an ISP network may want to give priority
to traffic originating in a particular ASN (the ASN of a content provider offering over-
the-top services to the ISP’s customers), following a certain AS_PATH, or destined
for a particular next-hop (remaining on-net vs. off-net).
Figure 15 shows an example of an ISP that has an agreement with the content
provider managing AS300 to provide traffic sourced and terminating within AS300
with differentiated service appropriate to the content being transported. In this
example we presume that ASBR1 and ASBR2 mark the DSCP of packets
terminating and sourced, respectively, in AS300 so that other nodes within the ISP’s
network do not need to rely on QPPB to determine the correct forwarding-class to
use for the traffic. The DSCP or other COS markings could be left unchanged in the
ISP’s network and QPPB used on every node.
[Figure 15: QPPB example topology — ISP network (PE 1, P, ASBR 1, ASBR 2) with a peer AS 200 and the content provider AS 300]
3.3.1.5 QPPB
There are two main aspects of the QPPB feature on the 7450 ESS and 7750 SR:
• the ability to associate a forwarding-class and priority with certain routes in the
routing table, and
• the ability to classify an IP packet arriving on a particular IP interface to the
forwarding-class and priority associated with the route that best matches the
packet.
This feature uses a command in the route-policy hierarchy to set the forwarding class
and optionally the priority associated with routes accepted by a route-policy entry.
The command has the following structure:
config>router>policy-options
    begin
    community gold members 300:100
    policy-statement qppb_policy
        entry 10
            from
                protocol bgp
                community gold
            exit
            action accept
                fc h1 priority high
            exit
        exit
    exit
    commit
The fc command is supported with all existing from and to match conditions in a route
policy entry and with any action other than reject: it is supported with the next-entry,
next-policy, and accept actions. If a next-entry or next-policy action results in multiple
matching entries, the last entry with a QPPB action determines the forwarding
class and priority.
A route policy that includes the fc command in one or more entries can be used in
any import or export policy but the fc command has no effect except in the following
types of policies:
As evident from above, QPPB route policies support routes learned from RIP and
BGP neighbors of a VPRN as well as routes learned from RIP and BGP neighbors
of the base/global routing instance.
QPPB is supported for BGP routes belonging to any of the address families listed
below:
A VPN-IP route may match both a VRF import policy entry and a BGP import policy
entry (if vpn-apply-import is configured in the base router BGP instance). In this case
the VRF import policy is applied first and then the BGP import policy, so the QPPB
QoS is based on the BGP import policy entry.
This feature also introduces the ability to associate a forwarding-class and optionally
priority with IPv4 and IPv6 static routes. This is achieved by specifying the
forwarding-class within the static-route-entry next-hop or indirect context.
Priority is optional when specifying the forwarding class of a static route, but once
configured it can only be deleted and returned to unspecified by deleting the entire
static route.
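A hedged sketch of this configuration (the prefix, next hop, and FC value are placeholders, and the exact keyword placement under the next-hop context should be confirmed against the CLI reference):

configure
    service
        vprn 100
            static-route-entry 203.0.113.0/24
                next-hop 10.1.1.2
                    forwarding-class ef priority high
                    no shutdown
                exit
            exit
        exit
    exit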
The following commands are enhanced to show the forwarding-class and priority
associated with the displayed routes:
Similarly, when the qos-route-lookup command with the source parameter is applied
to an IP interface and the source address of an incoming IP packet matches a route
with QoS information, the packet is classified to the fc and priority associated with that
route, overriding the fc and priority/profile determined from the SAP-ingress or
network QoS policy associated with the IP interface. If the source address of the
incoming packet matches a route with no QoS information, the fc and priority of the
packet remain as determined by the SAP-ingress or network QoS policy.
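For example (the service ID and interface name are placeholders), source-based QPPB classification is enabled per interface with the qos-route-lookup command:

configure
    service
        vprn 100
            interface "to-ce1"
                qos-route-lookup source
            exit
        exit
    exit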
Currently, QPPB is not supported for ingress MPLS traffic on network interfaces or
on CsC PE’-CE’ interfaces (config>service>vprn>nw-if).
When ECMP is enabled some routes may have multiple equal-cost next-hops in the
forwarding table. When an IP packet matches such a route the next-hop selection is
typically based on a hash algorithm that tries to load balance traffic across all the
next-hops while keeping all packets of a given flow on the same path. The QPPB
configuration model described in Associating an FC and Priority with a Route allows
different QoS information to be associated with the different ECMP next-hops of a
route. The forwarding-class and priority of a packet matching an ECMP route is
based on the particular next-hop used to forward the packet.
When BGP FRR is enabled some BGP routes may have a backup next-hop in the
forwarding table in addition to the one or more primary next-hops representing the
equal-cost best paths allowed by the ECMP/multipath configuration. When an IP
packet matches such a route a reachable primary next-hop is selected (based on the
hash result) but if all the primary next-hops are unreachable then the backup next-
hop is used. The QPPB configuration model described in Associating an FC and
Priority with a Route allows the forwarding-class and priority associated with the
backup path to be different from the QoS characteristics of the equal-cost best paths.
The forwarding class and priority of a packet forwarded on the backup path is based on the backup route information.
When an IPv4 or IPv6 packet with destination address X arrives on an interface with
both QPPB and policy-based-routing enabled:
Source-address based QPPB is not supported on any SAP or spoke SDP interface
of a VPRN configured with the grt-lookup command.
When QPPB is enabled on a SAP IP interface the forwarding class of a packet may
change from fc1, the original fc determined by the SAP ingress QoS policy to fc2,
the new fc determined by QPPB. In the ingress datapath SAP ingress QoS policies
are applied in the first P chip and route lookup/QPPB occurs in the second P chip.
This has the implications listed below:
• Ingress remarking (based on profile state) is always based on the original fc (fc1)
and sub-class (if defined).
• The profile state of a SAP ingress packet that matches a QPPB route depends
on the configuration of fc2 only. If the de-1-out-profile flag is enabled in fc2 and
fc2 is not mapped to a priority mode queue then the packet will be marked out
of profile if its DE bit = 1. If the profile state of fc2 is explicitly configured (in or
out) and fc2 is not mapped to a priority mode queue then the packet is assigned
this profile state. In both cases there is no consideration of whether or not fc1
was mapped to a priority mode queue.
• The priority of a SAP ingress packet that matches a QPPB route depends on
several factors. If the de-1-out-profile flag is enabled in fc2 and the DE bit is set
in the packet then priority will be low regardless of the QPPB priority or fc2
mapping to profile mode queue, priority mode queue or policer. If fc2 is
associated with a profile mode queue then the packet priority will be based on
the explicitly configured profile state of fc2 (in profile = high, out profile = low,
undefined = high), regardless of the QPPB priority or fc1 configuration. If fc2 is
associated with a priority mode queue or policer then the packet priority will be
based on QPPB (unless DE=1), but if no priority information is associated with
the route then the packet priority will be based on the configuration of fc1 (if fc1
mapped to a priority mode queue then it is based on DSCP/IP prec/802.1p and
if fc1 mapped to a profile mode queue then it is based on the profile state of fc1).
fc1 mapped to | fc2 mapped to | Profile | Priority | FC | Ingress remarking
Profile mode queue | Profile mode queue | From new base FC unless overridden by DE=1 | From QPPB, unless packet is marked in or out of profile, in which case follows profile. Default is high priority. | From new base FC | From original FC and sub-class
Priority mode queue | Priority mode queue | Ignored | If DE=1 override then low, otherwise from QPPB. If no DEI or QPPB overrides then from original dot1p/exp/DSCP mapping or policy default. | From new base FC | From original FC and sub-class
Policer | Policer | From new base FC unless overridden by DE=1 | If DE=1 override then low, otherwise from QPPB. If no DEI or QPPB overrides then from original dot1p/exp/DSCP mapping or policy default. | From new base FC | From original FC and sub-class
Policer | Priority mode queue | Ignored | If DE=1 override then low, otherwise from QPPB. If no DEI or QPPB overrides then from original dot1p/exp/DSCP mapping or policy default. | From new base FC | From original FC and sub-class
Profile mode queue | Priority mode queue | Ignored | If DE=1 override then low, otherwise from QPPB. If no DEI or QPPB overrides then follows original FC's profile mode rules. | From new base FC | From original FC and sub-class
Priority mode queue | Profile mode queue | From new base FC unless overridden by DE=1 | From QPPB, unless packet is marked in or out of profile, in which case follows profile. Default is high priority. | From new base FC | From original FC and sub-class
Profile mode queue | Policer | From new base FC unless overridden by DE=1 | If DE=1 override then low, otherwise from QPPB. If no DEI or QPPB overrides then follows original FC's profile mode rules. | From new base FC | From original FC and sub-class
Policer | Profile mode queue | From new base FC unless overridden by DE=1 | From QPPB, unless packet is marked in or out of profile, in which case follows profile. Default is high priority. | From new base FC | From original FC and sub-class
This feature introduces a generic operational group object which associates different
service endpoints (pseudowires and SAPs) located in the same or in different service
instances. The operational group status is derived from the status of the individual
components using certain rules specific to the application using the concept. A
number of other service entities, the monitoring objects, can be configured to monitor
the operational group status and to perform certain actions as a result of status
transitions. For example, if the operational group goes down, the monitoring objects
will be brought down.
This concept is used by an IPv4 VPRN interface to affect the operational state of the
IP interface monitoring the operational group. Individual SAP and spoke SDPs are
supported as monitoring objects.
The status of the operational group (oper-group) is dictated by the status of one or
more members according to the following rule:
• The oper-group goes down if all the objects in the oper-group go down. The
oper-group comes up if at least one of the components is up.
The simple configuration below shows the oper-group g1, the VPLS SAP that is
mapped to it, and the IP interfaces in VPRN service 2001 monitoring the oper-group
g1. This example uses an R-VPLS context. The VPLS instance includes the allow-
ip-int-bind command and the service-name v1. The VPRN interface links to the VPLS
using the vpls v1 option. All commands are under the configuration service hierarchy.
oper-group g1 create
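The remainder of the example, continuing from the oper-group creation above, might look as follows (the VPLS service ID, SAP ID, IP address, and interface name are hypothetical placeholders; the placement of monitor-oper-group under the VPRN interface is assumed from the description above):

exit
vpls 1 customer 1 create
    allow-ip-int-bind
    service-name "v1"
    sap 1/1/3:2001 create
        oper-group "g1"
    exit
    no shutdown
exit
vprn 2001 customer 1 create
    interface "to-access" create
        address 10.1.1.1/24
        vpls "v1"
        monitor-oper-group "g1"
    exit
    no shutdown
exit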
Subscriber Interfaces apply only to the 7450 ESS and 7750 SR.
3.3.3 SAPs
3.3.3.1 Encapsulations
The following SAP encapsulations are supported on the 7750 SR and 7950 XRS
VPRN service:
• Ethernet null
• Ethernet dot1q
• SONET/SDH IPCP
• SONET/SDH ATM
• ATM - LLC SNAP or VC-MUX
• Cisco HDLC
• QinQ
• LAG
• Tunnel (IPSec or GRE)
• Frame Relay
The router supports ATM PVC service encapsulation for VPRN SAPs on the
7750 SR only. Both UNI and NNI cell formats are supported. The format is
configurable on a SONET/SDH path basis. A path maps to an ATM VC. All VCs on
a path must use the same cell format.
Pseudowire SAPs are supported on VPRN interfaces for the 7750 SR in the same
way as on IES interfaces. For details of pseudowire SAPs, see Pseudowire SAPs.
With VPRN services, service egress QoS policies function as with other services
where the class-based queues are created as defined in the policy.
Both Layer 2 and Layer 3 criteria can be used in the QoS policies for traffic
classification in a VPRN.
DSCP marking for internally generated control and management traffic is performed
by setting the DSCP value to be used for the given application. This can be configured
per routing instance; for example, OSPF packets can carry a different DSCP marking
for the base instance than for a VPRN service. IS-IS and ARP traffic is not an IP-
generated traffic type and is not configurable.
When an application is configured to use a specified DSCP value, the MPLS EXP and
Dot1p bits are marked in accordance with the network or access egress policy as it
applies to the logical interface on which the packet egresses.
The DSCP value can be set per application. This setting is forwarded to the egress
line card. The egress line card does not alter the configured DSCP value and marks
the LSP EXP and IEEE 802.1p (Dot1p) bits according to the appropriate network or
access QoS policy.
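As a hedged sketch (the application names and DSCP values are illustrative, and the sgt-qos application syntax should be confirmed against the CLI reference), the DSCP used for traffic that the router itself generates within a VPRN can be set per application:

configure
    service
        vprn 100
            sgt-qos
                application ospf dscp af41
                application telnet dscp af21
            exit
        exit
    exit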
ARP — — — Yes NC
SMTP Yes — — — AF
FTP Yes — — — AF
SNTP/NTP Yes — — — AF
RADIUS Yes — — — AF
Cflowd Yes — — — AF
default*0
*The default forwarding class mapping is used for all DSCP names/values for which
there is no explicit forwarding class mapping.
• The all parameter enables TTL propagation from the IP header into all labels in
the stack, for VPN-IPv4 and VPN-IPv6 packets forwarded in the context of all
VPRN services in the system.
• The vc-only parameter reverts to the default behavior by which the IP TTL is
propagated into the VC label but not to the transport labels in the stack. You can
explicitly set the default behavior by configuring the vc-only value.
• The none parameter disables the propagation of the IP TTL to all labels in the
stack, including the VC label. This is needed for a transparent operation of UDP
traceroute in VPRN inter-AS Option B such that the ingress and egress ASBR
nodes are not traced.
The user can override the global configuration within each VPRN instance using the
following commands:
The default behavior for a VPRN instance is to inherit the global configuration for the
same command. You can explicitly set the default behavior by configuring the inherit
value.
The commands do not apply when the VPRN packet is forwarded over GRE
transport tunnel.
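A hedged sketch of the global and per-VPRN settings discussed above; the exact leaf names (vprn-transit and vprn-local globally, transit and local within the VPRN) are assumptions based on the option values listed:

configure
    router
        ttl-propagate
            vprn-transit none
            vprn-local all
        exit
    exit
configure
    service
        vprn 100
            ttl-propagate
                transit inherit
                local vc-only
            exit
        exit
    exit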
If a packet is received in a VPRN context and a lookup is done in the Global Routing
Table (GRT), (when leaking to GRT is enabled for example), the behavior of the TTL
propagation is governed by the LSP shortcut configuration as follows:
When the matching route is an RFC 3107 label route or a 6PE route, it is governed by
the BGP label route configuration.
When a packet is received on one VPRN instance and is redirected using Policy
Based Routing (PBR) to be forwarded in another VPRN instance, the TTL
propagation is governed by the configuration of the outgoing VPRN instance.
Packets that are forwarded in different contexts can use different TTL propagation
over the same BGP tunnel, depending on the TTL configuration of each context. An
example of this might be VPRN using a BGP tunnel and an IPv4 packet forwarded
over a BGP label route of the same prefix as the tunnel.
• BGP
• Static
• RIP
• OSPF
The 7750 SR and 7950 XRS support multiple mechanisms to provide transport
tunnels for the forwarding of traffic between PE routers within the 2547bis network.
The 7750 SR and 7950 XRS VPRN implementation supports the use of:
The 7750 SR and 7950 XRS allow setting the maximum number of routes that can
be accepted in the VRF for a VPRN service. There are options to specify a
percentage threshold at which to generate an event indicating that the VRF table is
nearly full, and an option to either disable additional route learning when the table is
full or only generate an event.
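For example (the service ID and values are placeholders), a hedged sketch using the maximum-routes command, which can raise an event at a threshold percentage and either block further learning when the limit is reached or, with log-only, only generate the event:

configure
    service
        vprn 100
            maximum-routes 20000 log-only threshold 90
        exit
    exit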
This feature provides the ability to cross-connect traffic entering on a spoke SDP,
used for Layer 2 services (VLLs or VPLS), on to an IES or VPRN service. From a
logical point of view, the spoke SDP entering on a network port is cross-connected
to the Layer 3 service as if it entered by a service SAP. The main exception to this is
that traffic entering the Layer 3 service by a spoke SDP is handled with network QoS
policies, not access QoS policies.
[Figure 16: Spoke SDP termination to an IES or VPRN — spoke SDPs (VC:X, VC:Y) cross-connected to IES interfaces and a VPRN VRF in the base router instance]
When a SAP down or SDP binding down status message is received by the PE on
which the Ipipe or Ethernet spoke-SDP is terminated on an IES or VPRN interface,
the interface is brought down and all associated routes are withdrawn, in the same
way as when the spoke-SDP goes down locally. The same actions are taken when
the standby T-LDP status message is received by the IES/VPRN PE.
This feature can be used to provide redundant connectivity to a VPRN or IES from a
PE providing a VLL service, as shown in Figure 17.
[Figure 17: Redundant VLL connectivity into a VPRN or IES — CE1 attached to PE1 via ATM, FR, Ethernet or PPP, with Ipipe or Epipe PWs PW1 (active) to PE2 and PW2 (standby) to PE3 terminating on VRF/IES interfaces across IP/MPLS]
This feature can be used to provide redundant connectivity to a VPRN or IES from a
PE providing a VLL service, as shown in Figure 17, using either Epipe or Ipipe spoke-
SDPs. This feature is supported on the 7450 ESS and 7750 SR only.
In Figure 17, PE1 terminates two spoke-SDPs that are bound to one SAP connected
to CE1. PE1 chooses to forward traffic on one of the spoke SDPs (the active spoke-
SDP), while blocking traffic on the other spoke-SDP (the standby spoke-SDP) in the
transmit direction. PE2 and PE3 take any spoke-SDPs for which PW forwarding
standby has been signaled by PE1 to an operationally down state.
The 7450 ESS, 7750 SR, and 7950 XRS routers are expected to fulfill both functions
(VLL and VPRN/IES PE), while the 7705 SAR must be able to fulfill the VLL PE
function. Figure 18 illustrates the model for spoke-SDP redundancy into a VPRN or
IES.
[Figure 18: Spoke-SDP redundancy model into a VPRN or IES — a SAP on PE1 with a primary spoke-SDP to PE2 (VPRN/IES) and a secondary spoke-SDP to PE3 (IES/VRF)]
3.3.10 IP-VPNs
Some CPEs use the network uplink in PPPoE mode and perform the DHCP server
function for all ports on the LAN side. Instead of wasting one subnet for the point-to-
point uplink, the CPE uses the allocated subnet for the LAN side, as shown in Figure 19.
[Figure 19: CPE with a PPPoE uplink to BSR-1 (VPRN 1 and VPRN 2), using the allocated subnet 10.10.10.0/24 on the LAN side]
From a BNG perspective, the PPPoE host is allocated a subnet (instead of a /32) by
RADIUS, an external DHCP server, or the local user database. Locally, the host is
associated with a managed route. This managed route is a subset of the
subscriber-interface subnet (on a 7450 ESS or 7750 SR), and the subscriber-host
IP address is taken from the managed-route range. The negotiation between the BNG
and the CPE allows the CPE to be allocated both the IP address and the associated
subnet.
This feature should not be used for billing purposes. Existing queue counters are
designed for this purpose and provide very accurate per bit accounting records.
The first option, referred to as Option-A (Figure 20), is considered inherent in any
implementation. This method uses a back-to-back connection between separate
VPRN instances in each AS. As a result, each VPRN instance views the inter-AS
connection as an external interface to a remote VPRN customer site. The back-to-
back VRF connections between the ASBR nodes require individual sub-interfaces,
one per VRF.
[Figure 20: Inter-AS Option-A — back-to-back VRF connections over EBGP between ASBRs of the IP/MPLS backbones in AS65535 and AS65534]
The second option, referred to as Option-B (Figure 21), relies heavily on the AS
Boundary Routers (ASBRs) as the interface between the autonomous systems. This
approach enhances the scalability of the EBGP VRF-to-VRF solution by eliminating
the need for per-VPRN configuration on the ASBRs. However, it requires that the
ASBRs provide a control plane and forwarding plane connection between the
autonomous systems. The ASBRs are connected to the PE nodes in their local
autonomous system using IBGP, either directly or through route reflectors. This
means the ASBRs receive all the VPRN information and forward these VPRN
updates (VPN-IPv4) to all their EBGP peers (the remote ASBRs), using themselves
as the next hop. Each ASBR also changes the label associated with a route, which
means the ASBRs must maintain an associated mapping of labels received and
labels issued for those routes. The peer ASBRs in turn forward those updates to all
local IBGP peers.
[Figure 21: Inter-AS Option-B — ASBRs exchange VPNv4 updates and labels (L=V1, L=V2, L=V3) over EBGP between the IP/MPLS backbones in AS65535 and AS65534 (legend: L=Vx VPRN label, L=Px BGP next-hop label)]
This form of inter-AS VPRN performs all necessary mapping functions and the PE
routers do not need to perform any additional functions beyond those of a
non-inter-AS VPRN. On the 7750 SR, this form of inter-AS VPRN does not require
instances of the VPRN to be created on the ASBR, as in Option-A; as a result there
is less management overhead. This is also the most common form of inter-AS VPRN
used between different service providers, as all routes advertised between
autonomous systems can be controlled by route policies on the ASBRs.
The third option, referred to as Option-C (Figure 22), allows for a higher scale of
VPRNs across AS boundaries but also expands the trust model between ASNs. As
a result, this model is typically used within a single company that may have multiple
ASNs for various reasons. This model differs from Option-B in that in Option-B all
direct knowledge of the remote AS is contained and limited to the ASBR. As a result,
in Option-B the ASBR performs all necessary mapping functions and the PE routers
do not need to perform any additional functions beyond those of a non-inter-AS VPRN.
[Figure 22: Inter-AS Option-C — ASBRs exchange RFC 3107 labeled routes for the /32 PE addresses (L=P1, L=P2, L=P3) over EBGP, while VPNv4 updates carrying the VPRN label (L=V1) are exchanged between route reflectors in AS65535 and AS65534]
With Option-C, knowledge from the remote AS is distributed throughout the local AS.
This distribution allows for higher scalability but also requires all PEs and ASBRs
involved in the Inter-AS VPRNs to participate in the exchange of inter-AS routing
information.
In Option-C, the ASBRs distribute reachability information for the remote PEs' system
IP addresses only. This is done between the ASBRs by exchanging MP-EBGP labeled
routes, using RFC 3107, Carrying Label Information in BGP-4. Either an RSVP-TE or
LDP LSP can be selected to resolve the next hop for multi-hop EBGP peering using
the config>router>bgp>transport-tunnel CLI command.
• CSC-CE1, CSC-CE2 and CSC-CE3 (CE device from the point of view of the
backbone service provider)
• CSC-PE1, CSC-PE2 and CSC-PE3 (PE device of the backbone service
provider)
• ASBR1 and ASBR2 (ASBR of the backbone service provider)
[Figure: Carrier Supporting Carrier reference network — CSC-PE and CSC-CE devices and ASBRs of the backbone service provider]
3.3.14.1 Terminology
CSC-PE — a PE router belonging to the backbone service provider that supports one
or more CSC IP VPN services
The CSC-CE is a “CE” from the perspective of the backbone service provider. There
may be multiple CSC-CEs at a given customer service provider site and each one
may connect to multiple CSC-PE devices for resiliency/multi-homing purposes.
The CSC-PE is owned and managed by the backbone service provider and provides
CSC IP VPN service to connected CSC-CE devices. In many cases, the CSC-PE
also supports other services, including regular business IP VPN services. A single
CSC-PE may support multiple CSC IP VPN services. Each customer service
provider is allocated its own VRF within the CSC-PE; VRFs maintain routing and
forwarding separation and permit the use of overlapping IP addresses by different
customer service providers.
A backbone service provider may not have the geographic span to connect, with
reasonable cost, to every site of a customer service provider. In this case, multiple
backbone service providers may coordinate to provide an inter-AS CSC service.
Different inter-AS connectivity options are possible, depending on the trust
relationships between the different backbone service providers.
The CSC Connectivity Models apply to the 7750 SR and 7950 XRS only.
From the point of view of the CSC-PE, the IP/MPLS interface between the CSC-PE
and a CSC-CE has these characteristics:
1. The CSC interface is associated with one (and only one) VPRN service. Routes
with the CSC interface as next-hop are installed only in the routing table of the
associated VPRN.
2. The CSC interface supports EBGP or IBGP for exchanging labeled IPv4 routes
(RFC 3107). The BGP session may be established between the interface
addresses of the two routers or else between a loopback address of the CSC-
PE VRF and a loopback address of the CSC-CE. In the latter case, the BGP
next-hop is resolved by either a static or OSPFv2 route.
3. An MPLS packet received on a CSC interface is dropped if the top-most label
was not advertised over a BGP (RFC 3107) session associated with one of the
VPRN’s CSC interfaces.
4. The CSC interface supports ingress QoS classification based on 802.1p or
MPLS EXP. It is possible to configure a default FC and default profile for the
CSC interface.
5. The CSC interface supports QoS (re)marking for egress traffic. Policies to
remark 802.1p or MPLS EXP based on forwarding-class and profile are
configurable per CSC interface.
6. By associating a port-based egress queue group instance with a CSC interface,
the egress traffic can be scheduled/shaped with per-interface, per-forwarding-
class granularity.
7. By associating a forwarding-plane based ingress queue group instance with a
CSC interface, the ingress traffic can be policed to per-interface, per-forwarding-
class granularity.
8. Ingress and egress statistics and accounting are available per CSC interface.
The exact set of collected statistics depends on whether a queue-group is
associated with the CSC interface, the traffic direction (ingress vs. egress), and
the stats mode of the queue-group policers.
An Ethernet port or LAG with a CSC interface can be configured in hybrid mode or
network mode. The port or LAG supports null, dot1q or qinq encapsulation. To create
a CSC interface on a port or LAG in null mode, the following commands are used:
config>service>vprn>nw-if>port port-id
config>service>vprn>nw-if>lag lag-id
To create a CSC interface on a port or LAG in dot1q mode, the following commands
are used:
config>service>vprn>nw-if>port port-id:qtag1
config>service>vprn>nw-if>lag lag-id:qtag1
To create a CSC interface on a port or LAG in qinq mode, the following commands
are used:
config>service>vprn>nw-if>port port-id:qtag1.qtag2
config>service>vprn>nw-if>port port-id:qtag1.*
config>service>vprn>nw-if>lag lag-id:qtag1.qtag2
config>service>vprn>nw-if>lag lag-id:qtag1.*
A CSC interface supports the same capabilities (and supports the same commands)
as a base router network interface except it does not support:
• IPv6
• LDP
• RSVP
• Proxy ARP (local/remote)
• Network domain configuration
• DHCP
• Ethernet CFM
• Unnumbered interfaces
3.3.14.5 QoS
Egress
Step 7. Apply the network QoS policy created in step 4 to the CSC interface and
specify the name of the egress queue-group created in step 1 and the
specific instance defined in Step 3.
Ingress
3.3.14.6 MPLS
BGP-3107 is used as the label distribution protocol on the CSC interface. When BGP
in a CSC VPRN needs to distribute a label corresponding to a received VPN-IPv4
route, it takes the label from the global label space. The allocated label will not be re-
used for any other FEC regardless of the routing instance (base router or VPRN). If
a label L is advertised to the BGP peers of CSC VPRN A then a received packet with
label L as the top most label is only valid if received on an interface of VPRN A,
otherwise the packet is discarded.
To use BGP-3107 as the label distribution protocol on the CSC interface, add the
family label-ipv4 command to the family configuration at the instance, group, or
neighbor level. This causes the capability to send and receive labeled-IPv4 routes
{AFI=1, SAFI=4} to be negotiated with the CSC-CE peers.
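For example, a minimal sketch of a CSC VPRN BGP group toward a CSC-CE (the group name, neighbor address, and AS number are placeholders):

configure
    service
        vprn 100
            bgp
                group "csc-ce"
                    family label-ipv4
                    neighbor 10.10.10.2
                        peer-as 64510
                    exit
                exit
                no shutdown
            exit
        exit
    exit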
• GRE SDPs
Other aspects of VPRN configuration are the same in a CSC model as in a non-CSC
model.
Packets entering a local VRF interface can have route processing results derived
from the VPRN forwarding table or the GRT. The leaking and preferred lookup
results are configured on a per-VPRN basis. Configuration options can be general
(for example, any lookup miss in the VPRN forwarding table can be resolved in the
GRT) or specific (for example, specific routes should only be looked up in the GRT
and ignored in the VPRN). In order to provide operational simplicity and improve
streamlining, the CLI configuration is all contained within the context of the VPRN
service.
When a VPRN forwarding table consists of a default route or an aggregate route, the
customer may require the service provider to poke holes in those, or provide more
specific route resolution in the GRT. In this case, the service provider may configure
a static-route-entry and specify the GRT as the nexthop type.
The lookup result will prefer any successful lookup in the GRT that is equal to or more
specific than the static route, bypassing any successful lookup in the local VPRN.
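A hedged sketch of this arrangement (the prefix and service ID are placeholders; the grt-lookup and enable-grt context names are assumptions, while the grt next-hop type follows the description above):

configure
    service
        vprn 100
            grt-lookup
                enable-grt
            exit
            static-route-entry 198.51.100.0/24
                grt
                    no shutdown
                exit
            exit
        exit
    exit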
This feature and Unicast Reverse Path Forwarding (uRPF) are mutually exclusive.
When a VPRN service is configured with either of these functions, the other cannot
be enabled. Also, prefixes leaked from any VPRN should never conflict with prefixes
leaked from any other VPRN or with existing prefixes in the GRT. Prefixes should be
globally unique within the service provider network and, if they are propagated
outside of a single provider's network, they must be from the public IP space and
globally unique. Network Address Translation (NAT) is not supported as part of this
feature. The following types of routes will not be leaked from a VPRN into the Global
Routing Table (GRT):
• Aggregate routes
• Blackhole routes
• BGP VPN extranet routes
This feature enables IPv6 capability in addition to the existing IPv4 VPRN-to-GRT
Route Leaking feature.
The RIP metric can also be exchanged between PE routers, so that if a customer
network is dual-homed to separate PEs, the RIP metric learned from the CE router
can be used to choose the best route to the destination subnet. By using the learned
RIP metric to set the BGP MED attribute, remote PEs can choose the lowest MED
and, in turn, the PE with the lowest advertised RIP metric as the preferred egress point
for the VPRN. Figure 24 shows RIP metric propagation in VPRNs.
[Figure 24: RIP metric propagation in VPRNs — the RIP metric (2) learned from the CE for 128.1.1.0/24, plus the interface cost, is carried as the MP-BGP MED (2) between 7750-1 and 7750-2 in VPRN A]
Communication with external servers and peers are controlled using the same
commands as used for access via base routing (see “Network Time Protocol (NTP)”
in the Basic System Configuration Guide). Communication with external clients is
controlled via the VPRN routing configuration. The support for the external clients
can be as unicast or broadcast service. In addition, authentication keys for external
clients are configurable on a per-VPRN basis.
Only a single instance of NTP exists in the node; it is time-sourced from as many
as five NTP servers attached to the base or management network.
The NTP show command displays NTP servers and all known clients. Because NTP
is UDP-based only, no state is maintained. As a result, the show command output
only displays when the last message from the client was received.
For more detail on PTP see the Basic System Configuration Guide.
Label per VRF is the label allocation default. It is used when the label mode is
configured as VRF (or not configured) and the VRF export policies do not apply an
advertise-label per-prefix action. All routes exported from the VPRN with the per-VRF
label have the same label value. When the PE receives a terminating MPLS packet
with a per-VRF label, the label value selects the VRF context in which to perform a
forwarding table lookup and this lookup determines the outgoing interface (or set of
interfaces if ECMP applies).
Label per next hop is used when the exported route is not a local or aggregate route,
the label mode is configured as next-hop, and the VRF export policies do not apply
an advertise-label per-prefix override. It is also used when an inactive (backup path)
BGP route is exported by effect of the export-inactive-bgp command if there is no
advertise-label per-prefix override. All LPN-exported routes with the same primary
next hop have the same per-next-hop label value. When the PE receives a
terminating MPLS packet with a per-next-hop label, the label lookup selects the
outgoing interface for forwarding, without any FIB lookup that might cause problems
with overlapping prefixes. LPN does not support ECMP, BGP fast reroute, QPPB, or
policy accounting features that might otherwise apply.
The following points summarize the logic that determines the label allocation method
for an exported route:
• If the IP route is LOCAL, AGGREGATE, or BGP-VPN always use the VRF label.
• If the IP route is accepted by a VRF export policy with the advertise-label per-
prefix action, use LPP.
• If the IP (BGP) route is exported by the export-inactive-bgp command (VPRN
best external), use LPN.
• If the IP route is exported by a VPRN configured for label-mode next-hop, use
LPN.
• Else, use the per-VRF label.
To change the service label mode of the VPRN for the 7750 SR, the
config>service>vprn>label-mode {vrf | next-hop} command is used:
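For example (the service ID is a placeholder):

configure
    service
        vprn 100
            label-mode next-hop
        exit
    exit

As noted earlier, a VRF export policy that applies the advertise-label per-prefix action overrides this setting for the routes it accepts.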
The default mode (if the command is not present in the VPRN configuration) is vrf
meaning distribution of one service label for all routes of the VPRN. When a VPRN
X is configured with the label-mode next-hop command the service label that it
distributes with an IPv4 or IPv6 route that it exports depends on the type of route as
summarized in Table 33.
BGP route with a backup next-hop (BGP FRR) | Platform-wide unique label allocated to next-hop A (the lowest next-hop address of the primary next-hops)
The service label per next-hop mode has the following restrictions (applies only to
the 7750 SR):
• ECMP — The VPRN label mode should be set to VRF if distribution of traffic
across the multiple PE-CE next-hop interfaces of an ECMP route is desired.
• Hub and spoke VPN — The VPRN label mode should not be set to next-hop if
the operator does not want the hub-connected CE to be involved in the
forwarding of spoke-to-spoke traffic.
When flowspec rules are embedded into a user-defined filter policy, the insertion
point of the rules is configurable through the offset parameter of the embed-filter
command. The offset is 0 by default, meaning that the flowspec rules are evaluated
after all other rules. The sum of the ip-filter-max-size value parameter and the highest
offset in any IPv4 or IPv6 filter that embeds IPv4 or IPv6 flowspec rules from this
routing instance (excluding filters that embed at offset 65535) must not exceed
65535.
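For example, a hedged sketch (the filter ID, router instance, and offset value are placeholders) of embedding the IPv4 flowspec rules of a VPRN routing instance into a filter at a chosen offset:

configure
    filter
        ip-filter 100 create
            embed-filter flowspec router 500 offset 1000
        exit
    exit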
An ingress queue group must be configured and applied to the ingress network FP
where the traffic is received for the VPRN. All traffic received on that FP for any
binding in the VPRN (either automatically or statically configured) which is redirected
to a policer in the FP queue group (using fp-redirect-group in the network QoS
policy) will be controlled by that policer. As a result, the traffic from all such bindings
is treated as a single entity (per forwarding class) with regard to ingress QoS control.
Any fp-redirect-group multicast-policer, broadcast-policer or unknown-policer
commands in the network QoS policy are ignored for this traffic (IP multicast traffic
would use the ingress network queues or queue group related to the network
interface).
Ingress bandwidth control does not take into account the outer Ethernet header, the
MPLS labels/control word or GRE headers, or the FCS of the incoming frame.
The following command configures the association of the network QoS policy and the
FP queue group and instance within the network ingress of a VPRN:
configure
vprn
network
ingress
qos <network-policy-id> fp-redirect-group <queue-group-name>
instance <instance-id>
When this command is configured, it overrides the QoS applied to the related
network interfaces for unicast traffic arriving on bindings in that VPRN. The IP and
IPv6 criteria statements are not supported in the applied network QoS policy.
This is supported for all available transport tunnel types and is independent of the
label mode (vrf or next-hop) used within the VPRN. It is also supported for Carrier-
Supporting-Carrier VPRNs.
The ingress network interfaces on which the traffic is received must be on FP2- and
higher-based hardware.
[Figure: Multicast in IP-VPN — provider core routers (1 to 4), PE routers (5 to 10), and customer CE routers interconnected across the provider network]
The demarcation of these domains is in the PEs (routers 5 through 10). The PE
router participates in both the customer multicast domain and the provider's multicast
domain. The customer's CEs are limited to a multicast adjacency with the multicast
instance on the PE created specifically to support that customer's IP-VPN.
This way, customers are isolated from the provider's core multicast domain and from
other customer multicast domains, while the provider's core routers only participate
in the provider's multicast domain and are isolated from all customers' multicast
domains.
The PE for a given customer's multicast domain becomes adjacent to the CE routers
attached to that PE and to all other PEs that participate in the IP-VPN (or customer)
multicast domain. This is achieved by the PE encapsulating the customer multicast
control and data traffic inside the provider's multicast packets. These encapsulated
packets are forwarded only to the PE nodes that are attached to the same customer's
edge routers as the originating stream and are part of the same customer VPRN. This
prunes the distribution of the multicast control and data traffic to the PEs that
participate in the customer's multicast domain. The Rosen draft refers to this as the
default multicast domain; this multicast domain is associated with a unique multicast
group address within the provider's network.
PEs that do not require the (S,G) to be delivered keep state to allow them to join the
data MDT as required.
When the bandwidth requirement no longer exceeds the threshold, the PE stops
announcing the MDT join TLV. At this point the PEs using the data MDT will leave
this group and transmission resumes over the default MDT.
Sampling to check whether an (S,G) has exceeded the threshold occurs every ten
seconds. If the rate has exceeded the configured rate in that sample period, the data
MDT is created. If during that period the transmission rate has not exceeded the
configured threshold, the data MDT is not created. If the data MDT is active and the
transmission rate in the last sample period has not exceeded the configured rate,
the data MDT is torn down and the multicast stream resumes transmission over
the default MDT.
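As a sketch of where this threshold is configured, the data-threshold command below mirrors the configuration example shown later in this chapter; the service ID, group range, and threshold value are illustrative:
configure
    service
        vprn 100
            mvpn
                provider-tunnel
                    selective
                        data-threshold 230.0.0.0/8 1
                    exit
                exit
            exit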
An MVPN is defined by two sets of sites: sender sites set and receiver sites set, with
the following properties:
• Hosts within the sender sites set could originate multicast traffic for receivers in
the receiver sites set.
• Receivers not in the receiver sites set should not be able to receive this traffic.
• Hosts within the receiver sites set could receive multicast traffic originated by
any host in the sender sites set.
• Hosts within the receiver sites set should not be able to receive multicast traffic
originated by any host that is not in the sender sites set.
A site could be both in the sender sites set and receiver sites set, which implies that
hosts within such a site could both originate and receive multicast traffic. An extreme
case is when the sender sites set is the same as the receiver sites set, in which case
all sites could originate and receive multicast traffic from each other.
Sites within a given MVPN may be either within the same, or in different
organizations, which implies that an MVPN can be either an intranet or an extranet.
A given site may be in more than one MVPN, which implies that MVPNs may overlap.
Not all sites of a given MVPN have to be connected to the same service provider,
which implies that an MVPN can span multiple service providers.
The PE router uses route targets to specify MVPN route import and export. The route
target may be the same as the one used for the corresponding unicast VPN, or it may
be different. The PE router can specify separate import route targets for sender sites
and receiver sites for a given MVPN.
The route distinguisher (RD) that is used for the corresponding unicast VPN can also
be used for the MVPN.
Table 34 and Table 35 describe the supported configuration combinations. If the CLI
combination is not allowed, the system returns an error message. If the CLI
command is marked as “ignored” in the table, the configuration is not blocked, but its
value is ignored by the software.
MVPN implementation based on RFC 6037, Cisco Systems' Solution for Multicast in
MPLS/BGP IP VPNs, can support membership auto-discovery using BGP MDT-SAFI.
A CLI option is provided per MVPN instance to enable auto-discovery using either
BGP MDT-SAFI or NG-MVPN. Only PIM-MDT is supported with the BGP MDT-SAFI
method.
A PE that has sites of a given MVPN connected to it communicates the value of the
c-multicast import RT associated with the VRF of that MVPN on the PE to all other
PEs that have sites of that MVPN. To accomplish this, a PE that originates a (unicast)
route to VPN-IP addresses includes, in the BGP update message that carries this
route, the VRF Route Import extended community that has the value of the c-multicast
import RT of the VRF associated with the route, except if it is known a priori that none
of these addresses will act as multicast sources and/or RP, in which case the
(unicast) route need not carry the VRF Route Import extended community.
All c-multicast routes with the c-multicast import RT specific to the VRF must be
accepted. In this release, vrf-import and vrf-target policies do not apply to C-multicast
routes.
if (route-type == c-mcast-route)
    if (the route carries the c-multicast import RT of the VRF)
        accept;
    else
        drop;
else
    apply vrf-import/vrf-target policies;
BGP C-multicast signaling must be enabled for an MVPN instance to use P2MP
RSVP-TE or LDP as I-PMSI (equivalent to ‘Default MDT’, as defined in draft Rosen
MVPN) and S-PMSI (equivalent to ‘Data MDT’, as defined in draft Rosen MVPN).
By default, all PE nodes participating in an MVPN receive data traffic over I-PMSI.
Optionally, (C-*, C-*) wildcard S-PMSI can be used instead of I-PMSI. See
section 3.5.6.4 for more information. For efficient data traffic distribution, one or more
S-PMSIs can be used, in addition to the default PMSI, to send traffic to PE nodes that
have at least one active receiver connected to them. For more information, see
P2MP LSP S-PMSI.
Only one unique multicast flow is supported over each P2MP RSVP-TE or P2MP
LDP LSP S-PMSI. The number of S-PMSIs that can be initiated per MVPN instance
is restricted by the maximum-p2mp-spmsi CLI command. A P2MP LSP S-PMSI
cannot be used for more than one (S,G) stream (that is, for multiple multicast flows)
once the number of S-PMSIs per MVPN limit is reached. Multicast flows that cannot
switch to an S-PMSI remain on the I-PMSI.
An RSVP-TE LSP template must be defined (see the MPLS Guide) and bound to the
MVPN as an inclusive or selective provider tunnel (the S-PMSI is optional and is
used for efficient data distribution) to dynamically initiate P2MP LSPs to the leaf PE
nodes learned via NG-MVPN auto-discovery signaling. Each P2MP LSP S2L is
signaled based on parameters defined in the LSP template.
The multicast-traffic CLI command must be configured per LDP interface to enable
P2MP LDP setup. P2MP LDP must also be configured as an inclusive or selective
provider tunnel per MVPN (the S-PMSI is optional and is used for efficient data
distribution) to dynamically initiate P2MP LSPs to leaf PE nodes learned via
NG-MVPN auto-discovery signaling.
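The following sketch illustrates how provider tunnels could be bound to an MVPN. The auto-discovery, c-mcast-signaling, selective, and maximum-p2mp-spmsi keywords are those named in this chapter; the rsvp/lsp-template binding, the template names, and the numeric values are illustrative assumptions:
configure
    service
        vprn 100
            mvpn
                auto-discovery default
                c-mcast-signaling bgp
                provider-tunnel
                    inclusive
                        rsvp
                            lsp-template "p2mp-ipmsi"
                            no shutdown
                        exit
                    exit
                    selective
                        rsvp
                            lsp-template "p2mp-spmsi"
                            no shutdown
                        exit
                        maximum-p2mp-spmsi 16
                    exit
                exit
            exit
For P2MP mLDP, the mldp keyword is used under inclusive or selective instead of the rsvp context, as shown in the configuration example later in this chapter.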
Wildcard S-PMSI allows usage of selective tunnel as a default tunnel for a given
MVPN. By using wildcard S-PMSI, operators can avoid full mesh of LSPs between
MVPN PEs, reducing related signaling, state, and BW consumption for multicast
distribution (no traffic is sent to PEs without any receivers active on the default
PMSI).
[Figure legend: 1. Route Reflector; 2. Receiver PEs; 3. backup UMH; 4. active UMH]
Wildcard C-S and C-G values are encoded as defined in RFC6625: using zero for
Multicast Source Length and Multicast Group Length and omitting Multicast Source
and Multicast Group values respectively in MCAST_VPN_NLRI. For example, a (C-
*, C-*) will be advertised as: RD, 0x00, 0x00, and originating router's IP address.
Note: All SR OS routers with a BGP peering session to the PE with RFC 6625 support enabled must
be upgraded to SR OS Release 13.0 before the feature is enabled. Failure to do so will result
in the following processing on a router with a BGP peering session to an RFC 6625-enabled
PE:
• A BGP peer running Release 12.0.R4 or newer will accept the 0-length address but
will keep encoding a length of 4 with an all-zeros address.
• A BGP peer running Release 12.0.R3 or older will not accept the 0-length address and
will keep restarting the BGP session.
• UMH PEs advertise I-PMSI A-D routes without tunnel information present
(empty PTA), encoded as per RFC 6513/6514, prior to advertising the wildcard
S-PMSI. The I-PMSI needs to be signaled and installed on receiver PEs, because
the (C-*, C-*) S-PMSI is only installed when the first receiver is added; however,
no LSP is established for the I-PMSI.
• UMH PEs advertise S-PMSI A-D route whose NLRI contains (C-*, C-*) with
tunnel information encoded as per RFC 6625.
• Receiver PEs join wildcard S-PMSI if there are any receivers present.
Note: If UMH PE does not encode I-PMSI/S-PMSI A-D routes as per the above, or
advertises both I-PMSI and wildcard S-PMSI with the tunnel information present, no
interoperability can be achieved.
To ensure proper operation of BSR between PEs with (C-*, C-*) S-PMSI signaling,
SR OS implements two modes of operation for BSR.
In BSR unicast mode:
• BSR PDUs are sent/forwarded as unicast PDUs to neighbor PEs when the I-PMSI
with the pseudo-tunnel interface is installed.
• At every BSR interval timer, BSR unicast PDUs are sent to all I-PMSI interfaces
when the router is the elected BSR.
• BSMs received as multicast from C-instance interfaces are flooded as unicast in
the P-instance.
• All PEs process BSR PDUs received on the I-PMSI pseudo-tunnel interface as
unicast packets.
• BSR PDUs are not forwarded to the PE's management control interface.
• BSR unicast PDUs use the PE's system IP address as the destination IP and the
sender PE's system address as the source IP.
• The BSR unicast functionality ensures that no special state needs to be created
for BSR when (C-*, C-*) S-PMSI is enabled, which is beneficial considering the low
volume of BSR traffic.
Note:
• For bsr unicast, the base IPv4 system address (IPv4) or the mapped version of the
base IPv4 system address (IPv6) must be configured under the VPRN to ensure bsr
unicast messages can reach the VPRN.
• For bsr spmsi, the base IPv4/IPv6 system address must be configured under the VPRN
to ensure BSR S-PMSIs are established.
BSR S-PMSI mode can be enabled to allow interoperability with other vendors. In
this mode, a full mesh of S-PMSIs is created between all PEs in the MVPN to
exchange BSR PDUs. To operate as expected, BSR S-PMSI mode requires a
selective P-tunnel configuration. For IPv6 support (including dual-stack) of BSR
S-PMSI mode, the IPv6 default system interface address must be configured as a
loopback interface address under the VPRN and VPRN PIM contexts. Changing the
BSR signaling mode requires a VPRN shutdown.
Other key feature interactions and caveats for (C-*, C-*) include the following:
NG-MVPN supports P2MP RSVP-TE and P2MP LDP LSPs as the selective provider
multicast service interface (S-PMSI). The S-PMSI is used to avoid sending traffic to
PEs that participate in the multicast VPN but do not have any receivers for a given
C-multicast flow. This allows more bandwidth-efficient distribution of multicast traffic
over the provider network, especially for high-bandwidth multicast flows. S-PMSIs are
spawned dynamically based on configured triggers, as described in the S-PMSI
trigger thresholds section.
In MVPN, the head-end PE first discovers all the leaf PEs via I-PMSI A-D routes. It
then signals the P2MP LSP to all the leaf PEs using RSVP-TE. In the S-PMSI
scenario:
1. The head-end PE sends an S-PMSI A-D route for a specific C-flow with the Leaf
Information Required bit set.
2. The PEs that are interested in the C-flow respond with Leaf A-D routes.
3. The head-end PE then signals the P2MP LSP to all the leaf PEs using RSVP-
TE.
Also, because the receivers may come and go, the implementation supports
dynamically adding and pruning leaf nodes to and from the P2MP LSP.
When the tunnel type in the PMSI attribute is set to RSVP-TE P2MP LSP, the tunnel
identifier is <Extended Tunnel ID, Reserved, Tunnel ID, P2MP ID>, as carried in the
RSVP-TE P2MP LSP SESSION Object.
The PE can also learn via an A-D route that it needs to receive traffic on a particular
RSVP-TE P2MP LSP before the LSP is actually set up. In this case, the PE needs to
wait until the LSP is operational before it can modify its forwarding tables as directed
by the A-D route.
Because of the way that LDP normally works, mLDP P2MP LSPs are set up without
solicitation from the leaf PEs towards the head-end PE. The leaf PE discovers the
head-end PE via I-PMSI or S-PMSI A-D routes. The tunnel identifier carried in the
PMSI attribute is used as the P2MP FEC element. The tunnel identifier consists of
the head-end PE's address, along with a Generic LSP identifier value. The Generic
LSP identifier value is automatically generated by the head-end PE.
This feature provides a multicast signaling solution for IP-VPNs, allowing the
connection of IP multicast sources and receivers in C-instances, which are running
PIM multicast protocol using Rosen MVPN with BGP SAFI and P2MP mLDP in P-
instance. The solution dynamically maps each PIM multicast flow to a P2MP LDP
LSP on the source and receiver PEs.
The feature uses procedures defined in RFC 7246: Multipoint Label Distribution
Protocol In-Band Signaling in Virtual Routing and Forwarding (VRF) Table Context.
On the receiver PE, PIM signaling is dynamically mapped to the P2MP LDP tree
setup. On the source PE, signaling is handed back from the P2MP mLDP to the PIM.
Due to dynamic mapping of multicast IP flow to P2MP LSP, provisioning and
maintenance overhead is eliminated as multicast distribution services are added and
removed from the VRF. Per (C-S, C-G) IP multicast state is also removed from the
network, since P2MP LSPs are used to transport multicast flows.
[Figure 27: dynamic mLDP in-band signaling; Router A (receiver PE) and Router B (source PE) VRFs connected over IPv4 mLDP, with multicast sources behind Router B]
As illustrated in Figure 27, P2MP LDP LSP signaling is initiated from the receiver PE
that receives a PIM JOIN from a downstream router (Router A). To enable dynamic
multicast signaling, the p2mp-ldp-tree-join command must be configured on the PIM
customer-facing interfaces for the given VPRN of Router A. This enables handover of
multicast tree signaling from PIM to the P2MP LDP LSP. Being a leaf node of the
P2MP LDP LSP, Router A selects the upstream hop as the root node of the P2MP
LDP FEC, based on a routing table lookup. If an ECMP path is available for a given
route, the trees are equally balanced towards the multiple root nodes. The PIM joins
are carried to the transit source PE (Router B), where multicast tree signaling is
handed back to PIM and propagated upstream as a native-IP PIM JOIN toward the
C-instance multicast source.
The feature is supported with IPv4 and IPv6 PIM SSM and IPv4 mLDP. Directly
connected IGMP/MLD receivers are also supported, with PIM enabled on outgoing
interfaces and SSM mapping configured, if required.
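A minimal sketch of enabling this handover on a customer-facing PIM interface follows; the service ID and interface name are hypothetical, and p2mp-ldp-tree-join is the command referenced above:
configure
    service
        vprn 1
            pim
                interface "to-receiver-ce"
                    p2mp-ldp-tree-join
                exit
            exit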
• Inter-AS and IGP inter-area scenarios where the originating router is altered at
the ASBR and ABR respectively, (hence PIM has no way to create the LDP LSP
towards the source), are not supported.
• When dynamic mLDP signaling is deployed, a change in Route Distinguisher
(RD) on the Source PE is not acted upon for any (C-S, C-G)s until the receiver
PEs learn about the new RD (via BGP) and send explicit delete and create with
the new RD.
• Procedures of Section 2 of RFC 7246 for a case where UMH and the upstream
PE do not have the same IP address are not supported.
For PE nodes that host only multicast sources for a given VPN, operators can now
block those PEs, through configuration, from joining I-PMSIs from other PEs in this
MVPN. For PE nodes that host only multicast receivers for a given VPN, operators
can now block those PEs, through configuration, from setting up a local I-PMSI to
other PEs in this MVPN.
The mLDP and RSVP-TE S-PMSIs support two types of data thresholds: bandwidth-
driven and receiver-PE-driven. The threshold evaluation and bandwidth driven
threshold functionality are described in Use of Data MDTs.
When a (C-S, C-G) crosses a data threshold to create S-PMSI, instead of regular S-
PMSI signaling, sender PE originates S-PMSI explicit tracking procedures to detect
how many receiver PEs are interested in a given (C-S, C-G). When receiver PEs
receive an explicit tracking request, each receiver PE responds, indicating whether
there are multicast receivers present for that (C-S, C-G) on the given PE (PE is
interested in a given (C-S, C-G)). If the geo-redundancy feature is enabled, receiver
PEs do not respond to explicit tracking requests for suppressed sources and
therefore only Receiver PEs with an active join are counted against the configured
thresholds on Source PEs.
Upon the regular sampling and check interval, if the previous check interval had a
non-zero receiver PE count (a one-interval delay so that the S-PMSI is not triggered
prematurely) and the current count of receiver PEs interested in the given (C-S, C-G)
is non-zero and is less than the configured receiver PE add threshold, the Source PE
sets up an S-PMSI for this (C-S, C-G) following standard NG-MVPN procedures,
augmented with explicit tracking for the S-PMSI being established.
The data threshold timer should be set to ensure enough time is given for explicit
tracking to complete (for example, setting the timer to a value that is too low may
create the S-PMSI prematurely).
The explicit tracking procedures follow RFC6513/6514 with clarification and wildcard
S-PMSI explicit tracking extensions as described in IETF Draft: draft-dolganow-
l3vpn-expl-track-00.
• Upgrade all the PE nodes that need to support MVPN to the newer release.
• The old configuration is converted automatically to the new style.
• Node by node, enable the MCAST-VPN address family for BGP and enable
auto-discovery using BGP (see the sketch following this list).
• Change PE-to-PE signaling to BGP.
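A hedged sketch of the per-node steps referenced in the list above follows. The BGP group name is hypothetical, mvpn-ipv4 is assumed to be the address-family keyword that carries the MCAST-VPN routes, and the mvpn commands match the configuration example later in this chapter:
configure
    router
        bgp
            group "ibgp-pe"
                family vpn-ipv4 mvpn-ipv4
            exit
        exit
configure
    service
        vprn 100
            mvpn
                auto-discovery default
                c-mcast-signaling bgp
            exit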
Multi-stream S-PMSI supports a single S-PMSI for one or more IPv4 (C-S, C-G) or
IPv6 (C-S, C-G) streams. Multiple multicast streams starting from the same VPRN
and going to the same set of leaf PEs can use a single S-PMSI. Multi-stream S-PMSIs can:
When mapping a multicast stream to a multi-stream S-PMSI policy, the data will
traverse the S-PMSI without first using the I-PMSI. (Before this feature, when a
multicast stream was sourced, the data used the I-PMSI first until a configured
threshold was met. After this, if the multicast data exceeded the threshold, it would
signal the S-PMSI tunnel and switch from I-PMSI to S-PMSI.)
For multi-stream S-PMSI, if the policy is configured and the multicast data matches
the policy rules, the multi-stream S-PMSI is used from the start without using the
default I-PMSI.
The rules for matching a multi-stream S-PMSI on the source node are listed here.
1. The multi-stream S-PMSI policy is evaluated, starting from the lowest numerical
policy index. This allows the feature to be enabled in the service when per-(C-S,
C-G) stream configuration is present. Only entries that are not shut down are
evaluated. The first multi-stream S-PMSI (the lowest policy index) that the (C-S,
C-G) stream maps to is selected.
2. If the (C-S, C-G) does not map to any of the multi-stream S-PMSIs, per-(C-S, C-G)
S-PMSIs are used for transmission if the (C-S, C-G) maps to an existing S-PMSI
(based on data thresholds).
3. If an S-PMSI cannot be used, the default PMSI is used.
• If an S-PMSI P-tunnel is not available, then a default PMSI tunnel is used. When
an S-PMSI tunnel fails, all (C-S, C-G) streams using this multi-stream S-PMSI
move to the default PMSI. The groups move back to S-PMSI when the S-PMSI
tunnel is restored.
Notes:
• The multi-stream S-PMSI model uses BSR RP co-located with the source PE or
an RP between the source PE and multicast source (upstream of receivers).
Both bsr unicast and bsr spmsi can be deployed (as applicable).
• The model also supports other RP types.
The operator can change the mapping in service; that is, the operator can move
active streams (C-S, C-G) from one S-PMSI to another using the configuration, or
from the default PMSI to the S-PMSI, without having to stop data transmission or
disable a PMSI.
The change is performed by moving a (C-S, C-G) stream from a per-group S-PMSI
to a multi-stream S-PMSI (or vice versa), or by moving a (C-S, C-G) stream from
one multi-stream S-PMSI to another multi-stream S-PMSI.
Notes:
• During re-mapping, a changed (C-S, C-G) stream is first moved to the
default PMSI before it is moved to a new S-PMSI, regardless of the type of
move. Unchanged (C-S, C-G) streams remain on their existing PMSIs.
• Any change to a multi-stream S-PMSI policy or to a preferred multi-stream S-
PMSI policy (for example, an index change equivalent to less than or equal to
the current policy) might cause a traffic outage. Therefore, it is recommended
that any change to a multi-stream S-PMSI policy be done in a maintenance
window.
In this example, two policies are created on the source node: multi-stream S-PMSI 1
and multi-stream S-PMSI 10.
A multicast stream with a group in 224.0.0.0/24 and a source in 138.120.1.0/24 maps
to the first multi-stream policy (index 1). A group in the 224.0.1.0/24 range with a
source in 138.120.2.0/24 maps to policy 10.
*A:SwSim14>config>service>vprn# info
    mvpn
        auto-discovery default
        c-mcast-signaling bgp
        provider-tunnel
            inclusive
                mldp
                    no shutdown
                exit
            exit
            selective
                mldp
                    no shutdown
                exit
                no auto-discovery-disable
                data-threshold 225.70.1.0/24 1
                data-threshold 230.0.0.0/8 1
                multistream-spmsi 1 create
                    group 224.0.0.0/24
                        source 138.120.1.0/24
                    exit
                exit
                multistream-spmsi 10 create
                    group 224.0.1.0/24
                        source 138.120.2.0/24
                    exit
                exit
            exit
        exit
Multicast extranet is supported for Rosen MVPN with MDT SAFI. Extranet is
supported for IPv4 multicast stream for default and data MDTs (PIM and IGMP).
Rosen MVPN extranet requires routing information exchange between the source
VRF and the receiver VRF based on route export/import policies:
Caveats:
• The source VRF instance and receiver VRF instance of extranet must exist on
a common PE node (to allow local extranet mapping).
• SSM translation is required for IGMP (C-*, C-G).
• An I-PMSI route cannot be imported into multiple VPRNs, and NG-MVPN routes
do not need to be imported into multiple VPRNs.
In Figure 29, VPRN-1 is the source VPRN instance and VPRN-2 and VPRN-3 are
receiver VPRN instances. The PIM/IGMP JOIN received on VPRN-2 or VPRN-3 is
for (S1, G1) multicast flow. Source S1 belongs to VPRN-1. Due to the route export
policy in VPRN-1 and the import policy in VPRN-2 and VPRN-3, the receiver host in
VPRN-2 or VPRN-3 can subscribe to the stream (S1, G1).
[Figure 29: multicast extranet; source VPRN-1 on PE1 (Source PE) and receiver VPRN-2/VPRN-3, with PIM and IGMP joins for (S1, G1)]
Multicast extranet is supported for ng-MVPN with IPv4 RSVP-TE and mLDP I-PMSIs
and S-PMSIs including (C-*, C-*) S-PMSI support where applicable. Extranet is
supported for IPv4 C-multicast traffic (PIM/IGMP joins).
Multicast extranet for ng-MVPN, similarly to extranet for Rosen MVPN, requires
routing information exchange between source ng-MVPN and receiver ng-MVPN
based on route export and import policies. Routing information for multicast sources
is exported using an RT export policy from a source ng-MVPN instance and imported
into a receiver ng-MVPN instance using an RT import policy. S-PMSI/I-PMSI
establishment and C-multicast route exchange occurs in a source ng-MVPN P-
instance only (import and export policies are not used for MVPN route exchange).
Sender-only functionality must not be enabled for the source/transit ng-MVPN on the
receiver PE. It is recommended to enable receiver-only functionality on a receiver ng-
MVPN instance.
Caveats:
• Source P-instance MVPN and receiver C-instance MVPN must reside on the
receiver PE (to allow local extranet mapping).
• SSM translation is required for IGMP (C-*, C-G).
Figure 30 Source PE Transit Replication and Receiver PE Per-group SSM Extranet Mapping
[Figure content: Source PE with source MVPNs 1 and 2, core MVPNs 100 and 200, and Receiver PE1 with receiver MVPNs 1000 and 2000; per-group PIM joins for (S1, G1), (S1, G2), (S2, G1), and (S2, G3) (al_0487)]
For per-group mapping on a receiver PE, the operator must configure a receiver
routing instance MVPN per-group mapping to one or more source/transit core routing
instance MVPNs. The mapping allows propagation of PIM joins received in the
receiver routing instance MVPN into the core routing MVPN instance defined by the
map. All multicast streams sourced from a single multicast source address are
always mapped to a single core routing instance MVPN for a given receiver routing
instance MVPN (multiple receiver MVPNs can use different core MVPNs for groups
from the same multicast source address). If the per-group map in a receiver MVPN
maps multicast streams sourced from the same multicast source address to multiple
core routing instance MVPNs, then the first PIM join processed for those streams
selects the core routing instance MVPN to be used for all multicast streams from that
source address for this receiver MVPN. PIM joins for streams sourced from that
source address but not carried by the selected core VRF MVPN instance remain
unresolved. When a PIM join/prune is received in a receiver routing instance MVPN
with per-group mapping configured and no mapping is defined for the PIM join's group
address, non-extranet processing applies when resolving how to forward the PIM
join/prune.
The main attributes for per-group SSM extranet mapping on receiver PE include
support for:
• Rosen MVPN with MDT SAFI
• RFC 6513/6514 NG-MVPN with IPv4 RSVP-TE/mLDP in the P-instance (a
P-instance tunnel must be in the same VPRN service as the multicast source)
• IPv4 PIM SSM
• IGMP (C-S, C-G), and for IGMP (C-*, C-G) using SSM translation
• a receiver routing instance MVPN to map groups to multiple core routing
instance MVPNs
• in-service changes of the map to a different transit/source core routing instance
(this is service affecting)
Caveats:
[Figure: PIM and IGMP joins for (S1, G1), (*, G1), and (S2, G1) (al_0486)]
Routing information is exchanged between the GRT and the VRF receiver MVPN
instances of the extranet by enabling grt-extranet under a receiver MVPN PIM
configuration for all or a subset of multicast groups. When enabled, multicast
receivers in the receiver routing instance(s) can subscribe to streams from any
multicast source node reachable in the GRT source instance.
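A sketch of enabling this in the receiver routing instance follows; the grt-extranet context and its group-prefix form are assumptions about the exact CLI, and the service ID and prefix are hypothetical:
configure
    service
        vprn 2
            pim
                grt-extranet
                    group-prefix 232.1.0.0/16
                exit
            exit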
Multicast extranet with per-group mapping for PIM ASM allows multicast traffic to
flow from a source routing instance to a receiver routing instance when a PIM ASM
join is received in the receiver routing instance.
[Figure: PIM ASM extranet; receiver VRF 1 with source instances 1 and 2 on the source/receiver PEs; an anycast RP mesh between PEs is required in each source instance; PIM/IGMP joins for (C-*, C-G1) and (C-*, C-G2)]
PIM ASM extranet is achieved by local mapping from the receiver to source routing
instances on a receiver PE. The mapping allows propagation of anycast RP PIM
register messages between the source and receiver routing instances over an auto-
created internal extranet interface. This PIM register propagation allows the receiver
routing instance to resolve PIM ASM joins to multicast source(s) and to propagate
PIM SSM joins over an auto-created extranet interface to the source routing
instance. PIM SSM joins are then propagated towards the multicast source within the
source routing instance.
• Rosen MVPN with MDT SAFI: a local replication on a source PE and multiple-
source/multiple-receiver replication on a receiver PE
• RFC 6513/6514 NG-MVPN (including RFC 6625 (C-*, C-*) wildcard S-PMSI): a
local replication on a source PE and a multiple source/multiple receiver
replication on a receiver PE
• Extranet for GRT-source/VRF receiver with a local replication on a source PE
and a multiple-receiver replication on a receiver PE
• Locally attached receivers are supported without SSM mapping.
• an anycast RP mesh between source and receiver PEs in the source routing
instance
Caveats:
Multicast BGP can be used to advertise separate multicast routes using Multicast
NLRI (SAFI 2) on the PE-CE link within a VPRN instance. Multicast routes maintained
per VPRN instance can be propagated between PEs using BGP Multicast-VPN NLRI
(SAFI 129).
Figure 33 Incongruent Multicast and Unicast Topology for Non-Overlapping Traffic Links
[Figure content: VPN-Multicast (SAFI 129) and Multicast (SAFI 2) routes follow a different path than VPN-Unicast and Unicast routes between PE3 and PE4; PIM joins for (S1, G1) (al_0074)]
SR OS supports an option to perform the RPF check per VPRN instance using the
multicast route table, the unicast route table, or both.
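For example, a sketch assuming the rpf-table option under the VPRN PIM context, where rtable-m, rtable-u, or both select the multicast route table, the unicast route table, or both:
configure
    service
        vprn 1
            pim
                rpf-table both
            exit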
Auto-RP is supported for IPv4 in multicast VPNs and in the global routing instance.
Either BSR or auto-RP for IPv4 can be configured; the two mechanisms cannot be
enabled together. BSR for IPv6 and auto-RP for IPv4 can be enabled together. In a
multicast VPN, auto-RP cannot be enabled together with sender-only or receiver-
only multicast distribution trees (MDTs), or wildcard S-PMSI configurations that could
block flooding.
[Figure: Cust 1 and Cust 2 VRFs (VRF-1, VRF-2) on PE1 through PE4 over an IPv4 MPLS core, with CE1 through CE6 (al_0168)]
[Figure: MVPN core diversity solution outline; senders PE1/PE2 (UMH), receivers PE3/PE4; Rosen MVPN with MDT SAFI and two non-default IGP (OSPF) instances in addition to the default system IP instance; the core M-VRF uses a source-address configuration for MDT-SAFI auto-discovery, a BGP policy for BGP next-hop advertisements, and GRE tunnels using a non-system IP address for multicast, with LDP/RSVP for unicast (al_0485)]
Core diversity allows operators to optionally deploy multicast MVPN in either the
default IGP instance or one of two non-default IGP instances to provide, for example,
topology isolation or different levels of service. The following describes the main
feature attributes:
1. Rosen MVPN IPv4 multicast with MDT SAFI is supported with default and data
MDTs.
2. Rosen MVPN can use a non-default OSPF or ISIS instance (using their
loopback addresses instead of a system address).
On source PEs (PE1: UMH, PE2: UMH in Figure 35), an MVPN is assigned to a non-
default IGP core instance as follows:
• The MVPN is statically pointed to use one of the non-default IGP instance
loopback addresses as its source address instead of the system loopback IP.
• An MVPN export policy is used to change the unicast route next-hop VPN address.
• BGP Connector support is used for non-default instances.
This configuration ensures that MDT SAFI and IP-VPN routes for the non-default
core instance use the non-default IGP loopback instead of the system IP, so that
PIM advertisements/joins run in the proper core instance and GRE tunnels for
multicast can be set up using, and terminated on, a non-system IP.
If a BGP export policy is used to change the unicast route next-hop VPN address and
unicast traffic must be forwarded in the non-default "red" or "blue" core instance, LDP
or RSVP (terminating on a non-system IP) must be used. GRE unicast traffic
termination on a non-system IP is not supported, and any GRE traffic arriving at the
PE in the "blue" or "red" instance destined to the non-default IGP loopback IP is
forwarded to the CPM (ACL or CPM filters can be used to prevent the traffic from
reaching the CPM). This limitation does not apply if the BGP connector attribute is
used to resolve the multicast route.
Feature caveats:
Operators can configure a list of prefixes for multicast source redundancy per MVPN
on Receiver PEs:
A Receiver PE selects a single, most-preferred multicast source from the list of pre-
configured sources for a given MVPN during (C-*, C-G) processing as follows:
• A single join for the group is sent to the most preferred multicast source from the
operator-configured multicast source list. Joins to other multicast sources for a
given group are suppressed. Operator can see active and suppressed joins on
a Receiver PE. Although a join is sent to a single multicast source only, (C-S, C-
G) state is created for every source advertising Type-5 S-A route on the
Receiver PE.
• The most preferred multicast source is a reachable source with the highest local
preference for Type-5 SA route based on the BGP policy, as described later in
this section.
• On a failure of the preferred multicast source, or when a new multicast source
with a better local preference is discovered, the Receiver PE joins the new most-
preferred multicast source. The outage experienced depends on how quickly the
Receiver PE receives the Type-5 S-A route withdrawal or loses the unicast route to
the multicast source, and how quickly the network can process joins to the newly
selected preferred multicast source(s).
• Local multicast sources on a Receiver PE are not subject to the most-preferred
source selection, regardless of whether they are part of redundant source list or
not.
Operators can change the redundant source list or the BGP policy affecting source
selection in service. If such a change of the list or policy results in a new preferred
multicast source election, make-before-break is used to join the new source and
prune the previously best source.
For proper operation, MVPN multicast source geo-redundancy requires the
router to:
[Figure: MVPN core diversity solution outline; senders PE1/PE2 (UMH), receivers PE3/PE4 (al_0379)]
Core diversity allows operators to optionally deploy multicast MVPN in either the
default IGP instance or one of two non-default IGP instances to provide, for example,
topology isolation or different levels of service. The following describes the main
feature attributes:
• Rosen MVPN IPv4 multicast with MDT SAFI is supported with default and data
MDTs.
• Rosen MVPN can use a non-default OSPF or ISIS instance (using their
loopback addresses instead of a system address).
• Up to 3 distinct core instances are supported: system + 2 non-default OSPF
instances shown in Figure 37.
• The BGP Connector also uses non-default OSPF loopback as NH, allowing
Inter-AS Option B/C functionality to work with Core diversity as well.
• The feature is supported with CSC-VPRN.
On source PEs (PE1: UMH, PE2: UMH in the figure above), an MVPN is assigned
to a non-default IGP core instance as follows:
• The MVPN is statically pointed to use one of the non-default IGP instance
loopback addresses as its source address instead of the system loopback IP.
• An MVPN export policy is used to change the unicast route next-hop VPN address.
• BGP Connector support is used for non-default instances.
The configuration described above ensures that MDT SAFI and IP-VPN routes for the
non-default core instance use the non-default IGP loopback instead of the system IP.
This ensures PIM advertisements/joins run in the proper core instance and GRE
tunnels for multicast can be set up using, and terminated on, a non-system IP. If a
BGP export policy is used to change the unicast route next-hop VPN address instead
of BGP Connector attribute-based processing, and unicast traffic must be forwarded
in non-default core instance 1 or 2, LDP or RSVP (terminating on a non-system IP)
must be used. GRE unicast traffic termination on a non-system IP is not supported,
and any GRE traffic arriving at the PE in instance 1 or 2 destined to the non-default
IGP loopback IP is forwarded to the CPM (ACL or CPM filters can be used to prevent
the traffic from reaching the CPM).
To support Inter-AS option for MVPNs, a mechanism is required that allows setup of
Inter-AS multicast tree across multiple ASes. Due to limited routing information
across AS domains, it is not possible to setup the tree directly to the source PE. Inter-
AS VPN Option A does not require anything specific to inter-AS support as customer
instances terminate on ASBR and each customer instance is handed over to the
other AS domain via a unique instance. This approach allows operators to provide
full isolation of ASes, but it is the least scalable option, as customer instances across
the network have to exist on the ASBR.
Inter-AS MVPN Option B allows operators to improve on the Option A scalability
while still maintaining AS isolation. Inter-AS MVPN Option C further improves
Inter-AS scaling but requires the exchange of Inter-AS routing information, and is
therefore typically deployed when common management exists across all ASes
involved in the Inter-AS MVPN. The following subsections provide further details on
Inter-AS Option B and Option C functionality.
With Inter-AS MVPN Option B, BGP next-hop is modified by local and remote ASBR
during re-advertisement of VPNv4 routes. On BGP next-hop change, information
regarding the originator of prefix is lost as the advertisement reaches the receiver PE
node.
In case of Inter-AS MVPN Option B, routing information towards the source PE is not
available in a remote AS domain, since IGP routes are not exchanged between
ASes. Routers in an AS other than that of a source PE, have no routes available to
reach the source PE and thus PIM JOINs would never be sent upstream. To enable
setup of MDT towards a source PE, BGP next-hop (ASBR) information from that PE's
MDT-SAFI advertisement is used to fake a route to the PE. If the BGP next-hop is a
PIM neighbor, the PIM JOINs would be sent upstream. Otherwise, the PIM JOINs
would be sent to the immediate IGP next-hop (P) to reach the BGP next-hop. Since
the IGP next-hop does not have a route to source PE, the PIM JOIN would not be
propagated forward unless it carried extra information contained in RPF Vector.
In the case of Inter-AS MVPN Option C, unicast routing information towards the source
PE is available on remote AS PEs/ASBRs as BGP 3107 labeled routes (tunnels), but is
unavailable at remote P routers. If the tunneled next-hop (ASBR) is a PIM neighbor, the PIM JOINs
would be sent upstream. Otherwise, the PIM JOINs would be sent to the immediate
IGP next-hop (P) to reach the tunneled next-hop. Since the IGP next-hop does not
have a route to source PE, the PIM JOIN would not be propagated forward unless it
carried extra information contained in RPF Vector.
To enable setup of MDT towards a source PE, PIM JOIN thus carries BGP next hop
information in addition to source PE IP address and RD for this MVPN. For option-B,
both these pieces of information are derived from MDT-SAFI advertisement from the
source PE. For option-C, both these pieces of information are obtained from the BGP
tunneled route.
The RPF vector is added to a PIM join at a PE router when the configure router pim rpfv
option is enabled. P routers and ASBR routers must also have the option enabled to
allow RPF Vector processing. If the option is not enabled, the RPF Vector is dropped
and the PIM JOIN is processed as if the RPF Vector were not present.
For further details about RPF Vector processing, refer to RFC 5496, RFC 5384, and
RFC 6513.
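For example, the option named above can be enabled as follows; the mvpn keyword (described later in this section) selects RPF vectors that include the RD, and the same option must also be enabled on the P and ASBR routers along the join path:
configure
    router
        pim
            rpfv mvpn
        exit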
Inter-AS Option B is supported for Rosen MVPN PIM SSM using BGP MDT SAFI,
the PIM RPF Vector, and the BGP Connector attribute. Figure 38 depicts the setup of
a default MDT:
[Figure 38: default MDT setup across AS65000 and AS65001 (CE1-PE1-P1-ASBR1-ASBR2-P2-PE2-CE2); P-PIM joins for the default SSM MDT (PE2, MDT-G1) carry the RD (PE2: VPRN1), the RPF neighbor, and the RPF Vector, which is stripped at the last hop (al_0165)]
• RFC 5384 - The Protocol Independent Multicast (PIM) Join Attribute Format
• RFC 5496 - The Reverse Path Forwarding (RPF) Vector TLV
• RFC 6513 - Multicast in MPLS/BGP IP VPNs
Inter-AS Option C is supported for Rosen MVPN PIM SSM using BGP MDT SAFI
and PIM RPF Vector. Figure 39 depicts a default MDT setup:
[Figure 39: Inter-AS Option C default MDT setup with route reflectors RR1 and RR2 across AS65000 and AS65001 (CE1-PE1-P1-ASBR1-ASBR2-P2-PE2-CE2); P-PIM joins for the default SSM MDT (PE2, MDT-G1) carry the RD, the RPF neighbor, and the RPF Vector, which is stripped at the last hop (al_0148)]
Additional caveats for Inter-AS MVPN Option B and C support are the following:
When the configure router pim rpfv mvpn option is enabled, Cisco routers need to be
configured to include the RD in the RPF vector using the ip multicast vrf vrf-name
rpf proxy rd vector command for interoperability. When Cisco routers are not
configured to include the RD in the RPF vector, the operator should configure the
SR OS router (if supported) using configure router pim rpfv core mvpn; PIM joins
received can then be a mix of core and mvpn RPF vectors.
This feature allows multicast services to use segmented protocols and span them
over multiple autonomous systems (ASs), as done in unicast services. As IP VPN or
GRT services span multiple IGP areas or multiple ASs, either due to a network
designed to deal with scale or as a result of commercial acquisitions, operators may
require Inter-AS VPN (unicast) connectivity. For example, an Inter-AS VPN can
break the IGP, MPLS and BGP protocols into access segments and core segments,
allowing higher scaling of protocols by segmenting them into their own islands.
SR OS also allows for similar provision of multicast services and for spanning these
services over multiple IGP areas or multiple ASs.
For multicast VPN (MVPN), SR OS previously supported Inter-AS Model A/B/C for
Rosen MVPN; however, when MPLS was used, only Model A was supported for Next
Generation Multicast VPN (NG-MVPN) and d-mLDP signaling.
For unicast VPRNs, the Inter-AS or Intra-AS Option B and C breaks the IGP, BGP
and MPLS protocols at ABR routers (in case of multiple IGP areas) and ASBR
routers (in case of multiple ASs). At ABR and ASBR routers, a stitching mechanism
of MPLS transport is required to allow transition from one segment to next, as shown
in Figure 40 and Figure 41.
In Figure 40, the Service Label (S) is stitched at the ASBR routers.
[Figure 40: service label stitching at the ASBRs between SR core AS3 (VPRN-1, RD 600:600) and SR core AS1 (VPRN-1, RD 60:60); the service label is swapped from S1 to S2 and from S2 to S3 along the transport tunnels TL1 and TL3 (1022)]
In Figure 41, the 3107 BGP Label Route (LR) is stitched at ASBR1 and ASBR3. At
ASBR1, the LR1 is stitched with LR2, and at ASBR3, the LR2 is stitched with TL2.
[Figure 41: BGP 3107 label route stitching between SR core AS3 (VPRN-1, RD 600:600) and SR core AS1 (VPRN-1, RD 60:60); LR1 is swapped to LR2 at ASBR1 and LR2 is stitched with TL2 at ASBR3 (1023)]
Note: For unicast VPNs, it was usually preferred to only have eBGP between ASBR routers.
The non-segmented behavior of d-mLDP would have broken this by requiring LDP signaling
between ASBR routers.
SR OS now has d-mLDP non-segmented intra-AS and inter-AS signaling for NG-
MVPN and GRT multicast. The non-segmented solution for d-mLDP is possible for
Inter-AS Options B and C. The following inter-AS options are described in the
sections that follow:
• Inter-AS Option A
• Inter-AS Option B
• Inter-AS Option C
Options B and C use recursive opaque types 8 and 7 respectively, from Table 36.
Inter-AS Option A
In Inter-AS Option A, ASBRs communicate using VPN access interfaces, which need
to be configured under PIM for the two ASBRs to exchange multicast information.
Inter-AS Option B
The recursive opaque type used for Inter-AS Option B is the Recursive Opaque (VPN
Type), shown as opaque type 8 in Table 36.
In Inter-AS Option B, the PEs in two different ASs do not have their system IP
address in the RTM. As such, for NG-MVPN, a recursive opaque value in mLDP FEC
is required to signal the LSP to the first ASBR in the local AS path.
Because the system IPs of the peer PEs (Root-1 and Root-2) are not installed on the
local PE (leaf), it is possible to have two PEs in different ASs with the same system IP
address, as shown in Figure 42. However, SR OS does not support this topology.
The system IP address of all nodes (root or leaf) in different ASs must be unique.
[Figure 42: two roots in different ASs; PE-3 (ROOT-2, 100.0.0.14) in SR core AS2 with VPRN-1 (RD 70:70), and VPRN-1 instances with RD 600:600 and RD 60:60; joins for (S1, G1) (10.60.3.2, 230.0.0.60) and (S2, G2) (10.61.3.2, 230.0.0.61) (1028)]
For inter-AS Option B and NG-MVPN, SR OS as a leaf does not support multiple
roots in multiple ASs with the same system IP and different RDs; however, the first
root that is advertised to an SR OS leaf will be used by PIM to generate an MLDP
tunnel to this actual root. Any dynamic behavior after this point, such as removal of
the root and its replacement by a second root in a different AS, is not supported and
the SR OS behavior is nondeterministic.
I-PMSI and S-PMSI functionality follows RFC 6513 section 8.1.1 and RFC 6512
sections 3.1 and 3.2.1. For routing, the same rules as for the GRT d-mLDP use case
apply, but the VRF Route Import extended community now encodes the VRF instance
in the local administrator field.
Option B uses an outer opaque of type 8 and an inner opaque of type 1 (see Table 36).
Figure 43 depicts the processing required for I-PMSI and S-PMSI Inter-AS
establishment.
[Figure 43: I-PMSI/S-PMSI Inter-AS Option B establishment for (S1, G1) (10.60.3.2, 230.0.0.60); the BGP A-D route (RD 60:60, root 100.0.0.14, opaque P2MP ID 8193) is re-advertised with changing next-hops, and the leaf builds a VPN-recursive mLDP FEC toward each ASBR; the ASBR cannot be a PE/root node]
For non-segmented mLDP trees, A-D procedures follow those of the Intra-AS model,
with the exception that NO EXPORT community must be excluded; LSP FEC
includes mLDP VPN-recursive FEC.
On a receipt of an Intra-AS PMSI A-D route, PE2 resolves PE1’s address (next-hop
in PMSI route) to a labeled BGP route with a next-hop of ASBR3, because PE1
(Root-1) is not known via IGP. Because ASBR3 is not the originator of the PMSI
route, PE2 sources an mLDP VPN recursive FEC with a root node of ASBR3, and
an opaque value containing the information advertised by Root-1 (PE-1) in the PMSI
A-D route, shown below, and forwards the FEC to ASBR 3 using IGP.
PE-2 LEAF FEC: {Root: ASBR3, Opaque Value: {Root: ROOT-1, RD 60:60,
Opaque Value: P2MPLSP-ID xx}}
When the mLDP VPN-recursive FEC arrives at ASBR3, it notes that it is the identified
root node and that the opaque value is a VPN-recursive opaque value. Because
Root-1 (PE1) is not known via IGP, ASBR3 resolves the root node of the VPN-
recursive FEC using the PMSI A-D (I or S) route matching the information in the VPN-
recursive FEC (the originator being PE1 (Root-1), the RD being 60:60, and the P2MP
LSP ID being xx). This yields ASBR1 as the next hop. ASBR3 creates a new mLDP
FEC element with a root node of ASBR1 and an opaque value that is the received
recursive opaque value, as shown below. ASBR3 then forwards the FEC using IGP.
ASBR-3 FEC: {Root ASBR 1, Opaque Value {Root: ROOT-1, RD 60:60, Opaque
Value: P2MPLSP-ID xx}}
When the mLDP FEC arrives at ASBR1, it notes that it is the root node and that the
opaque value is a VPN-recursive opaque value. As PE1’s ROOT-1 address is known
to ASBR1 through the IGP, no further recursion is required. Regular processing
begins, using received Opaque mLDP FEC information.
Note: VPN-Recursive FEC carries P2MPLSP ID. The P2MPLSP ID is used in addition to
PE RD and Root to select a route to the mLDP root using the correct I-PMSI or S-PMSI
route.
The functionality as described above for I-PMSI applies also to S-PMSI and (C-*, C-*)
S-PMSI.
C-multicast route processing functionality follows RFC 6513 section 8.1.2 (BGP used
for route exchange). The processing is analogous to BGP Unicast VPN route
exchange described in Figure 40 and Figure 41. Figure 44 shows C-multicast route
processing with non-segmented mLDP PMSI details.
[Figure 44: C-multicast route processing with non-segmented mLDP PMSI for (S1, G1) (10.60.3.2, 230.0.0.60); BGP updates carry RD 60:60, FEC root 100.0.0.14, and opaque P2MP ID 8193 with next-hops 100.0.0.21, 100.0.0.2, and 100.0.0.14 (1032)]
Inter-AS Option C
In Inter-AS Option C, the PEs in two different ASs have their system IP addresses in
the RTM, but the intermediate nodes in the remote AS do not have the system IP of
the PEs in their RTM. As such, for NG-MVPN, a recursive opaque value in the mLDP
FEC is needed to signal the LSP to the first ASBR in the local AS path.
The recursive opaque type used for Inter-AS Option C is the Recursive Opaque
(Basic Type), shown as opaque type 7 in Table 36.
For Inter-AS Option C, a route exists on a leaf PE to reach the root PE's system IP
and, as ASBRs can use BGP unicast routes, recursive FEC processing using BGP
unicast routes (and not VPN-recursive FEC processing using PMSI routes) is required.
I-PMSI and S-PMSI functionality follows RFC 6513 section 8.1.1 and RFC 6512
Section 2. The same rules as for the GRT d-mLDP use case apply, but the VRF Route
Import extended community now encodes the VRF instance in the local administrator
field.
Figure 45 shows the processing required for I-PMSI and S-PMSI Inter-AS
establishment.
[Figure 45: I-PMSI and S-PMSI Inter-AS Option C establishment (1029)]
For non-segmented mLDP trees, A-D procedures follow those of the Intra-AS model,
with the exception that NO EXPORT Community must be excluded; LSP FEC
includes mLDP recursive FEC (and not VPN recursive FEC).
• A-D routes are not installed by ASBRs and next-hop information is not changed
in MVPN A-D routes.
• BGP-labeled routes are used to provide inter-domain connectivity on remote
ASBRs.
On a receipt of an Intra-AS I-PMSI A-D route, PE2 resolves PE1’s address (N-H in
PMSI route) to a labeled BGP route with a next-hop of ASBR3, because PE1 is not
known via IGP. PE2 sources an mLDP FEC with a root node of ASBR3, and an
opaque value, shown below, containing the information advertised by PE1 in the
I-PMSI A-D route.
PE-2 LEAF FEC: {Root = ASBR3, Opaque Value: {Root: ROOT-1, Opaque
Value: P2MP-ID xx}}
When the mLDP FEC arrives at ASBR3, it notes that it is the identified root node, and
that the opaque value is a recursive opaque value. ASBR3 resolves the root node of
the Recursive FEC (ROOT-1) to a labeled BGP route with the next-hop of ASBR1,
because PE-1 is not known via IGP. ASBR3 creates a new mLDP FEC element with
a root node of ASBR1, and an opaque value being the received recursive opaque
value.
ASBR3 FEC: {Root: ASBR1, Opaque Value: {Root: ROOT-1, Opaque Value:
P2MP-ID xx}}
When the mLDP FEC arrives at ASBR1, it notes that it is the root node and that the
opaque value is a recursive opaque value. As PE-1’s address is known to ASBR1
through the IGP, no further recursion is required. Regular processing begins, using
the received Opaque mLDP FEC information.
The functionality as described above for I-PMSI applies to S-PMSI and (C-*, C-*) S-
PMSI.
C-multicast route processing functionality follows RFC 6513 section 8.1.2 (BGP used
for route exchange). The processing is analogous to BGP Unicast VPN route
exchange. Figure 46 shows C-multicast route processing with non-segmented
mLDP PMSI details.
[Figure 46: C-multicast route processing for Inter-AS Option C; the BGP C-multicast route (type 6/7) carries RD 600:600, (C-S, C-G) or (*, C-G), source AS 3, RT 60:60, and VRF Route Import (IPv4) 100.0.0.14, with next-hop 100.0.0.8, for the join to (S1, G1) (10.60.3.2, 230.0.0.60) (1030)]
Caution: The SR OS ASBR does not currently support receiving a non-recursive opaque
FEC (opaque type 1).
The LEAF (PE-2) must have the ROOT-1 system IP installed in the RTM via BGP. If
ROOT-1 is installed in the RTM via IGP, the LEAF does not generate the recursive
opaque FEC and, as such, ASBR 3 does not process the LDP FEC correctly.
Policy is required for a root or leaf PE for removing the NO_EXPORT community
from MVPN routes, which can be configured using an export policy on the PE.
The following is an example for configuring a policy on PEs to remove the no-export:
*A:Dut-A>config>router>policy-options# info
----------------------------------------------
    community "no-export" members "no-export"
    policy-statement "remNoExport"
        default-action accept
            community remove "no-export"
        exit
    exit
----------------------------------------------
*A:Dut-A>config>router>policy-options#
The following is an example of configuring the policy in BGP in a global, group, or
peer context:
*A:Dut-A>config>router>bgp# info
----------------------------------------------
    vpn-apply-export
    export "remNoExport"
Refer to the “Inter-AS Non-segmented MLDP” section of the MPLS Guide for more
information.
3.5.15.5.4 ECMP
Refer to the “ECMP” section of the MPLS Guide for more information about ECMP.
3.5.16 Weighted ECMP and ECMP for VPRN IPv4 and IPv6
over MPLS LSPs
ECMP over MPLS LSPs for VPRN services refers to spraying packets across
multiple named RSVP LSPs within the same ECMP set.
The ECMP-like spraying consists of hashing the relevant fields in the header of a
labeled packet and selecting the next-hop tunnel based on the modulo operation of
the output of the hash and the number of ECMP tunnels. The maximum number of
ECMP tunnels selected from the TTM matches the value of the user-configured
ecmp option. Only LSPs with the same lowest LSP metric can be part of the ECMP
set. If the number of such LSPs is higher than the value configured in the ecmp
option, the LSPs with the lowest tunnel IDs are selected first.
In weighted ECMP, the load balancing weight of the LSP is normalized by the system
and then used to bias the amount of traffic forwarded over each LSP. The weight of
the LSP is configured using the config>router>mpls>lsp>load-balancing-weight
weight and config>router>mpls>lsp-template>load-balancing-weight weight
commands.
Weighted ECMP is configured for VPRN services with SDP auto-bind by using the
config>service>vprn>auto-bind-tunnel>ecmp max-ecmp-routes and
config>service>vprn>auto-bind-tunnel>weighted-ecmp commands. Weighted
ECMP is disabled by default.
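A sketch combining the commands named above follows; the LSP name, load-balancing weight, service ID, and ECMP value are hypothetical:
configure
    router
        mpls
            lsp "to-PE2-1"
                load-balancing-weight 200
            exit
        exit
configure
    service
        vprn 100
            auto-bind-tunnel
                ecmp 4
                weighted-ecmp
            exit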
The rib-priority command can be configured within the VPRN instance of the OSPF
or IS-IS routing protocols. For OSPF, a prefix list can be specified that identifies
which route prefixes should be considered high priority. If the rib-priority high
command is configured under a VPRN>OSPF>area>interface context, then all
routes learned through that interface are considered high priority. For the IS-IS routing
protocol, RIB prioritization can be specified either through a prefix list or an IS-IS tag
value. If a prefix list is specified, then route prefixes matching any of the prefix list
criteria are considered high priority. If an IS-IS tag value is specified instead, then
any IS-IS route with that tag value is considered high priority.
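As a hedged sketch (the exact placement of rib-priority and its prefix-list and tag forms are assumptions based on the description above; the prefix-list name and tag value are hypothetical):
configure
    service
        vprn 100
            ospf
                rib-priority high prefix-list "high-prio-prefixes"
            exit
            isis
                rib-priority high tag 100
            exit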
The routes that have been designated as high priority will be the first routes
processed and then passed to the FIB update process so that the forwarding engine
can be updated. All known high priority routes should be processed before the
routing protocol moves on to other standard priority routes. This feature will have the
most impact when there are a large number of routes being learned through the
routing protocols.