EANTC InteropTest2023 TestReport
Multi-Vendor MPLS SDN Interoperability Test Report 2023
Multi-Vendor Interoperability Test 2020
Ciena 5166
Cisco 8201-24H8FH
8201-32FH
ASR 9901
ASR 9902
ASR 9903
Crosswork
NCS 540-24Q8L2DD
NCS 540-28Z4C
NCS 540X-12Z16G
NCS 540X-16Z4G8Q2C
NCS 57B1-5DSE
NCS 57C1-48Q6
IOS XRd
Topology
[Topology figure: SDN controllers — Juniper Paragon Pathfinder, Nokia Network Services Platform, Cisco Crosswork, Huawei NCE-IP, Keysight IxNetwork; devices include Arista 7280R, Cisco XRd, Juniper MX204, Cisco ASR 9902; monitoring: Calnex Sentry]
EVPN

EVPN (Ethernet Virtual Private Network) was initially conceived as a BGP-based Layer 2 VPN technology that provides a scalable and efficient way of extending Layer 2 domains over a WAN (Wide Area Network). Over time, EVPN became the de-facto VPN standard, not only for Layer 2 VPNs but also for Layer 3, multicast, and other advanced VPN services.

EVPN has become increasingly popular in data centers, as it provides a mechanism for distributing MAC (Media Access Control) addresses across the network, which is essential for efficient and flexible VM (Virtual Machine) creation and mobility. EVPN also enables network administrators to create tenant-specific virtual networks that can span multiple data centers, making it an ideal solution for multi-tenant environments.

EVPN supports advanced features such as network slicing, fast convergence, load balancing, and multipath forwarding. These features are critical for providing high availability and efficient use of network resources in data centers and 5G networks. Network slicing, a key feature of 5G networks, enables network operators to create multiple virtual networks, each with its own characteristics and service levels, on a single physical infrastructure.

We performed the test once with all the vendors' devices participating simultaneously in a mixture of single-homed and multi-homed devices. The second run consisted of multi-homed multi-vendor devices.

In this test, we verified the establishment of ISIS and BGP sessions and the EVPN signaling. In the next step, we observed the DF and non-DF PEs in single-active multi-homed devices and flow-based traffic load balancing in all-active multi-homed devices. Then, we verified zero packet loss during any-to-any bidirectional unicast traffic generation, and lastly, we measured the link failover and link recovery out-of-service times.
1 Cisco NCS 540X-12Z16G with Arista 7280R2 | Huawei NetEngine 8000 F8 with Ericsson 6273
2 Cisco NCS 540X-12Z16G with Arista 7280R2 | Huawei NetEngine 8000 F8 with Nokia 7750 SR-1
3 Arista 7280R2 with Nokia 7750 SR-1 | Huawei NetEngine 8000 F8 with Ribbon NPT-2100A
4 Arista 7280R2 with Juniper MX204 | Huawei NetEngine 8000 F8 with Ribbon NPT-2100A
5 Arista 7280R2 with Juniper ACX7100-32C | Huawei NetEngine 8000 F8 with Ribbon NPT-2100A
Figure 3: VLAN-based Symmetric IRB—single-homed

The following vendors participated successfully in this test case:
Single-homed PEs: Arista 7050X3, Arista 7280R3, Aruba CX8325, Aruba CX8360, Aruba CX10000, Juniper QFX5120, Juniper QFX5130, Keysight IxNetwork, Nokia 7750 SR-1, Spirent-STC
CE: Arista 7050SX, Juniper QFX5110
Traffic generator: Keysight IxNetwork

Figure 4: VLAN-based Symmetric IRB—multi-homed

The following vendors participated successfully in this test case:
Multi-homed PEs: Arista 7050X3, Arista 7280R3, Juniper ACX7100-32C, Juniper PTX10001, Juniper QFX5120, Juniper QFX5130
CE: Arista 7050SX, Juniper QFX5110
Traffic generator: Keysight IxNetwork, Spirent-STC
Figure 5: VLAN-aware-bundle Symmetric IRB—single-homed

The following vendors participated successfully in this test case:
Single-homed PEs: Arista 7050X3, Arista 7280R3, Aruba CX8325, Aruba CX8360, Aruba CX10000, Juniper QFX5120, Juniper QFX5130, Keysight IxNetwork, Spirent-STC
CE: Arista 7050SX, Juniper QFX5110
Traffic generator: Keysight IxNetwork, Spirent-STC

Figure 6: VLAN-aware-bundle Symmetric IRB—multi-homed

The following vendors participated successfully in this test case:
Multi-homed PEs: Arista 7050X3, Arista 7280R3, Juniper ACX7100-32C, Juniper PTX10001, Juniper QFX5120, Juniper QFX5130
CE: Arista 7050SX, Juniper QFX5110
Traffic generator: Keysight IxNetwork, Spirent-STC
Figure 7: VLAN-Based Symmetric IRB Route Type-5

The following vendors participated successfully in this test case:
Multi-homed PE devices: Arista 7280R2, Arista 7280R3, Cisco NCS 540X-12Z16G with Cisco ASR 9903, and Juniper MX204 with Juniper ACX7100-32C
Single-homed PE devices: Arrcus UfiSpace S9600-72XC, Cisco NCS 57C1-48Q6, Huawei NetEngine 8000 F8, and Nokia 7750 SR-1
CE: Arista 7050SX3, Traffic Generator: Spirent-STC

Figure 8: VLAN-Based Symmetric IRB VPNv4 Route

The following vendors participated successfully in this test case:
Multi-homed PE devices: Arista 7280R2, Arista 7280R3, and Cisco NCS 540X-12Z16G with Cisco ASR 9903
Single-homed PE devices: Cisco NCS 57C1-48Q6, Huawei NetEngine 8000 F8, Ericsson 6273, Juniper MX204, and Ribbon NPT-2100A
CE: Arista 7050SX3, Traffic Generator: Spirent-STC
Figure 9: VLAN-Based Symmetric IRB VPNv4 Route, Multi-Vendor

Multi-homed PE devices: Arista 7280R3 with Cisco NCS 57C1-48Q6 in Site A and Ribbon NPT-2100A with Huawei NetEngine 8000 F8 in Site B

MAC Mobility

Nowadays, data centers' standard maintenance includes, but is not limited to, VM creation, migration, and deletion. A VM migration may move a VM from one Ethernet segment to another, which causes the VM to be out of service. The out-of-service time is high if everything relies on manual provisioning by network administrators. Therefore, RFC 7432 introduces a sequence-number-based BGP EVPN MAC mobility extended community to solve the issue. When a MAC address first appears in the network, the sequence number is 0. When it moves to a new Ethernet segment, the sequence number is increased by 1 and sent along with the RT-2. All RT-2 receivers update their routes accordingly and keep only the entry with the highest sequence number as the final target of the MAC address.

The test tool simulated a fixed IP and MAC address combination for the first DUT, then moved it to all DUTs individually under the same VLAN. We verified that once the MAC address was transferred to a new DUT, the sequence number was increased by 1 and the RT-2 update was sent out. All other DUTs had their route tables updated accordingly.
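The RFC 7432 sequence-number rule described above can be sketched in a few lines of Python. The names (`MacEntry`, `process_rt2`) are illustrative only, not taken from any vendor implementation:

```python
from dataclasses import dataclass

@dataclass
class MacEntry:
    mac: str
    ethernet_segment: str
    sequence: int

def process_rt2(table: dict, update: MacEntry) -> bool:
    """Install the RT-2 update only if the MAC is new or the sequence
    number is higher than the installed one. Returns True on change."""
    current = table.get(update.mac)
    if current is None or update.sequence > current.sequence:
        table[update.mac] = update
        return True
    return False

table = {}
# MAC first appears in the network: sequence number starts at 0.
process_rt2(table, MacEntry("00:aa:bb:cc:dd:01", "ES-1", 0))
# The MAC moves to a new Ethernet segment: sequence incremented by 1.
moved = process_rt2(table, MacEntry("00:aa:bb:cc:dd:01", "ES-2", 1))
# A stale advertisement with the old sequence number is ignored.
stale = process_rt2(table, MacEntry("00:aa:bb:cc:dd:01", "ES-1", 0))
```

This mirrors what the DUTs demonstrated: only the advertisement with the highest sequence number wins.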
Centralized L3 Gateway

The previous IRB test case was a distributed Layer 3 (L3) gateway deployment, meaning all the DUTs were Layer 2 (L2) and L3 gateways of the EVPN. In this test, we tested a centralized L3 gateway deployment, which implies that L2 bridging and L3 routing are split onto different DUTs, as shown in figure 13. Once an L2 VTEP receives known unicast bridging traffic, it establishes a VXLAN tunnel directly to the destination L2 VTEP instead of forwarding the traffic to the centralized L3 gateway. The centralized L3 gateway handles all the ARP/ND and routing functions.

We performed three runs of the test. For each run, we had a single centralized L3 gateway and multiple L2 VTEPs. We sent intra- and inter-subnet traffic simultaneously and verified that the L2 VTEPs forwarded intra-subnet traffic, and only inter-subnet traffic was forwarded to the centralized L3 gateway.
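The forwarding split can be illustrated with a small Python sketch. This is a simplified model, not vendor code; the addresses, MAC values, and VTEP names are made up:

```python
import ipaddress

L3_GATEWAY = "gw-vtep"   # the single centralized gateway in each run

def next_hop_vtep(dst_ip: str, local_subnet: str,
                  mac_to_vtep: dict, dst_mac: str) -> str:
    """Bridge intra-subnet frames to the learned VTEP; route via the
    centralized gateway otherwise."""
    if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(local_subnet):
        # Intra-subnet: the L2 VTEP tunnels straight to the destination VTEP.
        return mac_to_vtep[dst_mac]
    # Inter-subnet: hand the packet to the centralized L3 gateway.
    return L3_GATEWAY

vteps = {"00:aa:00:00:00:02": "leaf-2"}
same_subnet = next_hop_vtep("10.0.1.2", "10.0.1.0/24", vteps, "00:aa:00:00:00:02")
other_subnet = next_hop_vtep("10.0.2.9", "10.0.1.0/24", vteps, "00:aa:00:00:00:02")
```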
During this test, one of the combinations had an issue where the L3 GW responsible for providing the ARP response did not respond to the ARP request of one of the vendors, which caused the traffic not to flow. We left it for further investigation.

The following vendors participated successfully in this test case:
Centralized L3 Gateway: Arista 7280R3, Aruba CX8325, Juniper PTX10001
L2 VTEP: Arista 7280R3, Aruba CX8325, Aruba CX8360, Aruba CX10000, Juniper QFX5120
CE: Arista 7050SX, Juniper QFX5110
Traffic generator: Keysight IxNetwork
EVPN-VXLAN to EVPN-VXLAN Tunnel Stitching for DCI

With the data center's scaling, the number of VXLAN tunnels also increases dramatically, which burdens the DC gateways between the DC and WAN networks. Therefore, there is a demand for optimizing the VXLAN tunnels between DC and WAN networks, and VXLAN tunnel stitching is a solution for it. VXLAN stitching stitches together specific VXLAN Virtual Network Identifiers (VNIs) to provide Layer 2 stretch between data centers on a granular basis.

We simulated two DC and EVPN domains. eBGP was used to build EVPN-VXLAN inside the same DC/EVPN domain. iBGP was used to create EVPN-VXLAN between the two DCs through the WAN. VXLAN stitching was enabled on the iBGP node to optimize the number of VXLAN tunnels between the two DCs. We had three runs for the test case: a VLAN-based scenario, a VLAN-aware bundle scenario, and an L3 gateway scenario. We verified that bridging traffic (VLAN-based and VLAN-aware bundle) and routing traffic (L3 gateway) worked well without packet loss.

The following vendors participated successfully in this test case:
VXLAN tunnel stitching gateway: Arista 7280R3, Juniper QFX5130, Nokia 7750 SR-1 (VLAN-based only)
PEs: Arista 7050X3, Juniper ACX7100-32C, Juniper PTX10001, Juniper QFX5120, Spirent-STC
CE: Arista 7050SX, Juniper QFX5110
Traffic generator: Spirent-STC
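A minimal sketch of the per-VNI stitching idea at the gateway, under assumed naming (this is an illustration of the concept, not a vendor feature API; domain names and VNI numbers are made up):

```python
# Granular, per-VNI stitching table on the DC gateway: a frame arriving on
# a DC-side VNI is re-encapsulated onto a WAN-side VNI, so each DC
# terminates its own tunnels at the gateway instead of building a full
# mesh of tunnels across DCs.
STITCH_TABLE = {
    ("dc1", 10100): ("wan", 20100),   # VLAN-based service
    ("dc1", 10200): ("wan", 20200),   # VLAN-aware-bundle service
}

def stitch(domain: str, vni: int):
    """Return the (domain, vni) to re-encapsulate onto, or None when the
    VNI is not stretched across the DCI."""
    return STITCH_TABLE.get((domain, vni))

stretched = stitch("dc1", 10100)
local_only = stitch("dc1", 9999)
```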
Then we shut down the DR link to simulate a link failure scenario in the real world. The other two PEGs performed a DR election and chose a new DR to continue forwarding the multicast traffic. We then shut down the link of the second DR and forced the last PEG to be the DR.

E-Tree Service

The EVPN E-Tree service is a rooted-multipoint service over an MPLS core, defined by the MEF (Metro Ethernet Forum). In this service, each customer site is designated as a Root or Leaf site. A Leaf AC can send and receive traffic only from Root ACs, while a Root AC can send traffic to other Roots and to any Leaves. To achieve ingress filtering, the ingress PE colors the ingress MAC addresses with a Root or Leaf indication before advertising them to the other PEs.

We first observed the network status, including the establishment of the IGP and BGP EVPN sessions and the Leaf/Root tags. Secondly, unicast/broadcast traffic from Roots to Roots, Roots to Leaves, and Leaves to Roots was generated without packet loss. Finally, we verified that unicast/broadcast traffic from Leaves to Leaves was filtered.

In this test, Cisco ASR 9903 and Nokia 7750 SR-1 participated as PE devices with both Root and Leaf ACs. Huawei NetEngine 8000 F8 and Juniper MX204 participated as PE devices with Leaf ACs, and Arista 7280R3 participated as a PE with a Root AC. Also, we had Arista 7280R and Cisco XRd as Route Reflectors and Arista 7050SX3 as the CE device. Spirent-STC participated as the Traffic Generator.
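The ingress-filtering rule the test verified can be sketched as follows (illustrative Python; the site names and role assignments are made up):

```python
# Each customer site is colored Root or Leaf; Leaf-to-Leaf traffic is
# filtered, everything else is forwarded.
ROLE = {"ce-root-1": "root", "ce-leaf-1": "leaf", "ce-leaf-2": "leaf"}

def forward_allowed(src: str, dst: str) -> bool:
    """Drop traffic only when both endpoints are Leaf sites."""
    return not (ROLE[src] == "leaf" and ROLE[dst] == "leaf")

root_to_leaf = forward_allowed("ce-root-1", "ce-leaf-1")
leaf_to_root = forward_allowed("ce-leaf-1", "ce-root-1")
leaf_to_leaf = forward_allowed("ce-leaf-1", "ce-leaf-2")
```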
IPv6 BGP Unnumbered Underlay, Overlay and VTEP

The IPv4 addresses running out is not news anymore. The world is slowly moving to IPv6, and the DC network cannot avoid moving to IPv6 either. RFC 5549 offers a solution to forward IPv4 overlay traffic in an IPv6 underlay network. It also uses IPv6 stateless address autoconfiguration to reduce the DC network deployment process within the same DC. Once the underlay is IPv6-ready, the VTEP should eventually also move to IPv6. However, the simulated end hosts in this test were still IPv4; we plan to test a dual-stack and purely IPv6 network next year to demonstrate the path to IPv6.

We sent bidirectional intra- and inter-subnet traffic from simulated IPv4 hosts through the IPv6 underlay and overlay networks. We saw no packet loss during the test.

The following vendors participated successfully in this test case:
IPv6 PEs: Arista 7050X3, Arista 7280R3, Juniper QFX5120, Juniper QFX5130, Keysight IxNetwork
CE: Arista 7050SX, Juniper QFX5110
Traffic generator: Keysight IxNetwork

E-Line Service

The EVPN VPWS (E-Line) is a point-to-point service model with a BGP control-plane architecture. It provides Layer 2 connectivity between two or more customer sites over the provider's MPLS/IP core network and forwards traffic without MAC address lookup. In addition, this service supports single-active or all-active multi-homing capabilities.

We created a mix of multi-homing and single-homing PEs for the E-Line service verification. The test steps included verifying IGP and MP-BGP sessions and VPWS signaling, DF election for single-active multi-homing, and traffic load balancing for all-active multi-homing ESs. We also monitored how the service behaves both when a link failure occurs and when it recovers.
1 Huawei NetEngine 8000 F8 with Cisco ASR 9903 | Arista 7280R3 with Cisco NCS 57C1-48Q6
4 Huawei NetEngine 8000 F8 with Arista 7280R2 | Juniper MX204 with Cisco NCS 57C1-48Q6
5 Arista 7280R2 with Nokia 7750 SR-1 | Juniper MX204 with Cisco NCS 57C1-48Q6
6 Ribbon NPT-2100A with Nokia 7750 SR-1 | Juniper MX204 with Cisco NCS 57C1-48Q6
7 Cisco ASR 9903 with Ribbon NPT-2100A | Juniper ACX7100-32C with Cisco NCS 57C1-48Q6
8 Ribbon NPT-2100A with Huawei NetEngine 8000 F8 | Juniper ACX7100-32C with Cisco NCS 57C1-48Q6
9 Nokia 7750 SR-1 with Juniper ACX7100 | Cisco NCS 57C1-48Q6 with Arista 7280R3
10 Cisco ASR 9903 with Nokia 7750 SR-1 | Cisco NCS 57C1-48Q6 with Arista 7280R3
Flexible Cross-Connect
The Flexible Cross-Connect (FXC) service is introduced
to aid service providers with a large number of ACs
that require backhauling across their MPLS/IP core net-
work. It achieves this by multiplexing multiple ACs into
a single EVPN VPWS service tunnel associated with a
VPWS service ID, thereby reducing the EVPN BGP sig-
naling and associated EVPN labels to VPWS tunnels.
These optimizations are particularly useful for those
who use low-end access routers that may face label
resource challenges.
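The multiplexing idea can be sketched as follows (a simplified model with hypothetical port/VLAN names and service IDs, not vendor configuration):

```python
# Flexible Cross-Connect: many ACs share one EVPN VPWS service tunnel,
# identified by a single VPWS service ID, instead of one tunnel (and one
# EVPN label) per AC.
def build_fxc(service_id: int, acs: list) -> dict:
    """Map every (port, vlan) attachment circuit onto the shared tunnel."""
    return {ac: service_id for ac in acs}

acs = [("ge-0/0/1", 100), ("ge-0/0/1", 101), ("ge-0/0/2", 100)]
fxc = build_fxc(5001, acs)
# One service tunnel covers all three ACs:
tunnels = len(set(fxc.values()))
```

The reduction in signaling state is exactly this collapse: three ACs, one service ID.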
We performed six test runs with single-homed, multi-homed, VLAN-unaware, and VLAN-aware configurations. We verified IGP and MP-BGP sessions and VPWS signaling, then generated bidirectional unicast traffic toward the DUTs without any packet loss, except that we observed 1% packet loss while two vendors were pairing with each other.

CE Device: Arista 7050SX3
Traffic Generator: Spirent-STC
Figure 24: Flexible Cross-Connect
6 Multi-Homed VLAN-Aware All-Active | Ciena 5166 and Juniper ACX7100 | Nokia 7750 SR-1
EVPN-VPLS Seamless Integration

VPLS is a point-to-multipoint Layer 2 VPN service that provides Layer 2 connectivity between geographically separated data centers or customer sites across a provider's MPLS/IP backbone, and it is widely deployed worldwide. EVPN, on the other hand, can provide features including scalability, resiliency, control-plane MAC/IP learning, all-active multi-homing, and MAC mobility. Some service providers prefer to integrate their existing VPLS network with a new EVPN network without any changes to the existing VPLS. The seamless migration can be done on a site-by-site basis per VPN instance and must allow the coexistence of VPLS and EVPN simultaneously: a PE device may serve some customers using VPLS, while others might have been migrated to EVPN.

In two runs, we conducted this test with both LDP and BGP signaling for VPLS. After verifying the IGP and BGP status in both runs, we started sending traffic toward the PE devices and proved that all the PEs were using VPLS PWs to forward traffic. Then we enabled EVPN on the EVPN/VPLS PEs. As soon as the EVPN service came up, the PEs advertised EVPN Inclusive Multicast routes and route type-2 and discovered each other through EVPN routes. As a result, EVPN-enabled PEs shut down the PWs between each other and forwarded traffic using the EVPN service; however, they kept forwarding traffic to the VPLS PEs using VPLS pseudowires.

In the first run, we used LDP signaling for VPLS; Cisco ASR 9903, Nokia 7750 SR-1, and Huawei NetEngine 8000 F8 participated as EVPN/VPLS PEs, while Arista 7280R3 and Juniper MX204 participated as VPLS PEs.

In the second run, where we used VPLS with BGP signaling, Cisco ASR 9903 and Nokia 7750 SR-1 participated as EVPN/VPLS PEs. Juniper MX204 participated as a VPLS PE. Spirent-STC participated as the Traffic Generator.
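The per-peer forwarding preference observed in this test can be sketched as follows (assumed logic mirroring the observed behavior; PE names are illustrative):

```python
# During VPLS-to-EVPN migration, a PE prefers EVPN once a remote peer is
# discovered through EVPN routes, and keeps the pseudowire only for peers
# known solely via VPLS.
def forwarding_method(peer: str, evpn_discovered: set) -> str:
    """Prefer the EVPN service once the peer is discovered via EVPN."""
    return "evpn" if peer in evpn_discovered else "vpls-pseudowire"

evpn_discovered = {"PE1", "PE2"}          # advertised IMET and RT-2 routes
path_pe2 = forwarding_method("PE2", evpn_discovered)
path_pe3 = forwarding_method("PE3", evpn_discovered)  # still VPLS-only
```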
In the third scenario, we verified the interworking of SR-MPLS with IP VPN routes and EVPN VXLAN v6 (IPv6 VTEPs).

The fourth and fifth scenarios verified the interworking of SR-MPLS and EVPN VXLAN. In scenario four, we had IP VPN routes in the SR-MPLS domain, while we changed it to EVPN Route Type-5 in the fifth scenario.

The following vendors participated successfully in this test case:
Gateways: Huawei NetEngine 8000 F8, Nokia 7750 SR-1, Arista 7280R3, and Cisco ASR 9903
SR-MPLS PE devices: Arista 7280R3
SRv6 PE devices: Arista 7280R3, Cisco ASR 9903
Traffic Generator: Spirent-STC
Segment Routing—SRv6

Segment Routing version 6 (SRv6) (RFC 8754, RFC 8986) has become a powerful option for meeting the changing needs of modern networks. It enables network administrators to set up and control network routes in a more granular and flexible manner, allowing the development of customized network services to satisfy the demands of particular applications and user groups. It also makes network operations easier by minimizing the number of protocols and control planes required to run the network.

For the first time in our annual interoperability event, we conducted tests on multi-vendor SRv6 with micro segments (µSID for short). The µSID solution is an extension to the SRv6 Network Programming model (RFC 8986) which allows the expression of SRv6 segments in a very compact and efficient representation. It is defined as the NEXT Compressed-SID flavor in the IETF draft draft-ietf-spring-srv6-srh-compression.

These tests covered SRv6 BGP-based overlay services (RFC 9252), including L3VPN, EVPN VPWS, EVPN RT-5, and EVPN E-LAN.

Additionally, we confirmed several underlay test cases, including TI-LFA (Topology-Independent Loop-Free Alternate), Flex Algo, summarization, UPA (Unreachable Prefix Announcement), and SR-TE policies while implementing µSID. We also verified most of the previous tests using the full SID.

Figure 36: L3VPN over SRv6 (µSID)

These devices participated successfully as:
PE: Arista 7280R, Arrcus UfiSpace S9600, Cisco 8201, Cisco ASR 9902, Huawei NetEngine 8000 F8, Juniper ACX7100-32C, Keysight IxNetwork, Nokia 7750 SR-1, Spirent-STC
Route Reflector: Cisco ASR 9902
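As an illustration of why µSIDs are compact, the following sketch packs 16-bit µSIDs behind an assumed 32-bit block into a single 128-bit SID. The block value, the µSID IDs, and the 32/16-bit split are made-up deployment choices for illustration, not values from the test:

```python
import ipaddress

def pack_usids(block: int, usids: list) -> ipaddress.IPv6Address:
    """Pack up to six 16-bit µSIDs after a 32-bit block into one
    128-bit SRv6 SID; the remaining space is zero-filled."""
    assert len(usids) <= 6
    value = block << 96                      # 32-bit block at the top
    shift = 80                               # first µSID sits right below
    for usid in usids:
        value |= usid << shift
        shift -= 16
    return ipaddress.IPv6Address(value)

# Three node instructions carried in a single 128-bit SID:
sid = pack_usids(0xFCBBBBBB, [0x0100, 0x0200, 0x0300])
```

One 128-bit SID thus expresses a list of node instructions that would otherwise need one full SID each.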
The Ethernet Auto-Discovery route (Route Type 1) was used to advertise point-to-point service IDs, while the locator was configured to support End.DX2 behavior (endpoint decapsulation and L2 cross-connect). For the single-homed scenario, each node was configured with the same EVPN Instance (EVI) and enabled BGP to advertise and accept the EVPN NLRI for SRv6 services.

Cisco NCS 540-28Z4C | Huawei NetEngine 8000 F8
Nokia 7750 SR-1 | Huawei NetEngine 8000 F8
Huawei NetEngine 8000 F8 | Spirent-STC
Nokia 7750 SR-1 | Keysight IxNetwork
EVPN E-LAN using µSID

We verified multipoint-to-multipoint Ethernet services over an SRv6-based network using µSID.

The setup consisted of three PE routers that worked as ABRs between two ISIS areas. The locator summary was configured in both directions using the three ABRs. Of these, two sent locator summary advertisements through both the IP Prefix Reachability TLV and the SRv6 Locator TLV, while one sent them only through the IP Prefix Reachability TLV. We verified the received prefixes by checking the ISIS database.

As a result of the missing SRv6 Locator TLV, the PE in L2 was not able to resolve the service route, and traffic from L2 to L1 used the remaining ABRs.

The summarization was accomplished by the following ABRs: Cisco ASR 9902, Huawei NetEngine 8000 F8, and Nokia 7750 SR-1, with Juniper ACX7100-32C and Spirent-STC as PEs.
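The coverage relation at the heart of locator summarization — whether a specific locator falls inside an advertised summary prefix — can be sketched with Python's `ipaddress` module (the prefixes are illustrative, not the ones used in the test):

```python
import ipaddress

def covered_by_summary(locator: str, summary: str) -> bool:
    """True when the locator prefix is contained in the summary prefix."""
    return ipaddress.ip_network(locator).subnet_of(
        ipaddress.ip_network(summary))

inside = covered_by_summary("fc00:2:1::/48", "fc00:2::/32")   # summarized
outside = covered_by_summary("fc00:3:1::/48", "fc00:2::/32")  # not covered
```

The same check is what lets an ABR detect that a lost locator is hidden inside its summary, which is the trigger for the UPA mechanism tested below.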
We conducted a test in which we evaluated the performance of the Flexible Algorithm using delay metrics. Some participant nodes utilized dynamic link delay measurement (TWAMP) for path calculation and incorporated the resulting latency values in ISIS (RFC 7810). Additionally, the summarized locators in the Flexible Algorithm were displayed with aggregated metrics. We verified the data plane with the correct traffic passing through the FA 128 plane with no issues.

Unreachable Prefix Announcement

According to draft-ppsenak-lsr-igp-ureach-prefix-announce-02, when summarization is used, it is important to notify the network of a loss of reachability to a specific prefix that is included in the summary. This enables quick convergence away from paths that lead to the node which can no longer be reached.

In this test, we verified the process of advertising such a loss of prefix reachability using the Unreachable Prefix Announcement (UPA).

The setup included an ABR that was in charge of the summary, an ingress PE, and two egress PEs. When the ABR lost connectivity to one of the nodes in domain 2, it identified that the node's locator was included in the summary prefix, created a UPA for that locator, and distributed it in domain 1. After receiving the UPA via IGP, the ingress PE switched to the backup path; this transition took 134 ms for convergence.

SRv6 TE SR Policies with Explicit Paths

SRv6 traffic engineering (SRv6 TE) utilizes the concept of source routing, where the origin calculates the route and encodes it in the packet header as a sequence of segments. This sequence of segments is added to the incoming packet via the SRv6 Segment Routing Header (SRH).

To control the flow of traffic through the network, SRv6 traffic engineering utilizes a policy that contains groups of segments. An explicit policy in SRv6 traffic engineering is a collection of IPv6 addresses that represents an ordered list of segment IDs. The policy path is predetermined, as the operator defines the segment list statically.

To create an explicit policy, vendors established one or more segment lists, provided a policy name, endpoint, and color, and then linked the policy to a segment list. This test was completed using both µSID and full SID.
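A minimal model of an explicit policy and the resulting SRH ordering: RFC 8754 stores the segment list in reverse order, with Segments Left pointing at the first segment to visit. All policy names, colors, and addresses below are illustrative:

```python
# An explicit SRv6 TE policy is keyed by (color, endpoint) and carries a
# statically defined, ordered segment list.
policies = {}

def add_policy(name: str, color: int, endpoint: str, segment_list: list):
    policies[(color, endpoint)] = {"name": name, "segments": segment_list}

def build_srh(color: int, endpoint: str) -> dict:
    """Encode the policy's segment list into SRH order (reversed), with
    Segments Left indexing the first segment to visit."""
    p = policies[(color, endpoint)]
    segs = list(reversed(p["segments"]))
    return {"segment_list": segs, "segments_left": len(segs) - 1}

add_policy("to-pe2-low-delay", 100, "fc00:2::1",
           ["fc00:a::1", "fc00:b::1", "fc00:2::1"])
srh = build_srh(100, "fc00:2::1")
```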
Figure 53: TI-LFA over SRv6 using µSID

PLR nodes: Arrcus UfiSpace S9600-72XC, Cisco ASR 9902, Huawei NetEngine 8000 F8, Nokia 7750 SR-1
PQ nodes: Arrcus UfiSpace S9600-72XC, Huawei NetEngine 8000 F8
P nodes: Arrcus UfiSpace S9600-72XC, Juniper ACX7100-32C
Traffic Generator: Spirent-STC

Segment Routing—SR-MPLS

Segment Routing Multi-Protocol Label Switching (SR-MPLS) has emerged as the de-facto industry standard for meeting the requirements of modern networks as the world becomes more interconnected and reliant on high-speed data transfer. Lately, there have been many efforts to establish end-to-end intent-aware paths across multiple domains of service provider environments. At EANTC, we tested a BGP-based routing solution dedicated to this goal, called BGP Classful Transport Planes. As SR-MPLS is particularly popular in Inter-AS (Autonomous System) networks, which connect numerous network domains owned by different enterprises, we tested several mechanisms, such as BGP-LS (Link State), Flexible Algorithm Prefix Metric (FAPM), and AS chaining options. This year, we placed significant emphasis on OSPF segment routing, which included the implementation of Flex Algo and FAPM (done for the first time in our interop event), as well as covering mechanisms for fast re-routing, performance measurement, failure discovery, and SR traffic steering.

L3VPN Services

As a preliminary test for interoperability in SR-MPLS, we utilized L3VPN services. The participating nodes were interconnected in a spine-leaf topology and established ISIS/OSPF sessions with one another. The routing tables contained the loopback addresses and corresponding SIDs of the involved PEs. After verifying that all VPNv4 and VPNv6 services were operational on the vendor devices, we proceeded to generate IPv4 and IPv6 traffic between each pair of PEs, which did not result in any packet loss.
Figure 56: L3VPN over SR-MPLS (OSPF)

The following devices passed the test as PEs: Arista 7280R, Cisco NCS 540-24Q8L2DD, Ericsson 6673, Huawei NetEngine 8000 F8, Juniper ACX-7100, Juniper PTX10001-36MR, Nokia 7750 SR-1
Traffic Generator for both: Keysight IxNetwork

Flexible Algorithm

Flexible Algorithm over ISIS

IGP protocols historically compute the best paths over the network based on the IGP metric assigned to the links. IGP Flexible Algorithm (RFC 9350) enhances IGPs to compute the best paths based on a given combination of calculation-type, metric-type, and constraints. With Flexible Algorithm, an operator can associate one or more SR-MPLS Prefix-SIDs or SRv6 locators with a particular Flex-Algorithm. Each such Prefix-SID or SRv6 locator then represents a path that is computed according to the identified Flex-Algorithm.

In our test, we confirmed the generation of multiple network planes utilizing the Flexible Algorithm based on ISIS. We employed three different flex algorithms: FA 128 was based on the minimum delay metric, FA 129 was based on the IGP metric and the exclusion of interfaces with a given link administrative group (green affinity), and FA 130 relied on the TE metric. All participants had TE attributes advertised in Flex-Algo-specific Application-Specific Link Attribute (ASLA) sub-TLVs. One vendor could not generate a SID per Flex-Algo with a single loopback IP, so they did not participate. Two vendors had to utilize a knob to prevent fallback to the native algorithm 0 LSP; with that fallback prevented, Flex-Algo 129 with "exclude green" worked. Since Ribbon was one of the vendors that only supported Flex Algo Legacy and not ASLA, we conducted a test with the Legacy flag enabled. This test included two different constraints: the use of a manually configured delay metric with algorithm 128 and a TE metric with algorithm 130.

For the Legacy test: Juniper PTX10001-36MR, Keysight IxNetwork, Ribbon NPT-2100

SR-MPLS OAM

The ability to quickly identify and troubleshoot network failures is essential for network operators. To help with this task, RFC 8287 defines a set of tools for detecting and diagnosing network issues, including Label Switched Path (LSP) Ping/Traceroute for Segment Routing IGP-Prefix Segment Identifiers (SIDs) with the MPLS data plane. These tools are widely used in networks to test connectivity, measure latency, and identify the location of network faults. In this context, we conducted a verification of the troubleshooting and failure detection tools. All participants successfully performed the test except for one vendor that supports only ping over SR-MPLS.

Flexible Algorithms over OSPF

Flex Algo Prefix Metric over OSPF

Flexible Algorithm can provide the optimal path to a destination in a remote area or IGP domain. RFC 9350 outlines a sub-TLV for the OSPF Flexible Algorithm Prefix Metric (FAPM) so that the calculation of the best path across multiple areas takes into account the constraints used for Flexible Algorithm paths.
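What a Flex-Algo computation does can be sketched on a toy topology: the same graph yields different shortest paths depending on the metric type and on link exclusion by affinity, as with FA 128/129/130 above. This is plain Dijkstra over made-up link attributes, not router code:

```python
import heapq

def spf(links, src, dst, metric, exclude_affinity=None):
    """links: (a, b, {'igp': .., 'delay': .., 'affinity': set}) tuples.
    Returns the shortest path under the chosen metric and constraint."""
    adj = {}
    for a, b, attrs in links:
        if exclude_affinity and exclude_affinity in attrs.get("affinity", set()):
            continue  # constraint: prune links carrying the excluded color
        adj.setdefault(a, []).append((b, attrs[metric]))
        adj.setdefault(b, []).append((a, attrs[metric]))
    best, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for nxt, cost in adj.get(node, []):
            nd = d + cost
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return None

links = [
    ("PE1", "P1", {"igp": 10, "delay": 50, "affinity": {"green"}}),
    ("P1", "PE2", {"igp": 10, "delay": 50, "affinity": set()}),
    ("PE1", "P2", {"igp": 30, "delay": 5, "affinity": set()}),
    ("P2", "PE2", {"igp": 30, "delay": 5, "affinity": set()}),
]
fa0 = spf(links, "PE1", "PE2", "igp")             # default IGP plane
fa128 = spf(links, "PE1", "PE2", "delay")          # min-delay plane
fa129 = spf(links, "PE1", "PE2", "igp", "green")   # IGP, exclude green
```

Each algorithm produces its own plane: the default IGP plane prefers P1, while the delay-based and "exclude green" planes both steer via P2.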
Traffic Generator: Keysight IxNetwork, Impairment de- that ensures sufficient bandwidth or minimal delay for
Impairment device: Calnex SNE

FAPM to allow an optimal end-to-end path for an inter-area prefix. The area border router (ABR) must include the FAPM when advertising the prefix between areas that is reachable in that given Flexible Algorithm. The testing was conducted in a setup comprising three OSPF areas, with both Flex Algo 128 (based on the delay metric) and Flex Algo 129 (based on the IGP metric) configured in all areas. To advertise a prefix between areas, the area border router (ABR) included the FAPM for the corresponding Flex Algo. To establish a tunnel between the endpoints (PEs), the delay-metric FA128 was used to select the path with the least delay through all three domains. When the delay changed within the middle area, the tunnel was switched to ensure that the delay-metric FA128 path with the least delay was maintained.

The test was carried out successfully with the following devices: Cisco NCS 540-24Q8L2DD, Juniper MX204, Juniper PTX10001-36MR, Juniper ACX7100-32C, Nokia 7750 SR-1
Traffic Generator: Keysight IxNetwork
Impairment device: Calnex SNE

In order to enable FAPM, the FAD Flags-TLV requires the M-flag to be set when advertising, to ensure OSPF routers use Flex-Algorithm-aware metrics for inter-area routing. During testing, one vendor had to correct their M-flag implementation to successfully complete the test.

these high-priority traffic flows.

RFC 9256 (SR Policy Architecture) details the concept of an SR Policy and its associated steering mechanisms. A headend can steer a packet flow into a valid SR Policy in various ways:
• Binding SID Steering: incoming packets have an active SID matching a local BSID at the headend.
• Per-Destination Steering: incoming packets match a BGP/service route, which recurses on an SR Policy.
• Per-Flow Steering: incoming packets match or recurse on a forwarding array of which some of the entries are SR Policies.
• Policy-Based Steering: incoming packets match a routing policy that directs them onto an SR Policy.

Initially, SR-TE was implemented in SR-MPLS using binding SIDs, then prefix-based steering. Finally, traffic flow characteristics, such as the DSCP value, were employed to steer traffic along a specific path in order to optimize network performance.

Keysight IxNetwork was used as a traffic generator for these tests.
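The Flex-Algo path selection exercised in the FAPM test above is, at its core, a shortest-path computation over the algorithm's metric type. A minimal Python sketch on a hypothetical three-area topology (illustrative nodes and metric values, not the tested setup) shows how a delay-metric algorithm such as FA128 can choose a different path than an IGP-metric algorithm such as FA129:

```python
import heapq

def shortest_path(links, src, dst, metric):
    """Dijkstra over a per-algorithm metric ('delay' or 'igp')."""
    graph = {}
    for (a, b), m in links.items():
        graph.setdefault(a, []).append((b, m[metric]))
        graph.setdefault(b, []).append((a, m[metric]))
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical topology: each link carries a delay metric and an IGP metric.
links = {
    ("PE1", "ABR1"): {"delay": 10, "igp": 10},
    ("ABR1", "ABR2"): {"delay": 50, "igp": 10},   # low IGP cost, high delay
    ("ABR1", "P1"):  {"delay": 10, "igp": 100},
    ("P1", "ABR2"):  {"delay": 10, "igp": 100},   # low delay, high IGP cost
    ("ABR2", "PE2"): {"delay": 10, "igp": 10},
}

fa128_path, _ = shortest_path(links, "PE1", "PE2", "delay")  # delay-based
fa129_path, _ = shortest_path(links, "PE1", "PE2", "igp")    # IGP-based
```

With these illustrative metrics, the delay-based algorithm routes via P1 while the IGP-based algorithm takes the direct inter-ABR link, which mirrors the behavior observed when the delay in the middle area changed.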
PE 1 (Head End) PE 2
Nokia 7750 SR-1 Juniper PTX10001-36MR
Juniper PTX10001-36MR Nokia 7750 SR-1
Huawei NetEngine 8000 F8 Nokia 7750 SR-1
Cisco NCS 540-24Q8L2DD Nokia 7750 SR-1
Ciena 5166 Juniper PTX10001-36MR
Nokia 7750 SR-1 Huawei NetEngine 8000 F8
Huawei NetEngine 8000 F8 Cisco NCS 540-24Q8L2DD
Cisco NCS 540-24Q8L2DD Huawei NetEngine 8000 F8
Ribbon NPT 2100A Huawei NetEngine 8000 F8
Nokia 7750 SR-1 Cisco NCS 540-24Q8L2DD
PE 1 (Head End) PE 2
Nokia 7750 SR-1 Juniper PTX10001-36MR
Juniper PTX10001-36MR Nokia 7750 SR-1
Huawei NetEngine 8000 F8 Ribbon NPT-2100A
Cisco NCS 540-24Q8L2DD Nokia 7750 SR-1
Nokia 7750 SR-1 Cisco NCS 540-24Q8L2DD
Cisco NCS 540-24Q8L2DD Arista 7280R
Arista 7280R Cisco NCS 540-24Q8L2DD
Ericsson 6673 Juniper PTX10001-36MR
PE 1 (Head End) PE 2
Nokia 7750 SR-1 Juniper PTX10001-36MR
Juniper PTX10001-36MR Nokia 7750 SR-1
Cisco NCS 540-24Q8L2DD Nokia 7750 SR-1
Ciena 5166 Juniper PTX10001-36MR
Nokia 7750 SR-1 Huawei NetEngine 8000 F8
Huawei NetEngine 8000 F8 Cisco NCS 540-24Q8L2DD
Cisco NCS 540-24Q8L2DD Huawei NetEngine 8000 F8
Ribbon NPT-2100A Huawei NetEngine 8000 F8
Nokia 7750 SR-1 Cisco NCS 540-24Q8L2DD
Huawei NetEngine 8000 F8 Ribbon NPT-2100A
Keysight IxNetwork all previous devices
Inter AS SR-MPLS
Inter-AS connectivity is an essential aspect of modern
network design that enables Service Providers to offer
end-to-end services across multiple autonomous systems
(AS). RFC 4364 describes two widely deployed meth-
ods for achieving inter-AS connectivity: Inter-AS Option
B and Inter-AS Option C. These methods provide Ser-
vice Providers with the flexibility to extend their net-
works while maintaining control over their own routing policies.

Figure 65: Inter AS Option C
We conducted tests for both inter-AS Options B and C according to RFC 4364. During testing, however, we encountered an issue between two Autonomous System Boundary Routers (ASBRs), as each ASBR supported different SR Global Block (SRGB) ranges. To resolve this issue, we introduced a third boundary router capable of stitching labels between inconsistent SRGB and/or dynamic label ranges. This validated the co-existence of BGP-SR with domains with heterogeneous SRGBs and/or non-SR domains.

Figure 66: Inter AS Option B

Arista 7280R and Ericsson 6673 participated as the boundary routers, and Ciena 5166 and Huawei NetEngine 8000 F8 as the PEs.
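The SRGB mismatch can be illustrated with the standard SR-MPLS label derivation: the transport label for a prefix SID is the SRGB base plus the SID index, so the same index yields different label values in domains with different SRGBs. The base values below are illustrative, not the ranges configured in the test:

```python
def sid_to_label(srgb_base: int, srgb_size: int, sid_index: int) -> int:
    """SR-MPLS: the label for a prefix SID is SRGB base + SID index."""
    if not 0 <= sid_index < srgb_size:
        raise ValueError("SID index outside the SRGB")
    return srgb_base + sid_index

# Illustrative, mismatched SRGBs on two boundary routers.
label_asbr1 = sid_to_label(srgb_base=16000, srgb_size=8000, sid_index=101)
label_asbr2 = sid_to_label(srgb_base=900000, srgb_size=8000, sid_index=101)

# Same SID index, different labels: a stitching node must swap the
# incoming label for the label that is valid in the next domain.
assert label_asbr1 != label_asbr2
```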
Figure 67: LDP and SR Interworking

Figure 68: SR-MPLS Delay Measurement using TWAMP
Topology Independent Loop Free Alternative over SR-MPLS

To test link and SRLG TI-LFA over an SR-MPLS network, we created a topology consisting of four nodes, with each participating vendor configuring the network nodes for an L3VPN service. Prior to the link failure, traffic was forwarded from the ingress PE (PLR) to the directly connected egress PE. To simulate a link failure, we asked the egress PE vendor to disconnect the protected link between the egress and ingress nodes while traffic continued to flow from the generator toward the ingress PE.

Out-of-service times ranged from 3 ms to 34 ms. For local SRLG, PLR nodes used a port to repair the link fault, regardless of the cost, because it shared the same SRLG as the failed port. Failover times ranged from 3 ms to 15 ms for the two combinations we tested.

The following devices successfully participated in the test: Arista 7280R, Ciena 5166, Ericsson 6673, Huawei NetEngine 8000 F8, Ribbon NPT-2100A
Traffic Generator: Keysight IxNetwork
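Out-of-service times in tests like this are typically derived from frame loss at a known constant transmit rate; a minimal sketch of that calculation (the rate and loss count are illustrative, and the exact measurement method is an assumption):

```python
def out_of_service_ms(lost_frames: int, frames_per_second: float) -> float:
    """Out-of-service time: frames lost during the failover event
    divided by the constant transmit rate, in milliseconds."""
    return lost_frames / frames_per_second * 1000.0

# Illustrative: 3,400 frames lost at 100,000 frames/s -> 34 ms,
# the upper end of the TI-LFA range reported above.
oos = out_of_service_ms(3400, 100_000)
```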
Seamless BFD

Seamless BFD, or S-BFD, simplifies BFD usage by eliminating a large proportion of negotiation aspects, which leads to quick provisioning and improved control and flexibility for network nodes initiating path monitoring.

In this test, we verified the ability of an SR policy to steer traffic into an SR-TE tunnel, and how S-BFD can detect link failures and trigger SR-TE hot-standby protection. To perform the test, we created a triangle topology consisting of an egress PE, an ingress PE, and one P router. Each pair of PEs was configured with two SR-TE policies: a primary path and a backup. The initiator interval was set to 20 ms, allowing an acceptable out-of-service time between 40 and 100 ms. We generated traffic between the Initiator and Reflector through the P node, which served as the longer primary path. To demonstrate S-BFD's role in network convergence, we emulated a tear-down session by shutting down a remote port and observed the traffic switch to the backup SR-MPLS TE path in 75 ms.

We also verified S-BFD sessions established between different vendors and configured an ACL to filter BFD packets, which resulted in the sessions going down and the traffic switching to the backup path.

Figure 72: Seamless BFD

The following devices participated successfully as Initiator and Reflector: Ericsson 6673, Ribbon NPT-2100A

IPv6 BGP-LU

BGP-LU (Labeled Unicast) is used to provide connectivity between regions by advertising PE loopbacks and label bindings.

In this test, we verified using BGP to exchange reachability information among the routers in the network, including the IPv6 prefixes and the next-hop information.

The Spine node established BGP peering with two neighbors and activated the labeled-unicast IPv6 address family, allowing the router to forward traffic using MPLS labels. The Leaf nodes were configured to advertise the BGP Prefix-SID attribute in the BGP-LU NLRI.
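The relationship between the S-BFD initiator interval and the acceptable out-of-service window mentioned in the Seamless BFD test above follows the standard BFD detection-time rule; the detection multiplier below is an assumption for illustration, not a value stated in the report:

```python
def detection_time_ms(tx_interval_ms: float, detect_multiplier: int) -> float:
    """BFD-style detection time: the session is declared down after
    detect_multiplier consecutive missed transmit intervals."""
    return tx_interval_ms * detect_multiplier

# 20 ms initiator interval with an assumed multiplier of 3 gives a
# 60 ms detection time, inside the 40-100 ms window cited above.
dt = detection_time_ms(20.0, 3)
assert 40.0 <= dt <= 100.0
```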
• The DUTs initiated the IGP adjacencies, and we confirmed that the connection was established. IS-IS was used as the IGP for this test.
• We validated the stateful PCEP session, PCE path instantiation, and LSP state synchronization.
• Instead of creating VPN services, we used LSP-pings to ensure that transport paths were installed correctly for this test.

The combinations that completed the test with SR-MPLS as the data plane and PCC-initiated paths are listed in Table 16.

PCE: Juniper Paragon Pathfinder
PCC: Cisco NCS 540-24Q8L2DD

With it, an SR Policy is modeled in PCEP as an association of one or more SR Candidate Paths. PCEP extensions are defined to signal additional attributes of an SR Policy which were not covered by [RFC8664].

This test confirmed that the Path Computation Element with PCEP can effectively signal Segment Routing policies to Path Computation Clients in a multi-vendor environment. The main goal of the test was to verify that the PCE and each PCC could operate together seamlessly, without one PCC's colored policy signaling being reliant on the other PCC.

To conduct the test, the following steps were performed. First, the DUTs established IGP adjacencies using IS-IS, which were confirmed to be established. Second, the PCEP session, PCE path instantiation, and LSP state synchronization were validated. Third, LSP-pings were used instead of creating VPN services to verify proper path initiation. Finally, the PCE signaled a colored SR Policy to the PCC, and the test was conducted using SR-MPLS and SRv6 as the data plane.

Table 17 shows the combinations that interoperate seamlessly over the SR-MPLS data plane.
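The SR Policy model signaled in these tests can be sketched as a small data structure: per RFC 9256, a policy is keyed by (color, endpoint) and activates the valid candidate path with the highest preference. The classes and values below are illustrative, not a vendor API:

```python
from dataclasses import dataclass, field

@dataclass
class CandidatePath:
    preference: int
    segment_list: list      # SID list (labels for SR-MPLS, SIDs for SRv6)
    valid: bool = True      # e.g. validated by the headend or S-BFD

@dataclass
class SRPolicy:
    color: int
    endpoint: str
    candidate_paths: list = field(default_factory=list)

    def active_path(self):
        """RFC 9256 selection: the valid candidate path with the
        highest preference becomes the active path."""
        valid = [p for p in self.candidate_paths if p.valid]
        return max(valid, key=lambda p: p.preference) if valid else None

# Illustrative policy with a primary and a backup candidate path.
policy = SRPolicy(color=100, endpoint="2001:db8::2", candidate_paths=[
    CandidatePath(preference=200, segment_list=[16001, 16002]),
    CandidatePath(preference=100, segment_list=[16003]),
])
primary = policy.active_path()           # preference 200 wins
policy.candidate_paths[0].valid = False  # e.g. the primary is declared down
backup = policy.active_path()            # falls back to preference 100
```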
Nokia Network Service Platform (NSP) | Ciena 5166 | Cisco NCS 540-24Q8L2DD
Nokia Network Service Platform (NSP) | Nokia 7750 SR-1 | Huawei NetEngine 8000 M4
Grandmaster | Boundary Clock 1 | Boundary Clock 2 | Slave Clock | Time Error Analyzer
Microchip TimeProvider 4100 | Ericsson 6673 | Cisco NCS 540X-16Z4G8Q2C | Ericsson 6675 | Calnex Sentry
Microchip TimeProvider 4100 | Ericsson 6673 | - | Juniper ACX7100-48L | Calnex Sentry
Microchip TimeProvider 4100 | Ericsson 6673 | - | Intel E810-XXVDA4T | Calnex Sentry

Table 24: Assisted Partial Timing Support Delay Asymmetry
During two testing combinations, the devices playing the role of Boundary Clock-2, which had to use the G.8275.2 PTP profile, had connectivity problems and configuration confusion with this profile, so we had to exclude them from the test and continue with one Boundary Clock and one Slave Clock.

Time to lock and stabilize for devices using the G.8275.2 profile is typically longer than for those using G.8275.1, which limited the actual measurement duration.

Delay Asymmetry Measurement

The second asymmetry-related test performed was to detect and compensate the asymmetry either automatically, when supported, or manually. The test cases were designed to cover the different methods vendors have of handling compensation in real-world implementations.

The test topology consisted of:
• Grandmaster-A (GM-A) connected to GNSS, used as the main reference for the topology.
• Grandmaster-B (GM-B) connected to GNSS, used as backup.
• Boundary Clock (BC), connected to both GMs and configured using local priorities to select GM-A when it is (or both GMs are) locked to GNSS.

We employed three distinct methods for conducting the test, whereby the BC was linked to both GMs and set to lock on GM-A in all three methods. We initiated the measurement process by detaching GM-A from the GNSS, which led to the BC locking onto GM-B. We then introduced the asymmetry subsequent to the GM switchover.

First Approach – Manual Delay Compensation: The BC followed the asymmetry, and then the vendors' engineers compensated the delay manually through CLI commands. The following combinations passed the test with this approach with 500 nanoseconds as one-way delay.

Second Approach – Automatic Delay Compensation: The BC followed the asymmetry and compensated the delay automatically, without manual intervention. The following combinations passed the test with this approach with 500 nanoseconds as one-way delay.

Third Approach – Manual Delay Compensation with GNSS Reference: The Boundary Clock in this approach needed manual compensation to overcome the introduced one-way delay, and needed the GNSS reference to detect the asymmetry. The one-way delay was 500 nanoseconds for this combination.

No interoperability issues were observed during this test; the only problem we faced was the lack of Boundary Clocks that can have two slave ports with different PTP profiles, as originally planned.
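The asymmetry error being compensated in these tests follows directly from PTP's two-way offset calculation, which assumes a symmetric path: an uncompensated one-way asymmetry A biases the offset estimate by A/2, which is what the compensation removes. A small sketch with the 500 ns value used in the test (the timestamps themselves are illustrative):

```python
def ptp_offset_ns(t1, t2, t3, t4):
    """Two-way PTP offset estimate, which assumes symmetric path delay:
    offset = ((t2 - t1) - (t4 - t3)) / 2."""
    return ((t2 - t1) - (t4 - t3)) / 2

# True clock offset 0 ns; symmetric delay of 1000 ns each way, plus the
# introduced 500 ns of extra delay in the forward direction only.
d, asym = 1000, 500
t1 = 0
t2 = t1 + d + asym          # Sync arrives late by the asymmetry
t3 = 10_000
t4 = t3 + d                 # Delay_Req sees the nominal delay

error = ptp_offset_ns(t1, t2, t3, t4)   # biased by asym / 2 = 250 ns

# Manual or automatic compensation supplies the known one-way
# asymmetry so the estimate can be corrected back toward zero.
corrected = error - asym / 2
```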
Boundary Clock Class C/D Conformance Test

Class C/D Boundary Clocks have been specifically designed to fulfill the stringent demands for time synchronization in modern networks, thereby facilitating top-notch applications like ultra-reliable low-latency communication, which are integral to 5G mobile networks.

Over the years at the EANTC MPLS SDN Interoperability event, we have had the opportunity to witness the advancements in Boundary Clocks. Two years ago, it was unusual for a device to meet Class D specifications, whereas this year almost all Boundary Clocks tested passed the Class D conformance test.

This test is not one of interoperability, as it tests the time error performance of only a single device, but it was used to qualify devices before their participation in the Class D chain tests.

The test was done by using the Calnex Paragon-neo to emulate the Grandmaster and the Slave Clock, with the device under test connected directly as the Boundary Clock. As per the requirements of G.8273.2, we measured the low-pass filtered two-way time error with an applied limit of 5 ns.

The boundary clocks enabled both PTP and SyncE on the link towards the slave clock, as they were configured with the PTP G.8275.1 hybrid profile. Additionally, we performed the conformance test for Boundary Clock Class C, complying with the latest ITU-T G.8273.2 clause 7.1.4, by measuring the relative constant time error between two ports of the boundary clock.

Based on observations from previous years that a device's time error performance may vary across its different port speeds, some devices were tested at various line rates.
High-Precision Clocking Source Failover

Testing time synchronization in a well-controlled lab environment typically yields favorable outcomes, but also fails to represent real-world conditions. This is the motivation behind this test case, which measures the time error produced by the Boundary Clock in a realistic topology with redundant Grandmasters. These Grandmasters may encounter GNSS connectivity interruptions which cause changes in the PTP source used by the Boundary Clock. This test case measures the time error of the Boundary Clock during the switchover between Grandmaster references (i.e. during the BMCA event), and also when the Boundary Clock is in holdover due to both Grandmasters having lost GNSS connectivity.

The test topology consisted of:
• Grandmaster-A (GM-A) connected to GNSS, used as the main reference for the topology.
• Grandmaster-B (GM-B) connected to GNSS, used as backup.
• Boundary Clock (BC), connected to both GMs and configured using local priorities to select GM-A when it is (or both GMs are) locked to GNSS.

At the test start, both GM-A and GM-B were locked to their GNSS inputs, and the Boundary Clock was locked via PTP to GM-A. The first measurement phase ran for 1000 seconds (to allow calculation of the Constant Time Error, cTE). The GNSS input to GM-A was then disconnected, causing the BC to select GM-B as its reference. The next step required also disconnecting GNSS from GM-B, forcing the BC to re-lock onto GM-A based on the configured local priorities. GNSS was then reconnected to GM-B and then to GM-A. 1PPS and two-way time error outputs from the Boundary Clock were measured at each of these steps.

Passing this test requires that the measured time error at the Boundary Clock output meets G.8271 accuracy level 6 or better, i.e. ≤ 260 ns.

No interoperability issues were seen during this test case, but one Grandmaster was observed to transmit clock Class 6 after disconnection from GNSS (rather than clock Class 7), which is non-compliant with the requirements of the relevant ITU-T recommendation G.8275.1, clause 6.4.
Time Synchronization Source Failover

This test was part of the proposed resiliency tests for time synchronization, adding an additional boundary clock at the end of the chain, which makes the topology a more real-life scenario. The test topology consisted of:
• Grandmaster A (GM A) connected to GNSS, used as the main reference for the topology.
• Grandmaster B (GM B) connected to GNSS, used as backup.
• Boundary Clock-1 (BC-1), connected to both GMs and configured to prefer GM A as long as it has the GNSS antenna, using the local priorities of the links.
• Boundary Clock-2 (BC-2), connected to BC-1 and to the Calnex Paragon-neo, providing both PTP and SyncE measurements.

Initially, the chain was locked with both PTP and SyncE to GM A. The first measurement phase ran for 1000 seconds, to be able to calculate the Constant Time Error. Then the GNSS connection of GM A was disconnected, causing the BCs to choose GM B as source. The next step started with disconnecting GM B from the GNSS, causing the BCs to re-lock on GM A, as both GMs had no GNSS and the local priorities on the BCs were set to do so.

Then we reconnected GM B, then GM A, while measuring the 1PPS and two-way time error from the output of the Boundary Clock.

This test aimed to keep the time error at the output of the Boundary Clocks within G.8271 accuracy level 6. All the following combinations passed the test successfully:
Microchip TimeProvider 4100 Cisco NCS Ciena 5166 Microchip TimeProvider 4100
Chain/Ring of Class D Boundary Clocks

This test measured the accumulated time error of a chain of Class D Boundary Clocks in a ring topology, resembling a typical real-world service provider implementation. All participating devices had passed the G.8273.2 Class D Boundary Clock conformance test, indicating performance met the maximum absolute time error, low-pass filtered, max|TEL|, limit of 5 ns.

The test topology consisted of:
• Grandmaster A (GM A) locked to GNSS, generating clock Class 6 and ESMC QL-PRC. Used as the primary reference for the topology.
• Grandmaster B (GM B) locked to GNSS, generating clock Class 6 and ESMC QL-PRC. Used as backup reference.
• Boundary Clocks (BCs): seven BCs formed a ring; all BCs were locked to GM A using their local priority configuration.
• Calnex Paragon-neo emulated a Slave Clock and measured the PTP time error.
• Calnex Sentry measured the 1PPS absolute time error.

Initially, all BCs were synchronized to PTP and SyncE from GM A, and a baseline measurement performed. Subsequently, GNSS was disconnected from GM A, causing all BCs to switch their PTP and SyncE reference to GM B. GNSS was then reconnected to GM A and a measurement performed while all BCs re-established synchronization with it and stabilized.

All BCs were configured with a one-minute "Wait to Restore" (WtR) period, meaning each would wait one minute before reacting to a change in input quality, i.e. before switching reference on receipt of clock Class 6 when GM A was reconnected to GNSS.

The following devices passed the test successfully:
GM-A: Microchip TimeProvider 4100
GM-B: Huawei NE8000 M4
BCs: Cisco NCS 540X-16Z4G8Q2C, Juniper ACX7100-32C, Intel E810-XXVDA4T, Ericsson 6673, Ciena 5166, Juniper ACX7100-48L, Microchip TimeProvider 4100
Phase/Time Holdover with Enhanced Sync-E Support

Enhanced Synchronous Ethernet (eSyncE) provides physical layer support to PTP-aware devices in full timing support networks, enhancing performance to enable the stringent synchronization requirements of modern telecommunication networks.

This test verified the ability of a chain of boundary clocks, configured to use eSyncE, to maintain acceptable values of time error during loss of the Grandmaster GNSS reference. eSyncE ESMC messages were also captured and analyzed to verify that the eSyncE TLV was being processed as required by each device in the chain.

The following devices passed the test successfully:
Grandmaster: Microchip TimeProvider 4100
Boundary Clocks: Cisco NCS 540X, Juniper ACX7100-32C, Intel E810-XXVDA4T, Huawei NE8000 M4, Ericsson 6673, Ciena 5166

O-RAN Fronthaul Network Time Synchronization

The Open Radio Access Network, commonly referred to as O-RAN, has become one of the most significant trends in the telecommunications industry. With its immense potential, professionals across the networking world are eagerly exploring, testing, and implementing O-RAN solutions.

One of the most crucial components of an O-RAN architecture is its fronthaul network, which plays a vital role in the overall system. Ensuring accurate and reliable time synchronization in this area is of paramount importance. This has prompted us to investigate and conduct various scenarios of time synchronization within the fronthaul network to verify its performance and reliability.

For this combination, we tried to emulate the O-RAN Fronthaul LLS-C2 (Option A) scenario, with one difference: the absence of the Distributed Unit. We connected a Grandmaster, a Boundary Clock, and two different timing paths starting from the Boundary Clock. Each timing path had one Hub-Site Router (HSR) and one Cell-Site Router (CSR). Both CSR routers were connected to the Calnex Paragon-neo analyzer in order to measure the relative PTP time error and the 1PPS absolute error. The test passed all the measurement requirements stated by the O-RAN Alliance in the document O-RAN.WG9.XTRP-TST-v02.00 for FR2.

It is important to state that the O-DU would have increased the time error budget, but given the results achieved in this test, the O-DU time error budget would not affect the test results.

The devices that participated in the test are shown in Table 33. We performed this test to emulate the LLS-C3 scenario, as per the O-RAN.WG9.XTRP-SYN-v03.00 document, where the GM is in the Midhaul. The topology consisted of:
• Grandmaster (GM): placed in the Midhaul and connected to the Hub-Site Router.
• Hub-Site Router (HSR)
• Cell-Site Router (CSR)
• Emulated Open RAN Central Unit (O-CU): connected to HSR
• Emulated Open RAN Distributed Unit (O-DU): connected to CSR
• Emulated Open RAN Radio Unit (O-RU): connected to CSR
The last setup we tested for O-RAN Fronthaul time synchronization was the LLS-C3 configuration with the GM in the Fronthaul:
• Grandmaster (GM): placed in the Fronthaul and connected to the Hub-Site Router.
• Hub-Site Router (HSR)
• Cell-Site Router (CSR)
• O-CU: connected to HSR
• O-DU: connected to HSR
• O-RU: connected to CSR

The time error measurements were done on the output of the CSR node. Keysight IxNetwork was used to simulate Midhaul traffic between O-CU and O-DU and O-RAN Fronthaul eCPRI traffic between O-DU and O-RU. O-DU and O-RU were configured to simulate O-RAN WG4 CU-plane eCPRI traffic for an FDD use case with 100 MHz carrier bandwidth in both downlink and uplink directions, 30 kHz sub-carrier spacing, and BFP9 IQ compression. Along with these streams, a background traffic stream was also sent between CSR and HSR to emulate regular traffic in the network.

Class of service was configured on the HSR as well as on the CSR, and the following traffic pattern was applied in the test network:
• eCPRI (O-RU – O-DU) traffic: Low Latency queue, 1.7 Gbps traffic load.
• PTP packets: Network Control queue by default.
• O-DU – O-CU traffic: medium-priority queue, 1.2 Gbps traffic load.
• Bidirectional background traffic between HSR and CSR: Best Effort queue with lowest priority, 9 Gbps traffic load.

All the time error measurements passed the testing requirements from the O-RAN WG9 documents. No loss was reported in O-RAN Fronthaul traffic, and latency variations were less than a few nanoseconds even with the heavy presence of background traffic. The relative time error was not measured, as only one timing path was tested in the topology. For both LLS-C3 setups, the following devices participated successfully.
GM: Microchip TimeProvider 4100
BC 1: Cisco NCS 540X-16Z4G8Q2C
BC 2: Juniper ACX7100-32C
BC 3: Intel E810-XXVDA4T
BC 4: Ericsson 6673
BC 5: Microchip TimeProvider 4100
BC 6: Huawei NetEngine 8000 M4
BC 7: Ciena 5166
Calculating Time Error Limits for Boundary Clocks

In real-life implementations, time synchronization chains usually consist of multiple Boundary Clocks, which makes it crucial to measure the time error limits at the end of the chain. Appendix V of G.8273.2, "Performance estimation for chains of Boundary Clocks", specifies details for calculating limits for chains of Boundary Clocks.

We performed this test by creating a chain of multiple boundary clocks connected to a grandmaster. We measured the constant time error, dynamic time error (low-pass filtered), dynamic time error (high-pass filtered), and maximum absolute time error.

For the Constant Time Error limit, as per the ITU-T recommendation, we used the accumulative value depending on the number of boundary clocks in the chain. For the other values we used the recommended formula √(N x 2), where N is the number of boundary clocks in the chain.

PTP over MACsec

Security of networks is a fundamental and crucial aspect in a modern world where cyber threats form a large chunk of the industry. PTP security has been left a bit behind, but is slowly trying to keep up with the current level of threats.

Encapsulating PTP packets with Layer 2 or Layer 3 encryption is one way of protecting the time synchronization network from being compromised, but due to the special nature of PTP and hardware timestamping, this is very difficult to implement. These complications have so far prevented the industry from having a standard interoperable solution between multiple vendors.

This did not stop us from testing PTP over MACsec between a boundary clock and a slave clock from the same vendor (in this test, Juniper), using the G.8275.1 PTP hybrid profile (SyncE and PTP) and comparing the generated absolute 1PPS time error with MACsec enabled and disabled. We used a Microchip TimeProvider 4100 as grandmaster for this setup, and a Calnex Sentry for measurement.

In all test steps, the output of the slave clock preserved an absolute 1PPS time error of less than 5 ns. The following devices passed the test successfully:

GM: Microchip TimeProvider 4100
BC 1: Juniper ACX7100-32C
SC: Juniper ACX7100-32C

Table 35: PTP over MACsec
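The limit calculation described in the "Calculating Time Error Limits for Boundary Clocks" section above can be sketched as follows; the per-node budgets are illustrative assumptions, and the √(N x 2) multiplier is applied as stated in the report (ITU-T G.8273.2 Appendix V should be consulted for the exact formulas):

```python
import math

def chain_limits(n_clocks: int, cte_per_node_ns: float, dte_per_node_ns: float):
    """Estimate end-of-chain limits for a chain of N Boundary Clocks.

    - The constant time error (cTE) limit accumulates linearly with
      the number of clocks in the chain.
    - For the dynamic components, the report's sqrt(N x 2) formula is
      applied here as a multiplier on the per-node value (assumption).
    """
    cte_limit = n_clocks * cte_per_node_ns
    dte_limit = math.sqrt(n_clocks * 2) * dte_per_node_ns
    return cte_limit, dte_limit

# Example: a seven-clock chain, as in the ring test above, with an
# assumed 5 ns per-node budget (illustrative, not measured values).
cte, dte = chain_limits(7, cte_per_node_ns=5.0, dte_per_node_ns=5.0)
```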
Conclusion
In conclusion, advancements in networking technolo-
gies such as EVPN, SRv6, SR-MPLS, BGP Classful
Transport Planes, OSPF segment routing, SDN, and
time synchronization have significantly improved the
efficiency, flexibility, and scalability of modern net-
works. The EANTC MPLS SDN Interoperability Test
event showcased the successful implementation and
interoperability of these technologies in multi-vendor
environments, covering various services and use cases.
The tests focused on addressing the increasing de-
mands of data centers, 5G networks, and multi-domain
service provider environments. Notably, this year's
event featured the first implementation of uSID in SRv6,
Flex-Algo and FAPM in OSPF segment routing, and the
use of virtualized devices for time synchronization. By
continually pushing the boundaries of networking tech-
nology, these innovations promise to support the grow-
ing needs of our increasingly interconnected world.
This report is copyright © 2023 EANTC AG
While every reasonable effort has been made to ensure accuracy and completeness of this publication, the authors assume no
responsibility for the use of any information contained herein. All brand names and logos mentioned here are registered
trademarks of their respective companies.