RTN 380A&380AX V100R009C10 Feature Description 02 PDF
System
V100R009C10
Feature Description
Issue 02
Date 2019-01-30
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: [email protected]
Related Versions
The following table lists the product versions related to this document.
Intended Audience
This document describes the main features of the OptiX RTN 380A/380AX microwave
transmission system. It provides readers with comprehensive knowledge of the functionality,
principles, configuration, and maintenance of the product features.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
General Conventions
The general conventions that may be found in this document are defined as follows.
Convention Description
Command Conventions
The command conventions that may be found in this document are defined as follows.
Convention Description
GUI Conventions
The GUI conventions that may be found in this document are defined as follows.
Convention Description
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes made in earlier issues.
Change Description
2.1.4 Feature Dependencies and Limitations: Added the restriction that 1+1 HSB cannot coexist with LPT during cascading.
2.3.5 Feature Dependencies and Limitations: Added the restriction that PLA cannot coexist with LPT during cascading of the optical splitter mode and LAG mode.
Contents
2 Microwave Features
2.1 1+1 HSB
2.1.1 Introduction
2.1.2 Specifications
2.1.3 Feature Updates
2.1.4 Feature Dependencies and Limitations
2.1.5 Planning Guidelines
2.2 Cross Polarization Interference Cancellation
2.2.1 Introduction
2.2.2 Specifications
2.2.3 Feature Updates
2.2.4 Feature Dependencies and Limitations
2.2.5 Planning Guidelines
2.3 PLA
2.3.1 Introduction
2.3.2 Principles
2.3.3 Specifications
2.3.4 Feature Updates
2.3.5 Feature Dependencies and Limitations
2.3.6 Planning Guidelines
2.4 Automatic Transmit Power Control
2.4.1 Introduction
2.4.2 Specifications
2.4.3 Feature Updates
2.4.4 Feature Dependencies and Limitations
2.4.5 Planning Guidelines
2.5 AMAC
2.5.1 Introduction
2.5.2 Specifications
2.5.3 Feature Updates
2.5.4 Feature Dependencies and Limitations
2.5.5 Planning Guidelines
3 Ethernet Features
3.1 QinQ
3.1.1 Introduction
3.1.2 Reference Standards and Protocols
3.1.3 Specifications
3.1.4 Feature Updates
3.1.5 Feature Dependencies and Limitations
3.1.6 Planning Guidelines
3.2 Ethernet Ring Protection Switching
3.2.1 Introduction
3.6.3 Specifications
3.6.4 Feature Updates
3.6.5 Feature Dependencies and Limitations
3.6.6 Planning Guidelines
3.7 Bandwidth Notification
3.7.1 Introduction
3.7.2 Principles
3.7.3 Reference Standards and Protocols
3.7.4 Specifications
3.7.5 Feature Updates
3.7.6 Feature Dependencies and Limitations
3.7.7 Planning Guidelines
4.2.2.7 DM
4.2.2.8 CSF
4.2.2.9 LCK
4.2.2.10 TST
4.2.2.11 Smooth Upgrade from MPLS OAM to MPLS-TP OAM
4.2.3 Reference Standards and Protocols
4.2.4 Specifications
4.2.5 Feature Updates
4.2.6 Feature Dependencies and Limitations
4.2.7 Planning Guidelines
4.2.8 FAQs
4.3 MPLS APS
4.3.1 Introduction
4.3.1.1 Introduction to MPLS APS
4.3.1.2 Protection Type
4.3.1.3 Switching Conditions
4.3.1.4 Switching Impact
4.3.2 Principles
4.3.2.1 Single-Ended Switching
4.3.2.2 Dual-Ended Switching
4.3.3 Reference Standards and Protocols
4.3.4 Specifications
4.3.5 Feature Updates
4.3.6 Feature Dependencies and Limitations
4.3.7 Planning Guidelines
4.3.8 FAQs
This part describes data communication networks (DCNs) and various DCN solutions
supported by OptiX RTN 380A/380AX.
DCN Composition
The DCN contains two types of node: NMS and NE. The DCN between the NMS and NEs is
called external DCN. The DCN among NEs is called internal DCN. The external DCN
consists of data communication devices, such as Ethernet switches and routers. The internal
DCN consists of NEs that are connected using DCN channels. Unless otherwise specified, the
DCN mentioned in this document refers to internal DCN.
DCN Channel
DCN channels fall into two types: outband DCN channel and inband DCN channel.
l Outband DCN channels do not occupy any service bandwidth. The RTN 300 supports
two types of outband DCN channel:
– D1 to D3 bytes in microwave frames
– Channels over NMS ports
l Inband DCN channels occupy some service bandwidth. The RTN 300 supports two types
of inband DCN channel:
– Some Ethernet service bandwidth of microwave links
– Some Ethernet service bandwidth of Ethernet links
DCN Solutions
The RTN 300 provides the following DCN solutions:
l IP DCN solution
In the IP DCN solution, network management messages are encapsulated into IP packets.
NEs forward the IP packets based on the IP addresses contained in them. This solution
supports a maximum of 200 NEs and ensures high network stability. This solution is the
default and preferred solution.
l L2 DCN solution
In the L2 DCN solution, network management messages are encapsulated into IP
packets, which are carried by Ethernet frames. NEs forward the Ethernet frames based
on the MAC addresses contained in them. This solution supports a maximum of 120
NEs. However, this solution has the risk of broadcast packet flooding and provides poor
network stability.
The RTN 300 also supports the HWECC solution, which is being phased out.
Non-gateway NE: The application layer of the NMS communicates with the application
layer of a non-gateway NE through the application layer of a gateway NE. The NEs between
the gateway NE and non-gateway NE forward DCN packets at L2 or L3.
DCN Flags
An NE on the DCN must be configured with two DCN flags: NE ID and NE IP address.
1.2.1 Introduction
This section describes the basic knowledge about IP DCN.
l Layer 1 of the protocol stack is the physical layer, which provides data transmission
channels for data terminal equipment. The RTN 300 provides the following DCN
channels:
– NMS port: all the bandwidth at the NMS port
– DCC channel: three Huawei-defined DCC bytes in a microwave frame at a
microwave port
– Inband DCN: a portion of Ethernet service bandwidth at an Ethernet or a
microwave port
l Layer 2 is the data link layer, which provides reliable data transmission over physical
links. DCCs and inband DCN channels use the PPP protocol to set up data links. Therefore,
IP addresses of adjacent NEs do not need to be in the same IP network segment.
l Layer 3 is the network layer, which specifies the network layer address for a network
entity and provides forwarding and addressing functions. NEs implement network layer
functions using the IP protocol. The routes used for IP forwarding can be direct routes
discovered by link layer protocols, manually configured static routes, or dynamic routes
generated by the OSPF protocol. The RTN 300 provides various OSPF features; for
details, see 1.2.3 Specifications.
l Layer 4 is the transport layer, which provides end-to-end communication services for the
upper layer. NEs support the TCP/UDP protocol.
and the communication protocol stack of the transit NE. Because the transit NE uses the
IP protocol stack, the gateway NE transfers the packets to the transit NE through the IP
protocol stack.
4. The network layer of the transit NE queries the destination IP address of the packets. If
the address is not the transit NE's address, the transit NE queries the IP routing table to
obtain the route to the destination NE and then transfers the packets.
5. The network layer of the destination NE passes the packets to its application layer
through the transport layer only if the destination IP address of the packets is the same as
the IP address of the destination NE. The application layer then processes the packets.
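The transit-NE decision in steps 4 and 5 can be sketched as a simple lookup. This is an illustration only; the routing-table structure, the next-hop name, and the helper function are hypothetical, not the product's internal implementation:

```python
import ipaddress

# Hypothetical IP routing table of a transit NE: destination network -> next hop.
ROUTING_TABLE = {
    ipaddress.ip_network("129.9.0.0/16"): "next-hop NE",
}

def handle_packet(dest_ip: str, my_ip: str) -> str:
    """Sketch of steps 4-5: deliver locally, forward by route, or drop."""
    if dest_ip == my_ip:
        # Step 5: destination reached; pass up through the transport layer.
        return "deliver to application layer"
    dest = ipaddress.ip_address(dest_ip)
    for network, next_hop in ROUTING_TABLE.items():
        if dest in network:
            # Step 4: not our address; forward along the matched route.
            return f"forward to {next_hop}"
    return "drop (no route)"
```

A transit NE with address 129.9.0.2 would forward a packet destined for 129.9.0.4 and deliver a packet destined for itself.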
When a third-party L2 network sits between two networks composed of RTN 300s,
NMS messages are encapsulated as L2 services for transmission. In this example, access
control is enabled on the Ethernet ports that connect the two networks to the
third-party L2 network, and the IP addresses of these ports are in the same network segment.
The third-party L2 network creates a dedicated L2VPN service for the DCN packets carrying
a specific inband DCN VLAN ID.
1.2.3 Specifications
This section provides the IP data communication network (DCN) specifications that RTN
380A/380AX supports.
Item Specifications
Maximum number of areas supported by an ABR: 4
Maximum number of areas on an entire network: 30
Maximum number of networks in an area: 4
NOTE
If the DCN is too large or contains more than the maximum number of NEs, the NEs fail to process all
packets and the DCN becomes unstable.
If the DCN is overloaded, the following faults can occur:
l Some NEs undergo warm resets or become unreachable to the NMS when the network experiences
link flapping or NE resets.
l DCN channel bandwidth is exhausted and NE management performance deteriorates when the
network carries a large volume of traffic (generated by activities such as software loading or
frequent data queries).
Self-limitations
Ethernet ports interconnecting with the DCN through an L2 network: If an Ethernet port interconnects with the DCN through an L2 network, access control must be enabled for the Ethernet port. In addition, the IP addresses of the interconnected ports on both sides of the intermediate L2 network must be in the same network segment. If no DCN packet is transmitted through the Ethernet port, disable inband DCN channels and access control for the port.
Ethernet electrical service ports configured as NMS ports: When an Ethernet electrical service port is configured as an NMS port, no Ethernet service or protocol can be configured at this port, and the access control and QoS functions configured at this port become ineffective.
Table 1-3 Dependencies and Limitations Between IP DCN and Other Features
Feature Description
NOTE
In the planning guidelines, OptiX equipment refers to Huawei OptiX transmission equipment that
supports IP DCN.
l The IP addresses of the NEs connected through network management system (NMS)
ports should be on the same network segment.
l When a network uses multiple Open Shortest Path First (OSPF) areas, plan the NE IP
addresses as follows:
– Plan the NE IP address of an area border router (ABR) by considering the ABR as a
backbone NE.
– Ensure that the IP addresses of NEs in different areas (including backbone and non-
backbone areas) are on different network segments.
– If possible, ensure that the IP addresses of NEs in the same area are on the same
network segment. If special NE IP addresses are required, the IP addresses of NEs
in the same area can belong to different network segments.
– Ensure that the packet timer and router ID use their default values.
– On the NE connected to the external DCN, configure a static route to the NMS and
enable static route flooding.
– If none of the networks configured for all areas overlap, enable automatic route
aggregation to decrease the number of routing table entries. Alternatively, manually
aggregate some network segments that can be aggregated.
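The effect of aggregating adjacent network segments can be checked with Python's standard ipaddress module. The two segments below are examples for illustration, not a planning recommendation:

```python
import ipaddress

# Two adjacent segments that can be aggregated into a single route entry.
segments = [
    ipaddress.ip_network("129.9.0.0/24"),
    ipaddress.ip_network("129.9.1.0/24"),
]

# collapse_addresses merges adjacent/overlapping networks where possible,
# which mirrors what manual route aggregation achieves in the routing table.
aggregated = list(ipaddress.collapse_addresses(segments))
print(aggregated)  # one supernet covering both segments
```

Aggregating the two /24 segments into one /23 halves the number of routing table entries they require.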
Guidelines for planning NE IP addresses and routes in typical network topologies are
described in the following section.
Network Comprising Only OptiX NEs, with the IP Addresses of the NMS and
Gateway NE on the Same Network Segment
Figure 1-1 illustrates a network comprising only OptiX NEs. On the network, the IP
addresses of the network management system (NMS) and gateway NE are on the same
network segment.
Figure 1-1 Diagram for planning NE IP addresses and routes (a network comprising only
OptiX NEs, with the IP addresses of the NMS and gateway NE on the same network segment)
NMS (130.9.0.100) - NE 1 (130.9.0.1) - NE 2 (129.9.0.2) - NE 3 (129.9.0.3) - NE 4 (129.9.0.4)
In Figure 1-1:
l The IP address of the gateway NE (NE 1) belongs to the network segment 130.9.0.0, and
the IP addresses of the non-gateway NEs belong to the segment 129.9.0.0.
Network Comprising Only OptiX NEs, with the IP Addresses of the NMS and
Gateway NE on Different Network Segments
Figure 1-2 illustrates a network comprising only OptiX NEs. On the network, the IP
addresses of the NMS and gateway NE are on different network segments.
Figure 1-2 Diagram for planning NE IP addresses and routes (a network comprising only
OptiX NEs, with the IP addresses of the NMS and gateway NE on different network
segments)
NMS (10.2.0.100) - RT 1 (10.2.0.200) - RT 2 (130.9.0.100) - NE 1 (130.9.0.1) - NE 2 (129.9.0.2) - NE 3 (129.9.0.3) - NE 4 (129.9.0.4)
In Figure 1-2:
l The IP address of the gateway NE (NE 1) belongs to the network segment 130.9.0.0, and
the IP addresses of the non-gateway NEs belong to the segment 129.9.0.0.
l On NE 1, configure a static route to the NMS (10.2.0.100), or set the IP address of RT 2
(130.9.0.100) as the default gateway.
l On the NMS, configure a static route to NE 1 (130.9.0.1), or set the IP address of RT 1
(10.2.0.200) as the default gateway.
l If the NMS requests direct access to a non-gateway NE (NE 2, NE 3, or NE 4), perform
the following configurations in addition to the preceding ones:
– On NE 1, enable Open Shortest Path First (OSPF) route flooding, so that NE 2, NE
3, and NE 4 can obtain routes to the NMS.
– On the NMS, configure a static route to the network segment 129.9.0.0. Skip this
operation if the default gateway has been configured.
– Configure routes from RT 1 and RT 2 to the network segment 129.9.0.0.
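The segment relationships behind this plan can be verified with the standard ipaddress module. This quick check assumes the default /16 mask of these class B addresses and uses the addresses given in the text:

```python
import ipaddress

nms = ipaddress.ip_interface("10.2.0.100/16")    # NMS
ne1 = ipaddress.ip_interface("130.9.0.1/16")     # gateway NE 1
rt1 = ipaddress.ip_interface("10.2.0.200/16")    # RT 1, NMS side

# NMS and NE 1 are on different segments, so static routes (or default
# gateways) are required on both sides.
print(nms.network == ne1.network)   # False

# RT 1's NMS-side address shares the NMS segment, so RT 1 can serve as
# the NMS's default gateway.
print(nms.network == rt1.network)   # True
```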
1.3.1 Introduction
This section describes the basic information about the Layer 2 data communication network
(L2 DCN) solution.
l Layer 1 of the protocol stack is the physical layer, which provides physical channels for
transmitting data between data terminal equipment. RTN 300 provides the following
DCN channels:
– NMS port: transmitting DCN packets using all of its bandwidth
– DCC channel on a microwave port: transmitting DCN packets using the three self-
defined DCC bytes in a microwave frame
– Inband DCN channel on an Ethernet or microwave port: transmitting DCN packets
using part of Ethernet bandwidth
l Layer 2 is the data link layer, which provides reliable data transmission over physical
links. The L2 DCN solution implements the data link layer functions based on
MAC address learning and forwarding.
l Layer 3 is the network layer, which performs addressing and packet forwarding. NEs run
the IP protocol to provide functions of the network layer.
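MAC address learning and forwarding, on which the L2 DCN solution relies, can be sketched as follows. This is a generic illustration of the mechanism, not the product's internal implementation:

```python
# MAC address table: learned MAC address -> port it was seen on.
mac_table = {}

def switch_frame(src_mac, dst_mac, in_port, all_ports):
    """Learn the source MAC, then forward by destination MAC or flood."""
    # Learning: remember which port the source address lives behind.
    mac_table[src_mac] = in_port
    out = mac_table.get(dst_mac)
    if out is not None and out != in_port:
        return [out]                                   # known unicast
    return [p for p in all_ports if p != in_port]      # unknown: flood
```

Flooding of frames with unknown destinations is also why an oversized L2 DCN risks broadcast packet storms, as noted in the specifications below.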
When the NMS and the NE are in different network segments, DCN packets are
forwarded as follows:
Figure 1-3 Using the LAG consisting of service ports to traverse third-party devices for
transmitting DCN information between an NE and the NMS
Figure 1-4 Using the LAG consisting of service ports to traverse third-party devices for
transmitting DCN information between NEs
In L2 DCN scenarios, the OptiX RTN 380A/380AX cannot interwork with third-party devices
over RSTP. Therefore, set the DCN protocol type for service ports in a LAG group to SL2
DCN. After this setting, only one port in the LAG group is used to send inband DCN packets.
NOTE
l If a LAG is not mandatory, use an Ethernet NMS port to traverse third-party devices.
l You are advised to use L2 DCN when RTN NEs interwork with each other using service ports and use
SL2 DCN when an RTN NE interworks with third-party NEs using service ports.
1.3.3 Specifications
This section provides the L2 DCN specifications that OptiX RTN 380A/380AX supports.
Table 1-4 Specifications of the L2 DCN solution that OptiX RTN 380A/380AX supports
Item Specifications
Outband DCN channel type: Microwave port: 3-byte DCC channel (D1-D3)
Maximum frame length supported in L2 DCN forwarding: 1522 bytes (maximum valid payload: 1500 bytes)
Type of entries in a MAC address table: Dynamic entries are supported; static entries are not supported.
NOTE
If the DCN is too large or contains more than the maximum number of NEs, the NEs fail to process all
packets and the DCN becomes unstable.
If the DCN is overloaded, the following faults can occur:
l Some NEs undergo warm resets or become unreachable to the NMS when the network experiences
link flapping or NE resets.
l DCN channel bandwidth is exhausted and NE management performance deteriorates when the
network carries a large volume of traffic (generated by activities such as software loading or
frequent data queries).
Self-limitations
Software loading to an L2 DCN network: NEs on an L2 DCN network can be loaded with software only one by one, not in diffusion mode.
Ethernet electrical service ports configured as NMS ports: When an Ethernet electrical service port is configured as an NMS port, no Ethernet service or protocol can be configured at this port, and the access control and QoS functions configured at this port become ineffective.
Table 1-6 Dependencies and Limitations Between L2 DCN and Other Features
Feature Description
You are advised to use L2 DCN when RTN NEs interwork with each other using service
ports. If an RTN NE interworks with third-party NEs using service ports, SL2 DCN is
recommended; also ensure that no loops exist on the physical network.
1.4 LLDP
The OptiX RTN 380A/380AX and user equipment run the Link Layer Discovery Protocol
(LLDP) to quickly diagnose service faults.
1.4.1 Introduction
This section introduces the Link Layer Discovery Protocol (LLDP).
Definition
LLDP is a link layer communication protocol defined in IEEE 802.1AB. LLDP allows a piece
of equipment attached to an Ethernet to advertise information to its adjacent equipment
attached to the same Ethernet. This information includes its major capabilities, management
address, equipment ID, and port IDs. The recipients store the information in standard
management information bases (MIBs), accessible by a network management system (NMS).
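An LLDPDU defined in IEEE 802.1AB is a sequence of TLVs, each with a 16-bit header (7-bit type, 9-bit length) followed by the value. A minimal encoder for the three mandatory TLVs might look like this; the chassis MAC and port name below are made-up example values:

```python
import struct

def tlv(tlv_type: int, value: bytes) -> bytes:
    """Build one LLDP TLV: 7-bit type and 9-bit length, then the value."""
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

# Mandatory TLVs: Chassis ID (type 1), Port ID (type 2), Time To Live
# (type 3), terminated by the End Of LLDPDU TLV (type 0, zero length).
chassis = tlv(1, b"\x04" + bytes.fromhex("001122334455"))  # subtype 4: MAC address
port = tlv(2, b"\x05" + b"GE0/1")                          # subtype 5: interface name
ttl = tlv(3, struct.pack("!H", 120))                       # TTL in seconds
lldpdu = chassis + port + ttl + tlv(0, b"")
```

A shutdown announcement, as described later for this product, is simply the same LLDPDU with the TTL value set to 0.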
Application (1)
When the RTN 300 is connected to a Huawei base station directly or through a PI, LLDP is
enabled on both of them so they can obtain equipment information about each other. The
U2000-M that manages base stations can display the connections between base stations and
RTN equipment in a topology diagram based on the RTN equipment information transferred
through LLDP. When a service fault occurs, the U2000-M can check the interconnection
parameters of the equipment at both ends, which facilitates quick fault locating. The U2000-T
that manages the RTN 300 can also implement similar functions.
Application (2)
When the RTN 300 is connected to a Huawei ATN/CX directly or through a PI, LLDP is
enabled on both of them so they can obtain equipment information about each other. The
U2000-T that manages equipment can establish port connection relationships between
equipment based on the equipment information transferred through LLDP and then create
fibers/cables between equipment in a topology diagram.
1.4.3 Specifications
This section lists the LLDP specifications that OptiX RTN 380A/380AX supports.
Item Specifications
Time To Live (mandatory): Time to live (TTL) that tells the recipient how long all information pertaining to this LLDPDU is valid.
NOTE
When a port switches to a mode where LLDP packets cannot be transmitted, it sends LLDP packets (shutdown packets) with a TTL value of 0 to notify the recipient that any information pertaining to this LLDPDU is invalid.
IEEE 802.1 Organizationally Specific TLVs (optional):
l 01: port VLAN ID - Default port VLAN ID. Huawei NodeBs only receive and parse this parameter.
l 02: port and protocol VLAN ID - Huawei OptiX RTNs only receive and parse this parameter. Huawei NodeBs only receive and parse this parameter.
l 06: management VID - OMCH VLAN ID. Huawei OptiX RTNs only receive and parse this parameter.
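The TLV layout defined in IEEE 802.1AB (a 7-bit type and a 9-bit length packed into a 2-byte header, followed by the value) can be sketched in Python. The field values below (MAC address, port name, TTL) are illustrative only, not taken from the product:

```python
def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Pack one LLDP TLV: 7-bit type, 9-bit length, then the value."""
    header = (tlv_type << 9) | len(value)
    return header.to_bytes(2, "big") + value

# A minimal LLDPDU: Chassis ID, Port ID, Time To Live, End of LLDPDU.
chassis_id = encode_tlv(1, b"\x04" + bytes.fromhex("0025c5000001"))  # subtype 4: MAC address
port_id = encode_tlv(2, b"\x05" + b"GE1")                            # subtype 5: interface name
ttl = encode_tlv(3, (120).to_bytes(2, "big"))                        # TTL in seconds
end = encode_tlv(0, b"")                                             # End of LLDPDU TLV
lldpdu = chassis_id + port_id + ttl + end
```

A shutdown LLDPDU, as described in the NOTE above, would simply carry a TTL value of 0 in the third TLV.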
Self-limitations
LLDP networking Because LLDP packets are sent to a multicast address whose scope is limited to the nearest bridge, the product and user equipment must be directly connected or connected through a power injector (PI).
NOTE
If the product and user equipment are connected through an Ethernet switch or other data communication equipment, LLDP packets cannot be exchanged between base stations and the product.
Table 1-10 Dependencies and limitations between LLDP and other features
Feature Description
1.5 Anti-Theft
Anti-theft allows a user to create a pair of public and private keys for locking and unlocking a
device. It effectively prevents unauthorized users from using devices.
1.5.1 Introduction
This section describes the basics of the anti-theft function.
Anti-theft allows a user to create a pair of public and private keys using the U2000 for locking and unlocking a device. The public key is loaded onto the device to enable the anti-theft function, and the private key is used to generate an authentication file for unlocking the device. In this way, the device is locked and can be used only after it is unlocked using the matched private key. Anti-theft effectively prevents unauthorized users from using devices.
Authentication Modes
Authentication enabling and device unlocking are available in online mode and offline mode.
Online mode
When DCN communication between a device and the U2000 is normal, a user can use the U2000 to load a public key onto the online device to enable the anti-theft function and load an authentication file to unlock the device, as shown in Figure 1-6. After being enabled, anti-theft takes effect permanently until a correct authentication file is loaded. If an unlocked device resets, anti-theft automatically restarts.
Offline mode
When DCN communication between a device and the U2000 is abnormal, a user can use the Web LCT to load a public key onto the offline device to enable the anti-theft function and load an authentication file to unlock the device, as shown in Figure 1-7. After being enabled, anti-theft takes effect permanently until a correct authentication file is loaded. After being unlocked in offline mode, a device stays unlocked within the authorized period. After the authorized period expires, anti-theft automatically restarts.
Authentication Mechanism
Anti-theft uses the configuration control policy and service control policy to limit the use of
devices.
Configuration control policy
After a device that has anti-theft function enabled resets, anti-theft takes effect and
configuration data on the device cannot be modified.
Service control policy
In scenarios where a device has anti-theft function enabled:
l After the device resets, it can be used within a customized period (seven days by default). When the period expires, anti-theft takes effect. After that, the air-interface service bandwidth is only 10 Mbit/s, services are interrupted, and only DCN communication is normal.
l After the device becomes unreachable, it can be used within a customized period (seven days by default). When the period expires, anti-theft takes effect. After that, the air-interface service bandwidth is only 10 Mbit/s and services are interrupted. When DCN communication returns to normal, services are automatically restored.
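The lock/unlock exchange described above follows a standard public-key authentication pattern: the device holds the public key and accepts only an authentication file produced with the matching private key. The sketch below illustrates that pattern with textbook RSA and deliberately tiny numbers; the product's actual key format and algorithm are not documented here, so every value is illustrative only:

```python
# Textbook RSA with toy parameters (p=61, q=53) -- illustration only,
# NOT the device's real algorithm or key size.
n, e, d = 3233, 17, 2753  # modulus, public exponent, private exponent

def sign(digest: int) -> int:
    """U2000 side: produce the authentication value with the private key."""
    return pow(digest, d, n)

def device_accepts(digest: int, auth_value: int) -> bool:
    """Device side: verify the authentication file with the loaded public key."""
    return pow(auth_value, e, n) == digest

auth = sign(123)
assert device_accepts(123, auth)          # matching private key unlocks
assert not device_accepts(123, auth + 1)  # a tampered file is rejected
```

The key point the sketch captures is asymmetry: loading the public key onto the device reveals nothing that would let an unauthorized user generate a valid authentication file.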
1.5.3 Specifications
This section lists the anti-theft specifications that the OptiX RTN 380A/380AX supports.
Table 1-11 Anti-theft specifications that the OptiX RTN 380A/380AX supports
Item Specifications
Self-limitations
None
2 Microwave Features
This part describes the microwave features supported by OptiX RTN 380A/380AX.
2.1.1 Introduction
1+1 HSB protection improves reliability of microwave links.
Definition
1+1 HSB is a 1+1 protection mode. In 1+1 HSB mode, devices form a 1+1 hot standby
configuration for protection.
A microwave system in 1+1 HSB configuration consists of the following parts: a main channel, a standby channel, and a service access unit that transmits services to or receives services from the main or standby channel. In normal cases, at the transmit end, the standby channel is muted and services received by the service access unit are sent to the receive end through the main channel; at the receive end, both the main and standby channels receive services, but the service access unit receives services only from the main channel. If the main channel at one end (such as the main channel on the right) fails, the main channel is muted and the standby channel is unmuted. The services are then sent through the standby channel, and the service access unit receives services from the standby channel.
When two RTN 300s are configured in a 1+1 HSB system for one end, they serve as main and
standby channels and other cooperating devices serve as service access units. To negotiate the
main/standby status of microwave channels, two RTN 300s must communicate through
cascade ports.
System Configuration
Depending on the type of device used as the service access unit, a 1+1 HSB system containing RTN 300s can work in optical splitter mode or LAG mode.
In the example, one optical splitter, two RTN 300s, and one single-polarized antenna
with a hybrid coupler form a 1+1 HSB system.
The optical splitter duplicates transmitted signals but can receive only one channel of optical signals at a time. To ensure that services are normally transmitted and received, the access port on the current standby RTN 300 is always disabled.
– When the main RTN 300 is the active one, its access port is enabled but the access
port on the standby RTN 300 is disabled.
– When the standby RTN 300 changes to the active one, its access port is enabled but
the access port on the main RTN 300 is disabled.
In the example, protection is provided for one channel of GE optical services received.
Protection can be provided for multiple channels of GE optical services by increasing the
number of access links. To provide protection for multiple channels of GE optical
services, an optical splitter must be configured for each channel of services.
l LAG mode: IDUs or customer devices supporting LACP serve as service access units.
In the example, one IDU serving as the service access unit, two RTN 300s, and one
single-polarized antenna with a hybrid coupler form a 1+1 HSB system.
The IDU forms LAGs with the main and standby RTN 300s, so that the IDU always
transmits services to or receives services from the current active RTN 300.
– When the main RTN 300 is the active one, the system priority of the LAG
configured on it changes to the highest. Therefore, the LAG set up between the IDU
and the main RTN 300 works.
– When the standby RTN 300 changes to the active one, the system priority of the LAG configured on it changes to the highest. Therefore, the LAG set up between the IDU and the standby RTN 300 works.
The LAG for RTN 300s in a 1+1 HSB system is called an E-LAG; for details, see 3.3.2 E-LAG.
In the example, protection is provided for one channel of GE optical/electrical services received. Protection can be provided for multiple channels of GE optical/electrical services by increasing the number of access links. To provide protection for multiple channels of GE optical/electrical services, a LAG must be set up for each channel of services, and LAG status switching must be performed simultaneously depending on the main/standby status of RTN 300s.
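The selection behaviour described above (active/standby selection driven by channel faults, with the access-side port or LAG priority following the active unit) can be sketched as a small state machine. The class and flag names below are illustrative, not product parameters:

```python
class Hsb1Plus1:
    """Toy 1+1 HSB selector: tracks which channel is active and which
    access port is enabled, switching when the active channel fails."""

    def __init__(self):
        self.active = "main"  # the main channel is active initially

    def report_fault(self, channel: str, faulty: bool) -> None:
        # Switch only when the currently active channel fails;
        # recovery of the other channel does not switch back.
        if faulty and channel == self.active:
            self.active = "standby" if self.active == "main" else "main"

    def access_port_enabled(self, channel: str) -> bool:
        # In optical splitter mode, only the active unit's access port is enabled.
        return channel == self.active

hsb = Hsb1Plus1()
hsb.report_fault("main", True)    # main channel fails -> switch to standby
assert hsb.active == "standby"
hsb.report_fault("main", False)   # main recovers: no automatic switch back
assert hsb.active == "standby"
assert not hsb.access_port_enabled("main")
```

In LAG mode the same decision would instead raise the LAG system priority on the newly active unit, as described above.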
2.1.2 Specifications
This section provides the 1+1 hot standby (HSB) specifications that OptiX RTN 380A/380AX
supports.
Table 2-1 1+1 HSB specifications that OptiX RTN 380A/380AX supports
Item Specifications
Self-limitations
Requirements for an E-LAG, cascade port, and service port in scenarios where LAG mode is used:
l An Ethernet port that participates in 1+1 protection must be configured in a static, non-load sharing, non-revertive LAG that contains only the Ethernet port.
l An E-LAG configured on the OptiX RTN 380A/380AX must contain member ports of the same type, optical or electrical. Working Mode for all member ports must be set to Auto-Negotiation.
l The main and standby OptiX RTN 380A/380AXs are connected using 1+1 cascade ports. In this case, Ethernet services cannot be configured on the 1+1 cascade ports. On the main/standby OptiX RTN 380A/380AX, only the Ethernet port that participates in 1+1 protection, specifically the GE optical port or P&E electrical port, can be configured with services. If the other Ethernet port is configured with Ethernet services, 1+1 protection configuration will fail.
l A LAG configured on an OptiX RTN 900 IDU or LACP-supporting UNI-side device interconnected with OptiX RTN 380A/380AX must contain member ports of the same type, optical or electrical. Working Mode for all member ports must be set to Auto-Negotiation. The system priority of the LAG must be greater than 1000.
Requirements for cascade ports and service access in scenarios where optical splitter mode is used:
l Optical ports (including GE or 10GE optical ports but excluding cascading or other optical ports) can be configured with services only when participating in 1+1 protection.
l Service ports configured with 1+1 protection in optical splitter mode can work only in auto-negotiation mode.
Table 2-3 Dependencies and Limitations Between 1+1 HSB and Other Features
Feature Description
NOTE
This feature is supported only by the RTN 380AX.
2.2.1 Introduction
This section defines cross polarization interference cancellation (XPIC) and describes its
purpose.
Channel Configuration
Microwave transmissions are classified as single-polarized or co-channel dual-polarization (CCDP) transmissions, based on the polarization mode.
l In CCDP transmission, two channels of signals of the same frequency are transmitted
over the horizontally polarized wave and the vertically polarized wave on a channel.
If conditions were perfect, there would be no interference between the two channels of
signals, and the receiver could easily recover the original signals. In reality, however,
there is always interference caused by antenna cross-polarization discrimination (XPD)
and channel deterioration.
In short-haul transmission, there is low cross polarization interference, and the receiver
can directly demodulate signals. In long-haul transmission, there is large cross
polarization interference, and consequently the mean squared error (MSE) decreases to a
value less than the demodulation threshold. In this case, the XPIC technology needs to be
used to recover original signals.
XPIC
If the co-channel dual-polarization (CCDP) technology is used in channel configuration, the
XPIC technology can be used to eliminate interference between two electromagnetic waves.
The transmitter sends two orthogonally polarized electromagnetic waves to the receiver over
the same channel. The receiver then recovers the original two channels of signals after XPIC
eliminates interference between the two waves.
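Conceptually, the receiver sees each polarization plus a leaked fraction of the other; if the coupling is known, the original signals can be recovered by inverting the 2x2 mixing. The sketch below uses a symmetric real-valued coupling coefficient purely for illustration; real XPIC operates on complex baseband signals with adaptive filters:

```python
def xpic_cancel(r_h: float, r_v: float, c: float) -> tuple[float, float]:
    """Undo symmetric cross-polarization leakage:
    r_h = s_h + c*s_v,  r_v = s_v + c*s_h  (|c| < 1).
    Inverts the 2x2 mixing matrix [[1, c], [c, 1]]."""
    det = 1.0 - c * c
    s_h = (r_h - c * r_v) / det
    s_v = (r_v - c * r_h) / det
    return s_h, s_v

# Transmit (2.0, 3.0); 20% of each polarization leaks into the other.
r_h = 2.0 + 0.2 * 3.0   # received on the horizontal path
r_v = 3.0 + 0.2 * 2.0   # received on the vertical path
s_h, s_v = xpic_cancel(r_h, r_v, 0.2)
assert abs(s_h - 2.0) < 1e-9 and abs(s_v - 3.0) < 1e-9
```

The XPIC cable mentioned below carries exactly this kind of cross-polarization reference between the two NEs so each receiver can subtract the other polarization's contribution.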
System Configuration
When one XPIC group is configured, two RTN 380AX NEs need to be configured at each XPIC site. The XPIC ports of the two RTN 380AX NEs are connected using an XPIC cable, over which XPIC signals are transmitted. In addition, the two NEs are cascaded through COMBO ports or GE ports to transmit clock signals and XPIC management control signals.
Figure 2-7 Typical XPIC configuration (one dual-polarized antenna mounted with an OMT)
2.2.2 Specifications
This section lists the cross polarization interference cancellation (XPIC) specifications that the
RTN 380A/380AX supports.
XPIC specifications
Table 2-4 lists XPIC specifications that the RTN 380A/380AX supports.
Item Specifications
Supported bandwidth 125 MHz, 250 MHz, 500 MHz, 750 MHz, 1.0 GHz, 1.5 GHz, 2 GHz
For the service throughput of an NE with XPIC enabled, see Radio Working Modes and
Service Capacities.
Self-limitations
Item Description
COMBO port or GE(e) port A COMBO port or GE(e) port can be used to cascade adjacent NEs in an XPIC group and transmit XPIC management signals and clock signals.
Transmit power The maximum transmit power for two neighboring NEs in
an XPIC group must be consistent.
Table 2-6 Dependencies and Limitations Between XPIC and Other Features
Feature Description
LAG XPIC can work with link aggregation group (LAG), but you
must create a LAG manually.
PLA When XPIC works with PLA, XPIC cascading signals and
PLA cascading signals can share the same cable.
l XPIC must be enabled when co-channel dual polarization (CCDP) is used in channel
configuration.
l XPIC can be planned if ultra-high bandwidth (higher than 10G) is required or PLA is
configured.
l The following parameters must be set to the same values for both horizontal and vertical
polarization links in an XPIC group:
– Transmit frequency
– Transmit power
– T/R spacing
– Automatic transmit power control (ATPC) status (enabled or disabled)
– ATPC adjustment thresholds
– Channel spacing
– Modulation scheme
– Adaptive modulation (AM) status (enabled or disabled)
– Modulation scheme of guaranteed AM capacity
– Modulation scheme of full AM capacity
2.3 PLA
This chapter describes physical link aggregation (PLA). PLA aggregates links and
implements load sharing over these links based on physical-layer bandwidths. PLA
effectively improves bandwidth utilization and reliability for transmitting Ethernet services
over microwave links.
2.3.1 Introduction
This section defines physical link aggregation (PLA) and describes its purpose.
PLA
PLA aggregates multiple microwave links and implements load sharing among these links
based on physical-layer bandwidths.
PLA does not depend on MAC addresses (L2 service flows) or IP addresses (L3 service
flows). Therefore, PLA is also called L1 LAG.
In addition to load sharing, PLA provides protection for member links. If a member link is
interrupted, services on the link are scheduled to other available links based on priorities,
which ensures the transmission of high-priority services.
PLA uses a traffic balancing algorithm to maintain equivalent Ethernet bandwidth utilization between member microwave links even when the Ethernet bandwidths of the member microwave links change.
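The balancing idea can be sketched as a proportional split: each member link carries a share of the offered traffic in proportion to its current physical-layer Ethernet bandwidth, and the split is recomputed whenever a link's bandwidth changes (for example, after an AM shift). The function below illustrates the idea only; it is not the product's actual scheduler:

```python
def share_traffic(total_mbps: float, link_bw_mbps: list[float]) -> list[float]:
    """Split offered traffic across member links in proportion to the
    Ethernet bandwidth each link currently provides. A failed link is
    represented by bandwidth 0 and receives no traffic."""
    capacity = sum(link_bw_mbps)
    if capacity == 0:
        return [0.0 for _ in link_bw_mbps]
    return [total_mbps * bw / capacity for bw in link_bw_mbps]

# Two links at 100 and 200 Mbit/s share 300 Mbit/s of traffic 1:2.
assert share_traffic(300, [100, 200]) == [100.0, 200.0]
# After the first link fails, all traffic moves to the surviving link.
assert share_traffic(300, [0, 200]) == [0.0, 300.0]
```

Because the split happens below the MAC layer, it is independent of MAC or IP addresses, which is why PLA is also called L1 LAG.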
System Configuration
Two RTN 380A/380AXs can be cascaded to form a PLA group.
The master and slave NEs in a PLA group must be specified and use a service port for PLA
signal cascading. For an RTN 380A/380AX, the GE optical port, GE electrical port, and
COMBO port can be used to cascade PLA signals.
Based on service access modes, three PLA system configuration modes are available:
l Service access through a single NE
Only the master NE is used for service access. This configuration mode provides
protection for microwave links but not for equipment.
– This configuration mode requires the service source (an IDU or user equipment) to
provide two access ports and support static LAG. The LAG of the service source
and the E-LAG on the RTN 380A/380AXs can cooperate to implement protection
switching for both equipment and access links.
Switching Conditions
The following service faults will trigger PLA switching:
l MW_LOF
l MW_RDI
l MW_BER_EXC
l MW_BER_SD
l MW_LIM
The following hardware faults will trigger PLA switching:
l HARD_BAD
l Cascade cable fault
l Service port fault
l Device power failure
l Cold device reset
2.3.2 Principles
Physical link aggregation (PLA) adjusts traffic allocation between member links based on the
Ethernet bandwidth provided by each member link. The principles for link protection
switching and equipment protection switching are different.
NOTE
This section describes the implementation principles of PLA with enhanced link aggregation group (E-
LAG) configured.
l The link protection switching principle in other configuration modes is similar to that in this
configuration mode.
l The E-LAG protocol is not required for equipment protection switching in the configuration mode of
service access through an optical splitter because the PLA protocol is used to enable or disable the
service port on the master and slave NEs.
l Equipment protection switching is not supported in the configuration mode of service access through
a single NE.
NOTE
The Ethernet service signals transmitted between the master and slave NEs include PLA
packets, inband data communication network (DCN) packets, and communication protocol
packets. The communication protocol provides the following functions:
l Sets up a heartbeat connection so that an NE can quickly obtain status information about
the communication with the other NE in the same PLA group.
l Transmits NE status and Ethernet bandwidth information so that PLA modules can
adjust traffic allocation in a timely manner.
l Transmits PLA configuration information so that an NE can check the configuration
consistency between itself and the other NE in the same PLA group.
As shown in Figure 2-15, when the master NE detects that the microwave link fails, the PLA
module on the master NE stops transmitting service signals to the microwave port and
transmits service signals only to the slave NE.
When the microwave link recovers, the PLA module automatically starts transmitting
Ethernet service signals over both the master and slave links.
NOTE
Failure of the slave NE triggers link protection switching, but not equipment protection switching.
As shown in Figure 2-16, when the slave NE detects an NB_UNREACHABLE alarm and
receives a message from the remote NE indicating that the master link is faulty, the equipment
protection switchover is triggered. After the switchover, the LAG system priority of the slave
NE changes to the highest. Therefore, the IDU where the service source is located sends
Ethernet services to the service port of the slave NE. The PLA module on the slave NE sends
the received Ethernet service signals to the MUX unit through the microwave port. The MUX
unit adds overheads to the Ethernet service signals to form microwave frames. Then, the
modem unit modulates the microwave frames and sends them to the FO.
When the master NE recovers, no revertive switching occurs. The PLA module on the slave
NE allocates and schedules services to the master and slave NEs based on the traffic
balancing algorithm, as shown in Figure 2-17.
Figure 2-17 Equipment protection switching principles (after the fault is rectified)
2.3.3 Specifications
This section lists the physical link aggregation (PLA) specifications that RTN 380A/380AX
supports.
Self-limitations
Item Description
Master and slave NEs l When configuring PLA, ensure that the number, IDs, and types of service ports on the master NE are the same as those on the slave NE.
l When configuring PLA, ensure that the ID and type of the cascade port on the master NE are the same as those on the slave NE.
l Cascade ports on the master and slave NEs cannot be
configured with Ethernet services or protections.
Air-interface capacity The air-interface capacity over any two microwave links in a
PLA group cannot differ by more than a factor of 10.
Otherwise, services may be interrupted.
Table 2-9 Dependencies and Limitations Between PLA and Other Features
Item Description
Adaptive modulation (AM) PLA can coexist with AM. The member IF boards in a PLA group can have the same or different Hybrid/AM attributes and modulation schemes.
Data communication network (DCN) The slave microwave links in a PLA group cannot transmit inband DCN messages. Therefore, enable outband DCN for each member link when you are configuring PLA.
Cross polarization interference cancellation (XPIC) Two members of an XPIC group can form a PLA group, which provides Ethernet service protection between the vertical and horizontal polarization directions.
2.4.1 Introduction
This section introduces ATPC.
ATPC relies on the received signal level (RSL) at the receiver to adjust transmit power.
The ATPC feature enables a transmitter to automatically adjust its transmit power within the ATPC control range based on the RSL at the receiver. The RSL therefore remains within a fixed range, and the residual bit error rate (BER) and interference to neighboring systems are reduced.
When ATPC is enabled:
l If the RSL is at least 2 dB less than the value halfway between the ATPC upper and
lower thresholds, the receiver instructs the transmitter to increase transmit power so that
the RSL does not deviate more than 2 dB from the halfway value.
l If the RSL is at least 2 dB greater than the value halfway between the ATPC upper and
lower thresholds, the receiver instructs the transmitter to decrease transmit power so that
the RSL does not deviate more than 2 dB from the halfway value.
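The threshold rule above can be sketched as a small helper that returns the power correction (in dB) the receiver would request. The +/-2 dB deadband follows the description here, but all numeric values in the example are illustrative:

```python
def atpc_request(rsl_dbm: float, lower_dbm: float, upper_dbm: float) -> float:
    """Return the transmit-power change (dB) the receiver requests so the
    RSL moves back toward the value halfway between the ATPC thresholds.
    Inside the +/-2 dB deadband around the halfway value, request nothing."""
    halfway = (lower_dbm + upper_dbm) / 2.0
    offset = halfway - rsl_dbm
    if abs(offset) <= 2.0:
        return 0.0   # RSL close enough: leave transmit power alone
    return offset    # positive -> increase power, negative -> decrease

assert atpc_request(-60.0, -70.0, -50.0) == 0.0   # already at the halfway value
assert atpc_request(-65.0, -70.0, -50.0) == 5.0   # fading: raise power 5 dB
assert atpc_request(-55.0, -70.0, -50.0) == -5.0  # too strong: cut power 5 dB
```

In practice the increase is also capped by the preset maximum transmit power, as noted in the self-limitations below.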
2.4.2 Specifications
This section lists the automatic transmit power control (ATPC) specifications that OptiX RTN
380A/380AX supports.
Self-limitations
Item Description
ATPC adjustment l The transmitter will not increase its transmit power if the
actual transmit power has reached the preset maximum
value.
l The value for maximum transmit power cannot be set
higher than the rated maximum transmit power of OptiX
RTN 380A/380AX.
l If no value is set for maximum transmit power, transmit
power will not increase beyond the rated maximum
transmit power of OptiX RTN 380A/380AX.
Table 2-12 Dependencies and Limitations Between ATPC and Other Features
Feature Description
2.5 AMAC
Adaptive modulation and adaptive channel spacing (AMAC) technology adjusts the
modulation scheme automatically based on channel quality, which includes adaptive
modulation (AM) and adaptive channel spacing (AC).
2.5.1 Introduction
This section introduces adaptive modulation (AM) and adaptive modulation and adaptive channel spacing (AMAC).
AM
The AM function automatically adjusts the modulation scheme according to channel quality. After AM is enabled, the radio service capacity varies with the modulation scheme as long as the channel spacing remains unchanged. The higher the modulation order, the higher the service capacity.
l When channel quality is good (such as on clear days), the equipment uses a high-order
modulation scheme to allow transmission of more user services. This improves the
transmission and spectral efficiency of the system.
l When channel quality deteriorates (such as on stormy or foggy days), the equipment uses
a low-order modulation scheme so that higher-priority services are preferentially
transmitted using the available bandwidth. If certain lower-priority queues are congested
due to insufficient air interface capacity, some or all the services in these queues are
discarded. This improves the anti-interference capability of links and ensures link
availability for higher-priority services.
AM Principles
The AM function is implemented by the AM engine in a modem unit. This section uses the
AM downshift in one service direction as an example to describe how AM is implemented.
1. As shown in Figure 2-20, the MUX unit at the transmit end multiplexes services that are
scheduled to the microwave port into microwave frames. The microwave frames are then
transmitted to the receive end over the TX path.
2. The RX path at the receive end receives and processes IF signals and checks the signal-
to-noise ratio (SNR) of the signals.
NOTE
In the current modulation scheme, the quality of the received signals is considered poor if the
value of the SNR is lower than the preset threshold, and the quality of the received signals is
considered good if the SNR is higher than the preset threshold.
3. The RX path at the receive end transmits a signal indicating the quality of the received
signals to the local AM engine.
4. The AM engine sends a shift indication signal, which is contained in a microwave frame,
to the transmit end over the TX path.
5. When processing the received IF signals, the modem unit at the transmit end extracts the
shift indication signal and sends it to the local AM engine.
6. The AM engine sends the shift indication signal to the MUX unit, instructing the MUX
unit and modem unit to shift the modulation scheme after N frames are transmitted.
7. In addition, the transmit end inserts the shift indication signal into a microwave frame
transmitted to the receive end. After the receive end detects the shift indication signal,
the MUX unit and modem unit at the receive end also shift the modulation scheme after
N frames are received. In this manner, the modulation scheme is shifted at both the
transmit and receive ends based on the frame boundary.
After the downshift, the transmit end discards lower-priority Ethernet services based on the
bandwidth available for microwave frames and schedules higher-priority Ethernet services to
the microwave port. See Figure 2-21.
When detecting that the SNR of the received signals is higher than the threshold for triggering
a modulation scheme upshift, the modem unit at the receive end instructs the transmit end to
perform an upshift. After the upshift, the bandwidth for microwave frames increases, allowing
more Ethernet services to be transmitted.
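The SNR-threshold decision in steps 2 through 7 can be sketched as a small shift function. The scheme ladder and threshold values below are illustrative only, and the real AM engine additionally coordinates the shift over the frame boundary as described above:

```python
SCHEMES = ["QPSK", "16QAM", "64QAM", "256QAM"]  # illustrative ladder

def am_shift(current: str, snr_db: float, down_thr: float, up_thr: float) -> str:
    """Pick the next modulation scheme from the measured SNR:
    below the downshift threshold, step one scheme down; above the
    upshift threshold, step one scheme up; otherwise hold."""
    i = SCHEMES.index(current)
    if snr_db < down_thr and i > 0:
        return SCHEMES[i - 1]
    if snr_db > up_thr and i < len(SCHEMES) - 1:
        return SCHEMES[i + 1]
    return current

assert am_shift("64QAM", 18.0, 21.0, 28.0) == "16QAM"   # poor SNR: downshift
assert am_shift("64QAM", 30.0, 21.0, 28.0) == "256QAM"  # good SNR: upshift
assert am_shift("64QAM", 25.0, 21.0, 28.0) == "64QAM"   # in between: hold
```

Keeping the upshift threshold above the downshift threshold creates hysteresis, which prevents the modem from oscillating between schemes when the SNR hovers near a single boundary.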
AMAC
AMAC is the enhancement of AM. If the lowest-order modulation scheme is BPSK and
channel quality deteriorates continuously, the equipment can decrease the channel spacing to
reduce the impact of channel quality deterioration on service signals.
AMAC is usually used to protect microwave links against frequency selective fading. As
shown in Figure 2-23, when the modulation scheme shifts to BPSK, signal fading on a
frequency is high. Therefore, the equipment cannot recover the original service signals, and
the channel cannot transmit services normally. AMAC can decrease the channel spacing to
improve the spectrum curve of signal receiving. The equipment can recover the original
service signals, ensuring the normal transmission of high-priority services.
2.5.2 Specifications
This section lists the adaptive modulation (AM) specifications that RTN 380A/380AX
supports.
Item Specifications
Self-limitations
3 Ethernet Features
This section describes the Ethernet features that the OptiX RTN 380A/380AX supports.
3.1 QinQ
This chapter describes the 802.1Q in 802.1Q (QinQ) feature.
3.2 Ethernet Ring Protection Switching
Ethernet ring protection switching (ERPS), which is applicable to ring physical networks,
protects E-LAN services on an Ethernet ring.
3.3 Link Aggregation Group
This chapter describes link aggregation group (LAG). In a LAG, multiple links to the same
device are aggregated to work as a logical link. This helps to increase bandwidth and improve
link reliability.
3.4 QoS
This section describes quality of service (QoS). QoS provides different levels of service
quality in certain aspects of services as required, such as bandwidth, delay, jitter, and packet
loss ratio. This ensures that the request and response of a user or application reaches an
expected quality level.
3.5 HQoS
Hierarchical quality of service (HQoS) offers a multi-level queue scheduling mechanism for
the DiffServ (DS) model to guarantee bandwidth for multiple services of different users.
3.6 ETH OAM
ETH OAM detects and monitors the connectivity and performance of service links using
OAM protocol data units (PDUs). ETH OAM does not affect services.
3.7 Bandwidth Notification
When interconnecting with a Huawei ATN or CX router, the OptiX RTN 380A/380AX uses
the bandwidth notification function to inform the router of its air-interface bandwidth
changes, and the router performs quality of service (QoS) processing accordingly.
3.1 QinQ
This chapter describes the 802.1Q in 802.1Q (QinQ) feature.
3.1.1 Introduction
This section introduces 802.1Q in 802.1Q (QinQ).
Definition
QinQ is a Layer 2 tunnel protocol based on IEEE 802.1Q encapsulation. The QinQ
technology encapsulates a private virtual local area network (VLAN) tag into a public VLAN
tag. Packets carrying two VLAN tags are transmitted on the backbone network of an operator.
QinQ provides Layer 2 virtual private network (VPN) tunnels.
The inner VLAN tag is a customer VLAN (C-VLAN) tag, and the outer VLAN tag is a service provider VLAN (S-VLAN) tag.
Benefits
The QinQ technology brings the following benefits:
The default TPID of an S-TAG is 0x88A8, and the TPID can be modified as required. In addition, a field indicating the S-TAG frame priority is added.
Before being transmitted from a user network to an operator network, Ethernet packets may
be untagged frames or tagged frames. When the Ethernet packets are transmitted within the
operator network, they carry only S-TAGs or a combination of C-TAGs and S-TAGs.
Application
When Ethernet packets are transmitted from a user network to an operator network, S-TAGs are added to the packets based on PORT or PORT+C-VLAN, and the packets are then forwarded based on their S-VLAN tags (carried by E-Line services). Swapping of S-VLAN tags is allowed when E-Line services are created to carry Ethernet packets.
The Ethernet packets are transmitted to the user network from the operator network after their
S-VLAN tags are removed.
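Pushing an S-TAG is a 4-byte insertion after the destination and source MAC addresses: the S-TAG TPID (0x88A8 by default, as noted above) followed by a tag control field carrying the priority and S-VLAN ID. A minimal sketch, with illustrative frame contents:

```python
def push_svlan(frame: bytes, s_vid: int, pcp: int = 0, tpid: int = 0x88A8) -> bytes:
    """Insert an S-TAG after the 12 bytes of destination + source MAC.
    The tag is TPID (2 bytes) + TCI (3-bit PCP, 1-bit DEI=0, 12-bit VID)."""
    tci = (pcp << 13) | (s_vid & 0x0FFF)
    return frame[:12] + tpid.to_bytes(2, "big") + tci.to_bytes(2, "big") + frame[12:]

# Untagged IPv4 frame: 12 bytes of MAC addresses, EtherType 0x0800, then payload.
frame = bytes(12) + b"\x08\x00" + b"payload"
tagged = push_svlan(frame, s_vid=100)
assert tagged[12:14] == b"\x88\xa8"   # S-TAG TPID
assert tagged[14:16] == b"\x00\x64"   # PCP 0, S-VLAN ID 100
assert tagged[16:18] == b"\x08\x00"   # original EtherType follows the tag
```

Stripping the S-TAG at the egress UNI is the reverse: remove bytes 12 through 15 so the frame leaves the operator network in its original form.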
IEEE 802.1ad: Virtual Bridged Local Area Networks Amendment 4: Provider Bridges
3.1.3 Specifications
This section provides the QinQ specifications that OptiX RTN 380A/380AX supports.
Item Specifications
Setting of the QinQ type field Supported, with the default value being 0x88A8
QinQ operation type (QinQ-based E-Line services):
l Adding S-VLAN tags (from a UNI to an NNI)
l Stripping S-VLAN tags (from an NNI to a UNI)
l Swapping S-VLAN tags (from a UNI to a UNI, from an NNI to an NNI)
QinQ operation type (802.1ad bridge-based E-LAN services):
l Adding S-VLAN tags based on PORT (UNI port)
l Adding S-VLAN tags based on PORT+C-VLAN (UNI port)
l Mounting ports based on PORT+S-VLAN (NNI port)
Self-limitations
Item Description
l Plan S-VLANs and QinQ service type (E-Line or E-LAN) based on service
requirements.
l Set the same QinQ type field for the ports at both ends of a QinQ link (transmitting
Ethernet packets with S-VLAN IDs). The value 0x88A8 is recommended.
3.2.1 Introduction
This section introduces Ethernet ring protection switching (ERPS).
Definition
ERPS refers to the automatic protection switching (APS) protocol and protection switching
mechanisms for Ethernet rings. ERPS is applicable to Layer 2 Ethernet ring topologies, and
provides protection for E-LAN services on an Ethernet ring.
OptiX RTN 380A/380AX supports ERPS V1 and ERPS V2, which protect Ethernet services on single-ring and multi-ring networks. The following describes Ethernet service protection on a single-ring network.
When a ring network is configured with ERPS, under normal conditions, the RPL owner
blocks the port on a certain side so that all the services are transmitted through the port on the
other side. In this manner, service loops can be prevented. If a ring link or a ring node fails,
the RPL owner unblocks the preceding port and the services that cannot be transmitted over
the faulty point can be transmitted through this port. In this manner, ring protection is
achieved.
ERP Instance
An ERP instance is the basic unit of ERPS. An ERP instance defines the ring links, ring
protection link (RPL), RPL owner node, control VLAN, destination MAC addresses, and east/
west ring ports.
l An RPL is the ring link on which traffic is blocked during Idle conditions. Only one RPL
is defined on an Ethernet ring.
l An RPL owner node is a ring node at one end of the RPL. When an Ethernet ring is in
the normal state, the RPL port on the RPL owner node is blocked to prevent the service
channels from forming loops. Only one RPL owner node can exist on an Ethernet ring
network.
l A ring port is a link connection point on a ring node. A ring port can be an Ethernet port
or a microwave port.
l The following figure is an example of a ring network. Generally, in the counter-
clockwise direction and on a ring node, the ring port that transmits services is the east
ring port and the ring port that receives services is the west ring port.
l A ring-APS (R-APS) message is a request message for Ethernet ring protection
switching (ERPS) and link status.
l An R-APS message contains a fixed destination MAC address 01-19-A7-00-00-01.
l The VLAN ID carried by an R-APS message, which is different from the VLAN IDs of
Ethernet services, separates the message from Ethernet services.
After the non-RPL link recovers, the ring restores to the normal state.
l When an RPL link fails, ERPS transmits R-APS (SF, DNF) messages to inform all ring
nodes of the failure and prevent them from flushing their FDBs. However, service
transmission is not affected.
After the RPL link recovers, the ring restores to the normal state.
3.2.3 Specifications
This section lists the Ethernet ring protection switching (ERPS) specifications that OptiX
RTN 380A/380AX supports.
Table 3-3 and Table 3-4 list the ERPS specifications.
ERP instance:
l RPL owner node: An ERP ring has only one RPL owner node.
l RPL neighbor node: A ring node next to the RPL owner node can be the RPL neighbor node.
Item Specifications
Self-limitations
Hybrid networking of nodes supporting only ERPSv1 and nodes supporting ERPSv2: Nodes supporting ERPSv2 and nodes supporting only ERPSv1 can form an ERPS-capable single-ring network. In this scenario, the RPL owner must be a node that supports ERPSv2, and all ERPS instances on the ring must use ERPSv1.
Table 3-6 Dependencies and Limitations Between ERPS and Other Features
Feature Description
l 1+1 HSB
l ETH PWE3
3.2.6.1 ERPS V1
ERPS V1 or ERPS V2 can be deployed on a single-ring network to protect Ethernet services.
ERPS V1 is used as an example here.
3.2.6.2 ERPS V2
ERPS V2 can be deployed to protect Ethernet services on rings on a multi-ring network.
l It is recommended that you allocate a unique ERPS ID (see note a), starting from 1, to each ERP ring.
NOTE
a: It is recommended that all NEs on an ERP ring have the same ERPS ID, which facilitates data
configuration and management.
l For a major ring, it is recommended that you plan the counterclockwise direction as the
main direction of service transmission. For a ring node on the major ring, the port that
transmits services is an east port, and the port that receives services is a west port.
l For a sub-ring, it is recommended that you plan the counterclockwise direction as the
main direction of service transmission. For a ring node on the sub-ring, the port that
transmits services is an east port, and the port that receives services is a west port. A sub-
ring has only one ring port (east or west) on an interconnection node.
l A major ring or sub-ring can have only one RPL owner node. A ring node adjacent to the
RPL owner node is the RPL neighbor node. It is recommended that you configure the
east port on the RPL owner node as the RPL port, configure the west port on the RPL
neighbor node as the RPL neighbor port, and configure the east port on RPL owner
node's upstream node and the west port on the RPL neighbor node's downstream node as
RPL next neighbor ports.
NOTE
l It is not recommended that you plan a service convergence node as the RPL owner node or RPL
neighbor node, because the west and east ports on a service convergence node must receive and
transmit services in normal situations.
l It is not recommended that you plan an interconnection node as the RPL owner node for a sub-ring,
because a sub-ring has only one ring port on an interconnection node.
l Plan control VLAN IDs for R-APS channels on both major rings and sub-rings. Control
VLAN IDs must be different from the VLAN IDs of Ethernet services. All ring nodes on
an ERP ring must use the same control VLAN ID. It is recommended that you use the
same control VLAN ID for R-APS channels on all ERP rings of a ring network.
l Plan the ERPS reversion mode as required. It is recommended that you retain the default
value for the ERPS reversion mode.
– The control VLAN ID of an R-APS virtual channel must be different from the
VLAN IDs of the services carried by the same ERP ring.
– The control VLAN ID of an R-APS virtual channel must be different from the
control VLAN ID of the R-APS channel on the same ERP ring.
– A VLAN switching table for R-APS virtual channels must be configured on the two
interconnection nodes shared by an ERP ring and a sub-ring.
– The ERPS switching time on a sub-ring may take a long time because R-APS
packets need to travel through a long R-APS virtual channel.
3.3.1 Introduction
This section introduces link aggregation group (LAG).
In manual aggregation, a LAG is manually created, and the Link Aggregation Control
Protocol (LACP) is not enabled. A port can be in the Up or Down state. The system
determines whether to aggregate ports according to their states (Up or Down), working
modes, and rates.
In static aggregation, a LAG is manually created, and LACP is enabled. By running LACP, a LAG can determine the state of each member port. A member port can be in the selected, standby, or unselected state. Compared with manual aggregation, static aggregation controls link aggregation more accurately and effectively.
In load-sharing mode, each member link in a LAG carries traffic based on the load balancing
algorithm, and the link bandwidth increases. When members in the LAG change or some
links fail, traffic is reallocated automatically.
In non-load sharing mode, only one member link in a LAG functions as the active link and
carries traffic, and the other links are in the standby state. When the active link fails, the
system selects a standby link to take over. This is equivalent to a hot standby mechanism.
The master port is a logical port and participates in service configuration on behalf of the
entire LAG. A LAG has only one master port. The master port cannot quit the LAG unless the
LAG is deleted. If the LAG is deleted, its services are still carried by the master port.
Standby ports cannot participate in service configuration and can be added to or deleted from
a LAG.
3.3.2 E-LAG
When switching occurs on the NEs in a 1+1 HSB, an enhanced link aggregation group (E-
LAG) is required to implement switching for active and standby GE access links (HSB is
short for hot standby).
Definition
E-LAG is a mechanism that implements multi-chassis link aggregation using the Link
Aggregation Control Protocol (LACP). It enhances Ethernet link reliability from the port level
to the equipment level.
Two OptiX RTN 380A/380AXs form a 1+1 HSB. A static link aggregation group (LAG) that
has only the master port is configured on each of the OptiX RTN 380A/380AXs. The master
and slave OptiX RTN 380A/380AXs exchange 1+1 HSB protection protocol packets so that
the LAGs on them form a multi-chassis E-LAG. A static, non-load sharing, and non-revertive
LAG must be configured on the IDU (or UNI equipment) connected to the OptiX RTN 380A/
380AXs. This LAG works with the 1+1 HSB to implement switching for the active and
standby GE access links.
Principles
NOTE
The following describes the E-LAG principles at the transmit end. The E-LAG principles at the receive end
are similar.
1. Before E-LAG switching
NE 1 is the master NE in the 1+1 HSB. In normal cases, the 1+1 HSB protection
protocol sets the highest LAG system priority on NE 1 and a lower LAG system priority
on NE 2. Manually set the LAG system priority on the IDU to be much lower than the
LAG system priorities set on NE 1 and NE 2 (it is recommended that the value of the
LAG system priority on the IDU be greater than 1000). According to the LACP
negotiation results, the link between NE 1 and the IDU is in the Selected state, and the
link between NE 2 and the IDU is in the Unselected state. As a result, the IDU transmits
services only to NE 1.
2. E-LAG switching
When switching occurs on NE 1 or NE 2, they exchange the LAG system priorities, and
the 1+1 HSB protection protocol sets the highest LAG system priority on NE 2.
According to the LACP renegotiation results, the link between NE 1 and the IDU is in
the Unselected state, and the link between NE 2 and the IDU is in the Selected state. As a
result, the IDU transmits services only to NE 2.
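The selection logic described in the two steps above can be sketched as follows. In LACP, a numerically lower system priority value is the better priority; the priority values below are illustrative, and the function name is hypothetical.

```python
# Sketch of the E-LAG selection described above: the IDU-side LAG selects
# the link whose partner advertises the best (numerically lowest) LACP
# system priority. Priority values are illustrative.
def select_active_link(partner_priorities):
    """Return the link toward the partner with the best system priority."""
    return min(partner_priorities, key=partner_priorities.get)

# Before switching: the 1+1 HSB protocol gives NE 1 the best priority,
# so the IDU transmits services only to NE 1.
links = {"to_NE1": 100, "to_NE2": 200}
assert select_active_link(links) == "to_NE1"

# After switching, NE 1 and NE 2 exchange priorities, and LACP
# renegotiation moves the Selected state to the link toward NE 2.
links = {"to_NE1": 200, "to_NE2": 100}
assert select_active_link(links) == "to_NE2"
```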
3.3.4 Specifications
This section lists the link aggregation group (LAG) specifications that OptiX RTN 380A/
380AX supports.
Item Specifications
Self-limitations
Item Description
LAG member ports
l A GE optical port and a GE electrical port can form a LAG.
l A microwave port and a GE port can form a LAG. There are some restrictions on the LAG:
– The microwave port must function as the master port, and the GE port must function as the slave port. The slave port must work in 1000M full-duplex mode or auto-negotiation mode.
– After the LAG is configured, the working mode or the maximum frame length of the slave port cannot be modified.
Transparent LACP packet transmission
l Only one service can be created on a physical port to transparently transmit LACP packets.
l E-Line and E-LAN services with transparent LACP packet transmission enabled can carry only LACP packets.
Table 3-9 Dependencies and Limitations Between LAG and Other Features
Feature Description
l PLA
l Use the same aggregation type at both ends. Static aggregation is recommended.
l Use the same load-sharing mode at both ends. The non-load sharing mode is appropriate
if a LAG is configured for protection, and the load-sharing mode is appropriate if a LAG
is configured to increase bandwidth.
l OptiX RTN 380A/380AX supports load-sharing algorithms based on media access
control (MAC) addresses (source MAC addresses, destination MAC addresses, and
source MAC addresses plus destination MAC addresses) and load-sharing algorithms
based on IP addresses (source IP addresses, destination IP addresses, and source IP
addresses plus destination IP addresses, MPLS label). Note the following when selecting
an algorithm:
– For a load-sharing LAG, the auto-sensing algorithm is recommended.
– If a LAG transmits Ethernet packets containing IP packets, the LAG uses the load-
sharing algorithm based on IP addresses.
If a LAG transmits Ethernet packets containing no IP packets, the LAG uses the
load-sharing algorithm based on source MAC addresses.
– For OptiX RTN 380A/380AX, a load-sharing algorithm takes effect at the NE level.
l It is recommended that you set the master and slave ports consistently for the equipment
at both ends.
l It is recommended that the system priority of a LAG take the default value. The system
priority is valid only when the LAG is in static aggregation mode.
l When LACP packets pass through an intermediate network, it is recommended to set
Packet Receive Timeout Period to Short period. In other scenarios, set it to Long
period to prevent unnecessary switchovers.
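The MAC- and IP-address-based load-sharing algorithms listed above all follow the same pattern: hash the chosen header fields of each frame onto a member-link index, so that all frames of one flow take the same link and keep their order. The hash below is purely illustrative, not the device's actual algorithm.

```python
# Illustrative load-sharing sketch: hashing header fields to pick a LAG
# member link. The CRC32 hash shown is NOT the device's actual algorithm.
import zlib

def pick_member(n_links, src_mac=None, dst_mac=None, src_ip=None, dst_ip=None):
    """Hash whichever fields the configured algorithm uses onto a member index."""
    key = b"".join(f.encode() for f in (src_mac, dst_mac, src_ip, dst_ip) if f)
    return zlib.crc32(key) % n_links

# Frames of one flow always map to the same member, preserving frame order;
# different flows spread across the members, increasing usable bandwidth.
a = pick_member(2, src_mac="00:11:22:33:44:55", dst_mac="66:77:88:99:aa:bb")
b = pick_member(2, src_mac="00:11:22:33:44:55", dst_mac="66:77:88:99:aa:bb")
assert a == b
```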
3.4 QoS
This section describes quality of service (QoS). QoS provides different levels of service
quality in certain aspects of services as required, such as bandwidth, delay, jitter, and packet
loss ratio. This ensures that the request and response of a user or application reaches an
expected quality level.
3.4.1 Introduction
This section introduces quality of service (QoS).
Definition
QoS provides different levels of service quality in certain aspects of services as required, such
as bandwidth, delay, jitter, and packet loss ratio.
QoS processing
The following figure illustrates how QoS is performed on Ethernet services.
l In the ingress direction, QoS performs traffic classification and monitoring for incoming
flows.
l In the egress direction, QoS performs congestion avoidance, traffic shaping, and queue
scheduling for outgoing flows.
QoS Model
The following figure shows QoS technologies applicable to each QoS application point in the
QoS model for Native Ethernet services.
In the egress direction, OptiX RTN 380A/380AX modifies the CoS information carried by
packets based on the mapping between the PHB and the trusted CoS.
The following figure shows the default mappings from PHBs to priorities of egress packets.
Simple traffic classification maps packets carrying different CoSs to specific PHBs.
3.4.1.3 CAR
Committed access rate (CAR) is a traffic policing technology. CAR assigns a high priority to
traffic that does not exceed the rate limit and drops or downgrades traffic that exceeds the rate
limit.
Application
The following figure shows how traffic changes after CAR processing.
Red packets are directly dropped. Green packets and yellow packets pass traffic policing, and
yellow packets are re-marked.
Principles
The following CAR operations are performed for traffic policing:
l Packets are classified based on preset matching rules. If packets do not match any rules,
they are transmitted directly. If packets match some rules, they are placed into the token
bucket for further processing.
l The CAR uses the dual token bucket three color marker algorithm.
l The dual token bucket three color marker algorithm uses two token buckets Tc and Tp
and marks packets according to the situations when packets pass the token buckets.
l Tokens are placed into the Tp token bucket at the PIR, and the capacity of the Tp token
bucket is equal to the PBS.
l Tokens are placed into the Tc token bucket at the CIR, and the capacity of the Tc token
bucket is equal to the CBS.
l If a packet obtains the Tc token, this packet is marked green. Green packets pass traffic
policing.
l If a packet obtains the Tp token but does not obtain the Tc token, this packet is marked
yellow.
l If a packet does not obtain the Tp token, this packet is marked red. This type of packets
is directly discarded.
l Yellow packets pass traffic policing but different operations are performed on them as
specified. These operations include:
– Discard: discarding packets
– Pass: forwarding packets
– Re-mark: re-marking packets
If packets are re-marked, they are mapped to a queue with a specified priority and
then forwarded.
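The dual token bucket marking rules above can be sketched directly. This is a minimal illustration under the bucket names Tc/Tp from the text; the rates and packet sizes in the usage are illustrative, and treating units as bytes per second is an assumption for the sketch.

```python
# Sketch of the dual token bucket three color marker described above:
# Tc is filled at the CIR up to the CBS, Tp at the PIR up to the PBS.
# Units are bytes and bytes/second purely for illustration.
class DualTokenBucket:
    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs = cir, cbs
        self.pir, self.pbs = pir, pbs
        self.tc, self.tp = cbs, pbs   # both buckets start full

    def refill(self, seconds):
        self.tc = min(self.cbs, self.tc + self.cir * seconds)
        self.tp = min(self.pbs, self.tp + self.pir * seconds)

    def mark(self, size):
        if size > self.tp:
            return "red"              # no Tp token: discarded
        if size > self.tc:
            self.tp -= size
            return "yellow"           # Tp but not Tc: passes, may be re-marked
        self.tc -= size
        self.tp -= size
        return "green"                # Tc token obtained: passes policing
```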
OptiX RTN 380A/380AX supports two congestion avoidance algorithms: tail drop and
weighted random early detection (WRED).
Tail Drop
With tail drop enabled, all newly arriving packets are dropped if the buffer queue is filled to
its maximum capacity.
WRED
With WRED enabled, yellow and red packets are preferentially dropped and green packets are
always transmitted first in the case of network congestion.
l At T0, if Buffer queue length exceeds Red Min. Th., there is a possibility that red
packets waiting to enter a queue are discarded at random.
l At T1, if Buffer queue length exceeds Yellow Min. Th., there is a possibility that
yellow and red packets waiting to enter the queue are discarded at random.
l At T2, if Buffer queue length exceeds Green Min. Th., there is a possibility that green,
yellow, and red packets waiting to enter the queue are discarded at random.
l At T3, if Buffer queue length exceeds Red Max. Th., all red packets waiting to enter
the queue are discarded, and there is a possibility that green and yellow packets waiting
to enter the queue are discarded at random.
l At T4, if Buffer queue length exceeds Yellow Max. Th., all red and yellow packets
waiting to enter the queue are discarded, and there is a possibility that green packets
waiting to enter the queue are discarded at random.
l At T5, if Buffer queue length exceeds Green Max. Th., all packets waiting to enter the
queue are discarded.
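The per-color behavior at T0 through T5 above reduces to one rule per color: below the color's minimum threshold, always enqueue; above its maximum threshold, always drop; in between, drop at random with a probability that grows with the queue length. The sketch below assumes a linear ramp and uses illustrative threshold values (in units of 256 bytes).

```python
# Sketch of the per-color WRED decision described above. Threshold values
# are illustrative (units of 256 bytes); the drop probability is assumed
# to ramp linearly between the minimum and maximum thresholds.
import random

THRESHOLDS = {            # (Min. Th., Max. Th.) per color, illustrative
    "red":    (1000, 3000),
    "yellow": (2000, 5000),
    "green":  (4000, 8000),
}

def wred_drop(queue_len, color, rng=random.random):
    lo, hi = THRESHOLDS[color]
    if queue_len <= lo:
        return False                                 # always enqueue
    if queue_len >= hi:
        return True                                  # always drop this color
    return rng() < (queue_len - lo) / (hi - lo)      # random early drop
```

Because red thresholds are lowest and green highest, yellow and red packets are dropped preferentially while green packets are transmitted first, as described above.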
SP
During SP scheduling, packets are transmitted in descending order of queue priorities. Packets
in a lower-priority queue can be transmitted only after a higher-priority queue becomes empty.
Therefore, important services are placed in higher-priority queues and are transmitted with
precedence over unimportant services.
SP scheduling uses all resources to ensure the quality of service (QoS) of higher-priority
services. If there are always packets in higher-priority queues, packets in lower-priority
queues will never be transmitted.
WRR
WRR allocates a weight to each queue and a service time segment to each queue based on the
weight. Packets in a WRR queue are transmitted at the allocated service time segment. If the
queue does not have packets, packets in the next queue are transmitted immediately.
Therefore, if a link is congested, WRR allocates bandwidth based on the weights of queues.
Unlike SP, WRR schedules packets in every queue based on weights, so even packets in
lower-priority queues have a chance to be transmitted.
SP+WRR
The SP+WRR algorithm ensures the precedence of higher-priority services (for example,
voice services) and assigns time segments to transmit lower-priority services.
l If CS7, CS6, and EF queues, which have higher priorities than WRR queues, have
packets, packets in the CS7, CS6, and EF queues are transmitted using SP whereas
packets in the WRR queues are not transmitted.
l If the CS7, CS6, and EF queues have no packets, packets in the WRR queues (AF4,
AF3, AF2, and AF1) are transmitted using WRR.
l If both WRR queues and CS7, CS6, and EF queues have no packets, packets in the
lower-priority queue (BE) are transmitted using SP.
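The SP+WRR order above can be sketched as a scheduler that serves CS7/CS6/EF strictly first, shares service among AF4-AF1 by weight, and serves BE only when everything else is empty. This is an illustration of the scheduling order only, not the device implementation; expanding weights into a fixed service sequence is one simple way to realize WRR.

```python
# Sketch of SP+WRR as described above: CS7/CS6/EF strictly first,
# AF4-AF1 by weighted round robin, BE last.
from collections import deque
from itertools import chain

class SpWrrScheduler:
    SP_HIGH = ("CS7", "CS6", "EF")   # strict-priority queues

    def __init__(self, weights=None):
        weights = weights or {"AF4": 1, "AF3": 1, "AF2": 1, "AF1": 1}
        self.queues = {q: deque()
                       for q in self.SP_HIGH + tuple(weights) + ("BE",)}
        # Expand the weights into a fixed round-robin service sequence.
        self.sequence = list(chain.from_iterable([q] * w
                                                 for q, w in weights.items()))
        self.pos = 0

    def enqueue(self, queue, packet):
        self.queues[queue].append(packet)

    def dequeue(self):
        for q in self.SP_HIGH:                    # CS7/CS6/EF first (SP)
            if self.queues[q]:
                return self.queues[q].popleft()
        for _ in range(len(self.sequence)):       # AF4-AF1 by weight (WRR)
            q = self.sequence[self.pos]
            self.pos = (self.pos + 1) % len(self.sequence)
            if self.queues[q]:
                return self.queues[q].popleft()
        if self.queues["BE"]:                     # BE only when all else idle
            return self.queues["BE"].popleft()
        return None
```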
Traffic shaping is implemented using the single token bucket two color marker algorithm.
Tokens are placed in the token bucket at the peak information rate (PIR). The capacity of the
token bucket is equal to the peak burst size (PBS).
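The single token bucket shaper above can be sketched as follows; unlike CAR, a non-conforming packet is buffered rather than dropped. The rates and sizes are illustrative, and treating units as bytes per second is an assumption for the sketch.

```python
# Sketch of the single token bucket two color marker used for shaping:
# tokens arrive at the PIR, the bucket holds at most the PBS, and a
# packet is sent only when enough tokens are available.
class Shaper:
    def __init__(self, pir, pbs):
        self.pir, self.pbs = pir, pbs
        self.tokens = pbs              # bucket starts full

    def refill(self, seconds):
        self.tokens = min(self.pbs, self.tokens + self.pir * seconds)

    def try_send(self, size):
        """Send if the packet conforms; otherwise it waits in the buffer."""
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```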
3.4.3 Specifications
This section lists the quality of service (QoS) specifications that OptiX RTN 380A/380AX
supports.
DiffServ
Maximum number of DiffServ (DS) domains: 4
Item Specifications
Per-hop behaviors (PHBs)
l CS7
l CS6
l EF
l AF4 (AF41, AF42, and AF43)
l AF3 (AF31, AF32, and AF33)
l AF2 (AF21, AF22, and AF23)
l AF1 (AF11, AF12, and AF13)
l BE
NOTE
l Packets mapped to the AF11, AF21, AF31, and AF41 queues are green by default.
l Packets mapped to the AF12, AF22, AF32, and AF42 queues are yellow by default.
l Packets mapped to the AF13, AF23, AF33, and AF43 queues are red by default.
NOTE
This parameter allows you to change the PHB class of untrusted
packets from the default BE to another value. Untrusted packets refer
to packets that cannot be mapped to a specific PHB class according
to the DS mapping.
Enabling/disabling of PHB demapping: Supported
Item Specifications
Congestion avoidance
l Tail drop: Both microwave ports and Ethernet ports support tail drop.
l WRED: Both microwave ports and Ethernet ports support WRED.
Queue scheduling
l Maximum number of egress queues: 8
l Weight allocation of WRR: When WRR is applied to the AF4, AF3, AF2, and AF1 queues, the default weight (25%) of each AF queue is changeable.
Protocol type
Self-limitations
WRR At each port of the OptiX RTN 380A/380AX, WRR queues must be
consecutive. That is, WRR queues and SP queues cannot interleave.
CAR When configuring the CAR remarking function, you need to specify the PHB
class to map the to-be-remarked packets. Only yellow packets can be remarked,
and only to green.
When creating port-based CAR, create a matching rule for a PORT+C/SVLAN-
based flow (if the VLAN ID is 0, CAR applies to packets carrying any VLAN
ID regardless of the wildcard value) and apply CAR.
Tail drop The tail drop threshold ranges from 0 to 131072 (unit: 256 bytes). A value
ranging from 2500 to 20000 (unit: 256 bytes) is recommended.
l If you set the tail drop threshold to a value less than 2500 or greater than
20000 (unit: 256 bytes), the accuracy of the SP/WRR scheduling algorithms
cannot meet requirements.
l The tail drop threshold cannot be set to 0. Otherwise, services will be
interrupted.
WRED In the WRED policy (in percentage), the queue length ranges from 38 to
131072 (unit: 256 bytes). A value ranging from 2500 to 20000 (unit: 256 bytes)
is recommended. If you set the tail drop threshold to a value less than 2500 or
greater than 20000 (unit: 256 bytes), the accuracy of the SP/WRR scheduling
algorithms cannot meet requirements.
Table 3-13 Dependencies and Limitations Between QoS and Other Features
Feature Description
Data service
Requirements:
l High bandwidth (a NodeB may require up to 20 Mbit/s bandwidth)
l Diverse services with different QoS requirements
l Low delay, low jitter, and low packet loss ratio for real-time services, such as video phone and online game services
l Statistical multiplexing for non-real-time services such as Internet access services, allowing a high convergence ratio
Solutions:
l Bandwidths are not converged for data services at the terminal access layer but reserved at the convergence layer based on the convergence ratio.
l Different services are tagged with different priorities on NodeBs and RNCs. Data services have a lower priority than voice services.
l Traffic policing is performed at the ingress port of OptiX RTN 380A/380AX connected to NodeBs.
l A mobile backhaul network consisting of OptiX RTN 380A/380AXs ensures high-priority service scheduling. It is recommended that data services be mapped to the AF1, AF2, AF3, or AF4 queue.
l Select simple traffic classification using DS or complex traffic classification based on the
trusted CoS.
l Configure DS based on the mapping between service priorities and PHBs. If wireless
network engineers have not yet worked out the mapping, liaise with them to determine
the mapping.
– CS6 and CS7 queues always have higher priorities, and the packets in these two
queues are always scheduled first. It is recommended that these queues be used for
control packets and management packets, which require the highest scheduling
priority and very low bandwidth.
– Do not place services that require high bandwidth and are insensitive to delay in
high-priority strict priority (SP) queues, such as EF. Otherwise, high-priority SP
queues will occupy all port bandwidth. It is recommended that voice services be
placed in the EF queue.
– It is recommended that data services be placed in AF1, AF2, AF3, and AF4 queues
using the weighted round robin (WRR) algorithm. The scheduling weights
determine the proportion of bandwidth allocated to each queue.
l If services traverse a third-party network, ensure that the third-party network provides a
bandwidth that is higher than or equal to the total bandwidth to be guaranteed.
l If the OptiX RTN 380A/380AX network provides a bandwidth lower than the total
bandwidth to be guaranteed, expand the network capacity.
l To restrict the bandwidth of services entering the RTN network based on the service
type, specify the rate limits at ingress ports for flows that are created in complex traffic
classification.
l To restrict the bandwidth of services based on PHBs (queues), perform shaping for port
queues.
l If a leased third-party network provides a bandwidth lower than the Ethernet port
bandwidth on its connected OptiX RTN 380A/380AX, perform shaping at the Ethernet
port so that the egress bandwidth of the OptiX RTN 380A/380AX matches the
bandwidth of the third-party network.
l To better share the air-interface link bandwidth, do not perform shaping for microwave
ports on OptiX RTN 380A/380AX unless necessary.
If low-priority services require a guaranteed minimum bandwidth, perform shaping for port
queues of high-priority services, or configure an appropriate queue scheduling policy.
To avoid congestion, it is recommended that you configure weighted random early detection
(WRED) for microwave ports on OptiX RTN 380A/380AX. WRED ensures the transmission
of high-priority services.
3.5 HQoS
Hierarchical quality of service (HQoS) offers a multi-level queue scheduling mechanism for
the DiffServ (DS) model to guarantee bandwidth for multiple services of different users.
3.5.1 Introduction
This section defines hierarchical quality of service (HQoS) and describes its application and model.
Definition
HQoS is a technology used to guarantee the bandwidth of multiple services of many
subscribers in the differentiated service (DiffServ) model through a queue scheduling
mechanism.
Purpose
The traditional DiffServ QoS technology schedules services based on ports. However, a single port differentiates service priorities but does not differentiate subscribers. If traffic from different subscribers has the same priority and enters the same port queue, the flows compete for the same queue resources and the service quality of individual subscribers cannot be guaranteed.
In the HQoS technology recommended in DSL Forum TR-059, data flows are classified into subscriber queues and service queues. The bandwidth and priority scheduling of subscriber data and service data are ensured separately through hierarchical scheduling. HQoS therefore prevents different subscribers and services from preempting each other's bandwidth.
[Figure 3-20: HQoS application networking. Voice, video, and Internet services from a NodeB, eNodeBs, and a GSM BTS are backhauled over FE/GE links through a regional packet network to the RNC/aGW; HQoS applies per-service CIR/PIR limits (for example, CIR = PIR = 20 Mbit/s) along the way.]
As shown in Figure 3-20, the HQoS technology schedules the Ethernet services that the OptiX RTN 380A/380AX transmits at five levels, finely controlling the service quality of different subscriber data and service data.
l Level 5: subdivides the services of a subscriber into voice, video, Internet traffic, and
others. Controls the bandwidth of each service type of the subscriber.
l Level 4: identifies each subscriber and controls the bandwidth of each subscriber.
l Level 3: identifies each subscriber group and controls the bandwidth of each subscriber
group. (For example, the subscribers using different types of base stations can form
different subscriber groups.)
l Level 2: limits the rate of each queue at an egress port.
l Level 1: limits the rate of each egress port.
[Figure: HQoS model for Native Ethernet services (UNI to NNI). Ethernet packets from users A, B, and C undergo simple and complex traffic classification into the eight priority queues (CS7, CS6, EF, AF4-AF1, BE) at each V-UNI, then V-UNI group scheduling, and finally port egress. HQoS configuration points: apply the DS domain, the port policy, and the V-UNI egress policy; limit the bandwidth for the V-UNI group; apply the DS domain policy. HQoS technologies: ACL, CAR, CoS, and DS mapping in the ingress direction; queue scheduling, congestion avoidance, and DS mapping in the egress direction; traffic shaping at each level.]
Figure 3-22 HQoS model for QinQ link-carried Native Ethernet services (UNI to NNI)
[Diagram: Ethernet packets from users A, B, and C are classified (simple and complex traffic classification) into the eight priority queues, scheduled onto the QinQ link, and leave the NNI as QinQ packets. Configuration points: apply the DS domain, the port policy, the QinQ policy, and the DS domain policy. Technologies: ACL, CAR, CoS, and DS mapping in the ingress direction; queue scheduling, congestion avoidance, and DS mapping in the egress direction; traffic shaping.]
Figure 3-23 HQoS model for QinQ link-carried Native Ethernet services (NNI to UNI)
[Diagram: QinQ packets entering at the NNI are classified into the eight priority queues and scheduled through the V-UNI and V-UNI group levels toward the UNI.]
Figure 3-24 HQoS model for ETH PWE3 services (ingress node)
[Diagram: at the ingress node, Ethernet packets from users A, B, and C are classified into the eight priority queues and mapped to PWs, and the PWs are carried in a tunnel; the packets leave the NNI as MPLS packets. HQoS application points: ingress port, PW, tunnel, and egress port, with CAR, ACL, and ingress DS mapping at the ingress, and queue scheduling, congestion avoidance, egress DS mapping, and traffic shaping toward the egress.]
Figure 3-25 HQoS model for ETH PWE3 services (transit node)
[Diagram: at the transit node, MPLS packets pass from the ingress NNI to the egress NNI with simple traffic classification, DS mapping in the ingress and egress directions, and traffic shaping at the egress port.]
Figure 3-26 HQoS model for ETH PWE3 services (egress node)
[Diagram: at the egress node, MPLS packets entering at the NNI are classified into the eight priority queues and scheduled through the V-UNI and V-UNI group levels before leaving the UNI as Ethernet packets. Configuration points: apply the DS domain and the V-UNI egress policy, limit the bandwidth for the V-UNI group, and apply the DS domain policy at the port.]
3.5.2 Principles
This section describes the hierarchical scheduling model in the HQoS technology.
In the HQoS technology, subscribers and services are classified into queues with different priorities on the subscriber access side and carrier network side for scheduling. These queues include the flow queue (FQ), subscriber queue (SQ), subscriber group queue (GQ), class queue (CQ), and target port (TP), arranged in ascending order of granularity. This method precisely controls the bandwidth and priority of the various services of many subscribers on the subscriber access side and carrier network side.
NOTE
In the HQoS technology that the OptiX RTN 380A/380AX supports, FQs correspond to the service flows (for example, voice and video flows) of a subscriber; SQs correspond to subscribers, and one subscriber corresponds to one V-UNI port or QinQ link; GQs correspond to subscriber groups, and one subscriber group corresponds to one V-UNI group.
FQ
An FQ buffers the data flow with a certain priority for a subscriber. The maximum bandwidth of an FQ is limited by shaping. Each subscriber data flow can be divided into eight priorities. That is, each subscriber can use a maximum of eight FQs. An FQ cannot be shared by different subscribers.
FQ attributes include:
SQ
Each SQ represents a subscriber (for example, a VLAN). The CIR and PIR can be configured
for an SQ.
l Each SQ includes eight FQs that share the SQ bandwidth. If some FQs do not transmit
services, the other FQs can use the bandwidth not in use.
l An SQ can schedule the eight FQs it contains, each of which supports setting of SP or
WRR.
– By default, FQs with priorities BE, EF, CS6, and CS7 use the SP scheduling
algorithm.
– By default, FQs with priorities AF1, AF2, AF3, and AF4 use the WRR scheduling
algorithm. The default weights of these queues are 1:1:1:1.
GQ (Group Queue)
Multiple subscribers can be mapped into a GQ. For example, all SQs that share the same bandwidth or all Gold-level SQs can be mapped into a GQ. A GQ can bind multiple SQs, but an SQ can be mapped into only one GQ.
1. The DRR algorithm is used to schedule the traffic lower than the CIR for SQs.
2. If there is still remaining bandwidth, the DRR algorithm is used to schedule the traffic
higher than the CIR but lower than the PIR (that is, the traffic at the EIR). The SP
algorithm is used to schedule the traffic higher than the CIR but lower than the EIR. The
traffic at the CIR is always first guaranteed and the traffic higher than the PIR is
discarded. If a GQ obtains the PIR, each SQ in the GQ is guaranteed to obtain the CIR or
even the PIR.
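The two-pass scheduling above can be sketched as a simple bandwidth-grant calculation: committed traffic up to each SQ's CIR is served first, and any bandwidth left then serves excess traffic up to each SQ's PIR, with traffic above the PIR discarded. This is an illustration only, not the device's DRR implementation; the SQ names and rates are hypothetical.

```python
# Sketch of the two-pass GQ scheduling described above. Rates are in
# Mbit/s purely for illustration; the names are hypothetical.
def schedule_gq(sqs, gq_pir):
    """sqs: {name: (cir, pir, demand)}; returns bandwidth granted per SQ."""
    grant = {name: 0 for name in sqs}
    remaining = gq_pir
    # Pass 1: guarantee each SQ's traffic up to its CIR.
    for name, (cir, pir, demand) in sqs.items():
        g = min(cir, demand, remaining)
        grant[name] += g
        remaining -= g
    # Pass 2: share what is left among the SQs up to their PIRs
    # (simple round-robin here in place of DRR).
    while remaining > 0:
        progressed = False
        for name, (cir, pir, demand) in sqs.items():
            if remaining > 0 and grant[name] < min(pir, demand):
                grant[name] += 1
                remaining -= 1
                progressed = True
        if not progressed:
            break                     # traffic above the PIR is discarded
    return grant
```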
CQ
A port has eight priority queues, which are called CQs.
CQ attributes include:
Port
Each port contains eight CQs, and the SP+WRR algorithm is used to schedule the traffic between CQs. The PIR can be set to limit the traffic rate at a port.
Example
To aid understanding, the following example explains FQs, SQs, GQs, and their
relationships.
Assume that there are 20 families in a building, and each family purchases 20 Mbit/s of
bandwidth. Therefore, one SQ is created for each family, with the CIR and PIR both set to 20
Mbit/s. After VoIP and IPTV services are deployed, operators provide a new bandwidth
package (20 Mbit/s) to meet these families' requirements for Internet access, VoIP, and IPTV
services.
The HQoS is configured as follows:
l Three FQs are configured, corresponding to three service types: VoIP, IPTV, and HSI.
l Twenty SQs are configured, corresponding to 20 families. The CIR and PIR are
configured for each SQ, with the CIR guaranteeing a bandwidth and the PIR limiting the
maximum bandwidth.
l A GQ is configured for the entire building to aggregate bandwidth of the 20 subscribers,
which can be considered as a subscriber group. The total bandwidth of the 20 subscribers
is the PIR of the GQ. Therefore, the 20 subscribers are independent of each other but
their total bandwidth is limited by the PIR of the GQ.
The hierarchical model functions as follows:
l FQs classify services and control service types of subscribers and bandwidth allocation
to various services.
l SQs limit the traffic rate on a per-subscriber basis.
l GQs limit the rate of the 20 SQs based on a subscriber group.
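The hierarchy in this example can be expressed as a simple data model. This is a hypothetical illustration of the configuration relationships, not an NMS data structure; all names and the GQ PIR default are illustrative.

```python
# Hypothetical data model for the building example above: 20 families
# (SQs) with three FQs each, aggregated into one GQ for the building.
FQ_SERVICES = ["VoIP", "IPTV", "HSI"]

def build_hqos_model(num_families=20, sq_cir=20, sq_pir=20, gq_pir=400):
    """All rates are in Mbit/s; default values follow the example text."""
    gq = {"name": "building", "pir": gq_pir, "sqs": []}
    for i in range(num_families):
        gq["sqs"].append({
            "name": f"family-{i + 1}",
            "cir": sq_cir,   # guaranteed bandwidth per family
            "pir": sq_pir,   # maximum bandwidth per family
            "fqs": [{"service": s} for s in FQ_SERVICES],
        })
    return gq

model = build_hqos_model()
print(len(model["sqs"]), len(model["sqs"][0]["fqs"]))  # 20 3
```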
3.5.4 Specifications
This section lists the hierarchical quality of service (HQoS) specifications that this product
supports.
Table 3-15 lists the HQoS specifications that this product supports.
Feature Updates
Version Description for RTN 380A Description for RTN 380AX
Self-limitations
(Figure: HQoS application on a mobile bearer network carrying NodeB-RNC and eNodeB-aGW
services. The HQoS scheduling hierarchy has five levels: Level 5: FQ; Level 4: SQ; Level 3:
GQ; Level 2: CQ; Level 1: Port.)
GQ (by user group): One user group corresponds to one V-UNI group.
l V-UNI-based rate limit: the CIR value must be equal to or higher than the total CIR
values of all V-UNIs in the V-UNI group, and the PIR value must be equal to or higher
than the PIR value of any V-UNI in the V-UNI group.
(Figure: HQoS application example. HQoS is enabled on the port. For Customer 2 (SQ:
CIR=60M, PIR=100M), three FQs carry Voice (EF, CIR=10M, PIR=10M), Video (AF1,
CIR=20M), and Internet (AF2, CIR=30M) services. Services from different users are carried
over different QinQ links.)
3.6.1 Introduction
This section introduces ETH OAM.
Application
l Ethernet service OAM focuses on end-to-end maintenance of Ethernet links. Based on
services, Ethernet service OAM manages each network segment that a service traverses.
l Ethernet port OAM focuses on point-to-point maintenance of Ethernet links between two
directly connected devices in the last mile. Ethernet port OAM, independent of services,
performs OAM automatic discovery, link performance monitoring, remote loopback
detection, and local loopback detection to maintain a point-to-point Ethernet link.
Remote loopback: The OAM entity at the local end transmits the loopback control OAM PDU
to the remote OAM entity to request a loopback. The loopback data is analyzed for fault
locating and link performance testing. In a remote loopback, the initiator transmits and
receives a number of packets. By comparing the two numbers, you can check the bidirectional
performance of the link between the initiator and the responder.
3.6.3 Specifications
This section provides the ETH OAM specifications that OptiX RTN 380A/380AX supports.
Table 3-21 ETH OAM specifications that OptiX RTN 380A/380AX supports
Item Specifications
OAM operation CC
LB
LT
AIS activation
LM
DM
Self-limitations
Item Description
MEP and remote MEP An MEP responds only to OAM operations initiated by
other MEPs belonging to the same MA. For the OptiX RTN
380A/380AX, users need to configure an MEP that will
initiate OAM operations as a remote MEP.
3.7.1 Introduction
This section defines bandwidth notification and describes its purpose.
Definition
Bandwidth notification enables the OptiX RTN 380A/380AX to monitor air-interface
bandwidth and send ITU-T Y.1731-compliant packets carrying bandwidth information to
interconnected routers. The routers perform QoS processing according to air-interface
bandwidth changes.
Purpose
If adaptive modulation (AM) is enabled for a microwave link, the air-interface bandwidth of
the link dynamically changes according to modulation scheme shifts. In the scenario in which
the OptiX RTN 380A/380AX interconnects with a router and functions as a Layer 2 device to
transparently transmit services, if the air-interface bandwidth decreases but the router
transmits services at the original rate, packet loss may occur due to insufficient link
bandwidth. Bandwidth notification can be enabled on the OptiX RTN 380A/380AX so that
the peer router can be informed of dynamic air-interface bandwidth changes and can
distribute or groom traffic based on the changes.
Packet Format
The transmission of bandwidth notification packets can be periodic or triggered by a
bandwidth change. A bandwidth notification packet includes information elements such as the
MEG level, air-interface bandwidth, and transmission interval. Bandwidth notification
packets are transmitted in the direction reverse to that in which continuity check messages
(CCMs) are transmitted.
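The two transmission triggers described above (periodic, or driven by a bandwidth change) can be sketched as follows. This is a behavioral illustration only; the class and field names are hypothetical, not device interfaces.

```python
import time

class BandwidthNotifier:
    """Sketch: emit a bandwidth notification (BN) either periodically or
    immediately when the measured air-interface bandwidth changes."""

    def __init__(self, interval_s=1.0):
        self.interval_s = interval_s
        self.last_bw = None     # last notified bandwidth
        self.last_sent = 0.0    # timestamp of the last notification

    def poll(self, current_bw, now=None):
        now = time.monotonic() if now is None else now
        changed = current_bw != self.last_bw
        periodic = (now - self.last_sent) >= self.interval_s
        if changed or periodic:
            self.last_bw = current_bw
            self.last_sent = now
            # A real packet would also carry the MEG level and interval.
            return {"bandwidth": current_bw, "interval_s": self.interval_s}
        return None

bn = BandwidthNotifier(interval_s=1.0)
print(bn.poll(400, now=0.0))  # first poll: bandwidth newly learned, sent
print(bn.poll(400, now=0.5))  # unchanged and period not elapsed: None
print(bn.poll(200, now=0.6))  # modulation shifted down: sent immediately
```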
Networking Scenarios
Bandwidth notification scenarios supported by OptiX RTN 380A/380AX are described as
follows:
l Single Hop, Single Microwave Link
As shown in Figure 3-32, a single microwave link exists between NE 1 and NE 2; NE 2
interconnects with a router through an Ethernet port.
a. The microwave port on NE 2 detects that the air-interface bandwidth is B1.
b. NE 2 transmits a bandwidth notification packet (BN-1) carrying the bandwidth
information to the router.
c. The router determines whether to perform QoS processing according to the
bandwidth information carried in the BN-1 packet.
3.7.2 Principles
The OptiX RTN 380A/380AX monitors changes in the air-interface bandwidth of microwave
links and informs interconnected routers of the bandwidth changes through ITU-T Y.1731-
compliant packets.
As shown in Figure 3-35, egress MEPs MEP1 and MEP2 are configured on the OptiX RTN
380A/380AX's (NE 2) microwave port and the peer router, respectively. Bandwidth
notification is enabled on the microwave port. The following describes how bandwidth
notification is implemented:
ITU-T Y.1731: OAM functions and mechanisms for Ethernet based networks
3.7.4 Specifications
This section lists the bandwidth notification specifications that the OptiX RTN 380A/380AX
supports.
Table 3-24 Bandwidth notification specifications that the OptiX RTN 380A/380AX supports
Item Specifications
MEP type Egress MEP. Only an egress MEP can initiate bandwidth
notification.
Feature Updates
Version Description for RTN 380A Description for RTN 380AX
Self-limitations
MEP l Only an egress maintenance end point (MEP) can initiate bandwidth
notification. A maintenance entity group intermediate point (MIP)
transparently transmits bandwidth notification packets.
l An MEP is specific to a service, whereas bandwidth notification
functions by port or protection group. If a physical link transmits
multiple services, only the MEP on one service can be enabled with
bandwidth notification.
l The OptiX RTN 380A/380AX's Client MEP Level and VLAN ID
must be the same as those on the peer router.
Item Description
Warm reset: If bandwidth notification packets are periodically transmitted and the
OptiX RTN 380A/380AX does not transmit bandwidth notification packets to the peer router
within 3.5 consecutive periods, the router will not perform rate limitation according to the
air-interface bandwidth.
Table 3-26 Dependencies and limitations between bandwidth notification and other features
Feature Description
1+1/non-load sharing, LAG/PLA:
l The OptiX RTN 380A/380AX sums up the total bandwidth of all member links and
notifies the peer router of the total bandwidth. Only an MEP is required on the main or
master port in a group.
l Microwave ports support only non-load sharing LAG but do not support load sharing
LAG.
ETH OAM Bandwidth notification packets comply with ITU-T Y.1731. Bandwidth
notification is not affected by the ETH OAM protocol (IEEE 802.1ag or
ITU-T Y.1731) type and is independent of connectivity check results.
4 MPLS Features
4.1.1 Introduction
This section provides the definition of MPLS and describes its purpose.
Definition
Based on IP routes and control protocols, MPLS is a connection-oriented switching
technology for the network layer. MPLS uses short and fixed-length labels at different link
layers for packet encapsulation, and switches packets based on the encapsulated labels.
MPLS has two planes: control plane and forwarding plane. The control plane is
connectionless, featuring powerful and flexible routing functions to meet network
requirements for a variety of new applications. This plane is mainly responsible for label
distribution, setup of label forwarding tables, and setup and removal of label switched paths
(LSPs). The forwarding plane is also called the data plane. It is connection-oriented and
supports Layer 2 networks such as Ethernet networks. The forwarding plane adds or deletes
IP packet labels, and forwards the packets according to the label forwarding table.
Purpose
In the packet domain, MPLS helps to set up MPLS tunnels to carry PWs that transmit a
variety of services on a PSN in an end-to-end manner. These services include only Ethernet
services. Figure 4-1 shows the typical MPLS application in the packet domain. In the figure,
the services between the NodeBs and RNCs are transmitted by PW1 and PW2 carried by the
MPLS tunnel.
On an MPLS network, each LSR has a unique identifier; that is, a 16-byte LSR ID. An LSR
ID can be based on the IPv4 address or IPv6 address.
NOTE
Currently, the OptiX RTN 380A/380AX supports only LSR IDs based on the IPv4 address.
4.1.1.3 LSP
On an MPLS network, LSRs adopt the same label switching mechanism to forward packets
that have the same characteristics. The packets with the same characteristics are called a
forwarding equivalence class (FEC). The path along which an FEC travels through the MPLS
network is called the LSP.
Based on relative positions of LSRs on an LSP, neighboring LSRs are called upstream and
downstream LSRs. As shown in Figure 4-4, the downstream of LSR A is LSR B; the
upstream of LSR B is LSR A, the downstream of LSR B is LSR C; the upstream of LSR C is
LSR B, and the downstream of LSR C is LSR D; and the upstream of LSR D is LSR C.
LSP Types
LSPs are classified into various types depending on different classification criteria. For
details, see Table 4-1.
NOTE
The OptiX RTN 300 does not support the dynamic LSP setup mode or the uniform LSP mode.
l Destination address: It is the MAC address of the opposite port learned using the
Address Resolution Protocol (ARP).
l Source address: It always takes the MAC address of the port.
l 802.1q header: The OptiX RTN 380A/380AX determines whether an Ethernet frame at
an egress Ethernet port carries the 802.1q header, based on the TAG attribute of the port.
If the TAG attribute is Access, the Ethernet frame does not carry the 802.1q header. If the
TAG attribute is Tag aware, the VLAN ID in the 802.1q header of an MPLS packet is
the tunnel VLAN ID that is set on the NMS. If the tunnel VLAN ID is absent, the VLAN
ID in the 802.1q header is the default VLAN ID (that is, 1) at the NNI port that transmits
the MPLS packet.
l Length/Type: It has a fixed value of 0x8847. After finding that Length/Type in a packet
is 0x8847, the OptiX RTN 380A/380AX considers that the packet is an Ethernet frame
carrying an MPLS packet. An NE does not check Length/Type in MPLS packets at
ingress ports based on the TAG attribute and the VID of the label switched path (LSP).
l MPLS packet: It consists of the MPLS label and Layer 3 user packet. For details on its
format, see 4.1.1.5 MPLS Label.
l Frame check sequence (FCS): It is used to check whether the Ethernet frame is correct.
NOTE
ARP: It is used to translate an IP address (logical address) at the network layer into a MAC address
(physical address) at the data link layer. When the TAG attribute of a UNI port is Tag aware (default), an
ARP packet that is transmitted or received through an NNI port has a VLAN ID that is the default
VLAN ID of the NNI port. Therefore, the TAG attribute and default VLAN ID of an NNI port must be
the same as those of a peer NNI port, respectively.
FE, GE, and microwave ports all use Ethernet frames to bear MPLS packets.
Label Format
The OptiX RTN 380A/380AX uses Ethernet frames to bear MPLS packets. Figure 4-6 shows
the format of the MPLS label.
Label Stack
A label stack refers to an ordered set of labels. MPLS allows a packet to carry multiple labels.
The label next to the Layer 2 header is called the top label or outer label, and the label next to
the IP header is called the bottom label or inner label. Theoretically, an unlimited number of
MPLS labels can be stacked.
The label stack is organized as a Last In, First Out stack. The top label is always processed
first.
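The Last In, First Out behavior of the label stack can be sketched with a minimal packet model. The class is a hypothetical illustration: index 0 holds the top (outer) label, which is always pushed, read, and popped first.

```python
class MplsPacket:
    """Sketch of an MPLS label stack: a last-in, first-out list where
    index 0 (the top/outer label) is always processed first."""

    def __init__(self, payload):
        self.payload = payload
        self.stack = []  # top label at index 0

    def push(self, label):
        self.stack.insert(0, label)  # new label becomes the outer label

    def pop(self):
        return self.stack.pop(0)     # outer label is removed first

    def top(self):
        return self.stack[0] if self.stack else None

pkt = MplsPacket(payload=b"ip-packet")
pkt.push(20)   # inner (bottom) label, next to the IP header
pkt.push(100)  # outer (top) label, next to the Layer 2 header
print(pkt.top())  # 100: the top label is processed first
print(pkt.pop())  # 100
print(pkt.top())  # 20
```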
Label Space
The value range for label distribution is called a label space. Two types of label space are
available:
l Per-platform label space
An LSR uses one label space; that is, the labels are unique per LSR.
l Per-interface label space
Each interface on an LSR uses a label space; that is, the labels are unique per interface,
but can be repeated on different interfaces.
The OptiX RTN 380A/380AX supports only the per-platform (global) label space. For an
OptiX RTN 380A/380AX NE, all ingress labels must be unique to each other, and all egress
labels must also be unique to each other.
NOTE
l If the two LSPs in Figure 4-8 carry MPLS packets with the same source MAC address (system
MAC address) and are connected to the Layer 2 network through two ports and if the Layer 2
network uses a bridge to transmit packets, the two LSPs need to carry different VLAN IDs and the
Layer 2 network needs to use the IVL mode to prevent network flapping.
l When the VLAN subinterface function is enabled, the ARP packets sent to the next-hop MPLS node
carry the same VLAN ID as that carried by the LSPs and therefore can traverse the Layer 2 network.
4.1.2 Principles
On an MPLS network, LSRs enable the packets with the same characteristics to be
transmitted on one LSP based on a unified forwarding mechanism.
The ingress, transit, and egress nodes handle an MPLS packet as follows.
Ingress node:
1. Receives a packet, and finds the LSP ID based on the FEC of the packet.
2. Finds the NHLFE based on the LSP ID and then obtains the information such as
outgoing interface, next hop, outgoing label, and operation. The label operation for an
ingress node is Push.
3. Pushes an MPLS label to the packet, and forwards the encapsulated MPLS packet to the
next hop.
Transit node:
1. Finds the LSP ID based on the label value of the MPLS packet received at the incoming
interface.
2. Finds the NHLFE based on the LSP ID and then obtains the information such as
outgoing interface, next hop, outgoing label, and operation. The label operation for a
transit node is Swap.
3. The outgoing label value of the NHLFE is 21. Thus, NE B replaces the old label value of
20 with a new label value of 21 and then sends the MPLS packet carrying the new label
to the next hop.
NOTE
If the value of the new label is equal to or greater than 16, the label operation is Swap. If the value of the
new label is less than 16, this label is special and needs to be processed according to the specific value of
the label.
(Example NHLFE entry at the egress node: incoming label 101; operation Pop.)
Egress node:
1. Finds the LSP ID based on the label value of the MPLS packet received at the incoming
interface.
2. Finds the NHLFE based on the LSP ID and then determines that the label operation is
Pop.
3. Pops the MPLS label and forwards the MPLS packet.
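The three label operations above (Push at the ingress node, Swap at a transit node, Pop at the egress node) can be sketched with a hypothetical NHLFE table. LSP IDs, label values, and next hops below are illustrative.

```python
# Hypothetical NHLFE table keyed by LSP ID, modeling the three node roles.
NHLFE = {
    "lsp-1-ingress": {"op": "push", "out_label": 20, "next_hop": "NE-B"},
    "lsp-1-transit": {"op": "swap", "out_label": 21, "next_hop": "NE-C"},
    "lsp-1-egress":  {"op": "pop",  "out_label": None, "next_hop": "client"},
}

def forward(lsp_id, labels):
    """labels: current label stack, top first. Returns (labels, next_hop)."""
    entry = NHLFE[lsp_id]
    if entry["op"] == "push":      # ingress: add a label
        labels = [entry["out_label"]] + labels
    elif entry["op"] == "swap":    # transit: replace the top label
        labels = [entry["out_label"]] + labels[1:]
    elif entry["op"] == "pop":     # egress: remove the top label
        labels = labels[1:]
    return labels, entry["next_hop"]

labels, hop = forward("lsp-1-ingress", [])      # pushed label 20, to NE-B
labels, hop = forward("lsp-1-transit", labels)  # 20 swapped for 21, to NE-C
labels, hop = forward("lsp-1-egress", labels)   # label popped, to the client
print(labels, hop)  # [] client
```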
4.1.4 Specifications
This section provides the specifications of MPLS.
Item Specifications
Feature Updates
Version Description
Self-limitations
Item Description
Services carried on static MPLS tunnels: Static MPLS tunnels carry ETH PWE3 services.
Table 4-7 Dependencies and limitations between MPLS and other features
Feature Description
Planning guidelines for Ethernet ports or IF_ETH ports functioning as MPLS ports are as
follows:
l The port mode must be set to Layer 3 for Ethernet ports or IF_ETH ports.
l The MTU preset for an Ethernet port must be greater than the maximum length of an
Ethernet frame that can be transmitted. It is recommended that you set the MTU to 1620.
l The TAG attribute for an Ethernet port or IF_ETH port is usually set to Tag Aware. After
the setting, tagged Ethernet frames bear MPLS packets and their VLAN IDs are the
default VLAN ID (1) set for the Ethernet port or IF_ETH port. If the opposite MPLS
equipment requires untagged Ethernet frames to bear MPLS packets, the TAG attribute
should be set to Access for an Ethernet port or IF_ETH port. In general cases, MPLS
equipment has no requirement for the type of Ethernet frames bearing MPLS packets.
Planning guidelines for VLAN sub-interfaces functioning as MPLS ports are as follows:
l The port mode must be set to Hybrid for an Ethernet port or IF_ETH port on which a
VLAN sub-interface is configured.
l The MTU preset for an Ethernet port on which a VLAN sub-interface is configured must
be greater than the maximum length of an Ethernet frame that can be transmitted. It is
recommended that you set the MTU to 1620.
l VLAN IDs of all VLAN sub-interfaces on a physical port must be different from the
VLAN IDs of native Ethernet services transmitted over the physical port.
l For a point-to-point MPLS link, plan 30-bit IP addresses for its MPLS ports if possible.
In this case, four host addresses are available on the network segment. Among the four
host addresses, there is a broadcast address and a network address. Allocate the
remaining two host addresses to the MPLS ports at both ends of the point-to-point MPLS
link. For a point-to-multipoint MPLS link, plan shorter IP addresses for MPLS ports
based on the number of MPLS links.
If you use the U2000 to configure MPLS tunnels and PWE3 services in end-to-end mode, you can use labels
that the U2000 automatically allocates and do not apply the following guidelines.
l MPLS labels and PW labels on an NE share label resources. Therefore, MPLS labels and
PW labels must be planned in a uniform manner.
l A bidirectional MPLS tunnel must be allocated two MPLS labels.
If services from NEs at the same layer are first converged to an NE at the same layer and then
transmitted to a higher-layer NE, the higher-layer NE is not included in its subordinate lower-
layer label subnet.
– The label range planned for a higher-layer subnet must cover the label ranges
planned for its subordinate lower-layer subnets.
– Each subordinate lower-layer subnet of a higher-layer subnet has a different label
range.
– Within a subnet, all hops of an MPLS link can use the same MPLS label or use
different MPLS labels. It is recommended that all hops of an MPLS link within a
subnet use the same MPLS label.
– A label space is shared within a subnet. This means that each PWE3 service or each
MPLS tunnel within a subnet uses a different label.
– An MPLS tunnel can have a different label when it enters another subnet.
NOTE
If an MPLS tunnel is originated and terminated within the same subnet, the MPLS tunnel label
remains the same. If an MPLS tunnel is terminated in a higher-layer subnet, the MPLS tunnel
label also remains the same because the label range of the higher-layer subnet covers the label
range of the lower-layer subnet.
– For each subnet, a label range should be reserved for uncertain or special services
(for example, services traversing different lower-layer subnets).
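Two of the label-planning rules above (a higher-layer subnet's range must cover its subordinate lower-layer ranges, and sibling lower-layer ranges must not overlap) can be checked mechanically. The following is a planning-aid sketch with illustrative names, not an NMS function.

```python
def check_label_plan(higher, lowers):
    """higher: (lo, hi) label range of a higher-layer subnet.
    lowers: list of (lo, hi) ranges of its subordinate lower-layer subnets.
    Returns a list of violation strings (empty if the plan is valid)."""
    problems = []
    for i, (lo, hi) in enumerate(lowers):
        # Rule 1: the higher-layer range must cover each lower-layer range.
        if not (higher[0] <= lo and hi <= higher[1]):
            problems.append(f"subnet {i}: range not covered by higher layer")
        # Rule 2: sibling lower-layer ranges must be disjoint.
        for j, (lo2, hi2) in enumerate(lowers[i + 1:], start=i + 1):
            if lo <= hi2 and lo2 <= hi:  # closed-interval overlap test
                problems.append(f"subnets {i} and {j}: label ranges overlap")
    return problems

# Valid plan: higher layer 16-1000 covering two disjoint lower ranges.
print(check_label_plan((16, 1000), [(16, 499), (500, 999)]))  # -> []
# Invalid plan: the two lower-layer ranges overlap.
print(check_label_plan((16, 1000), [(16, 600), (500, 999)]))
```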
4.1.8 FAQs
This section answers FAQs about MPLS tunnels based on static LSPs.
Question: Does the OptiX RTN 300 support dynamic MPLS tunnels?
Answer: The OptiX RTN 300 does not support dynamic MPLS tunnels.
4.2.1 Introduction
This section describes basic information of MPLS-TP OAM.
Definition
The Internet Engineering Task Force (IETF) and International Telecommunication Union-
Telecommunication Standardization Sector (ITU-T) have defined MPLS-TP for how MPLS
applies to transmission of packet services on transport networks. MPLS-TP is compatible
with the existing MPLS standards.
MPLS-TP has the following features:
l MPLS can be deployed on existing transport networks, which are operated/maintained
using the existing transport technologies.
l Paths for transmitting packet services can be predicted.
MPLS-TP OAM is defined in MPLS-TP and was developed based on the following
techniques:
l Bidirectional forwarding detection (BFD)
l Techniques specified in ITU-T Y.1731
NOTE
This section covers only MPLS-TP OAM that was developed based on ITU-T Y.1731.
Purpose
ITU-T Y.1731-compliant MPLS-TP OAM applies to most data communication equipment
and packet switching equipment, and therefore can provide end-to-end OAM for PSNs
consisting of data communication equipment and packet switching equipment.
Equipment with MPLS-TP OAM functionality can meet carrier-class data transmission needs.
ME
MEs represent the entities that require management and are the maintenance points between
two MEPs. All MPLS-TP OAM operations are performed based on MEs.
MEG
A MEG includes different MEs that satisfy the following conditions:
l MEs in a MEG exist in the same management domain.
l MEs in a MEG have the same MEG level.
l MEs in a MEG belong to the same connection.
A MEG ID in an MPLS-TP OAM packet identifies a MEG. Three MEG ID formats are
available:
An ICC-based MEG ID consists of two subfields: the ICC followed by a unique MEG
ID code (UMC). The ICC consists of 1 to 6 left-justified characters. A unique ICC is
assigned to a network carrier and maintained by the ITU-T Telecommunication
Standardization Bureau (TSB). The UMC code immediately follows the ICC and
consists of 7 to 12 characters, with trailing NULLs, completing the 13-byte MEG ID
value.
Each MEG on a carrier network has a unique ID.
l IP format
The IP format is defined by Huawei.
In Table 4-8, Node IP Address refers to the Node ID of an NE. Smaller Node IP Address
or Smaller PW ID refers to the smaller one between the node IP addresses or PW IDs at
the source end and sink end. Bigger Node IP Address or Bigger PW ID refers to the
bigger one between the node IP addresses or PW IDs at the source end and sink end.
Sink end PW ID and Node IP Address must be configured separately.
An IP-based MEG ID consists of a node ID and a tunnel/PW ID, and is generated by the
system automatically.
l User-defined format
A user-defined MEG ID contains a maximum of 96 bits.
Flexible user-defined MEG IDs are used for achieving MPLS-TP OAM on networks that
comprise OptiX equipment and third-party equipment using proprietary MEG ID
formats.
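The ICC-based MEG ID construction described above (1-6 character ICC, 7-12 character UMC, NUL-padded to 13 bytes) can be sketched as follows; the ICC/UMC values in the usage line are hypothetical.

```python
def icc_meg_id(icc: str, umc: str) -> bytes:
    """Build a 13-byte ICC-based MEG ID: left-justified ICC immediately
    followed by the UMC, padded with trailing NUL bytes to 13 bytes."""
    if not 1 <= len(icc) <= 6:
        raise ValueError("ICC must be 1-6 characters")
    if not 7 <= len(umc) <= 12 or len(icc) + len(umc) > 13:
        raise ValueError("UMC must be 7-12 characters and fit in 13 bytes")
    return (icc + umc).encode("ascii").ljust(13, b"\x00")

meg_id = icc_meg_id("HUAWEI", "MEG0001")  # illustrative ICC and UMC values
print(len(meg_id), meg_id)  # 13 bytes; shorter combinations are NUL-padded
```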
l MEPs and MIPs are called maintenance points (MPs). An MP ID in an OAM packet
identifies an MP. Each MP in a MEG must have a unique MP ID.
If an ICC-based or user-defined MEG ID is used, an MP ID occupies two bytes in an
OAM protocol data unit (PDU). As the three most significant bits of the first byte take
the fixed value of 0, an MP ID actually uses 13 bits and ranges from 1 to 8191.
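The two-byte MP ID encoding described above can be shown directly: with the three most significant bits fixed to 0, only the low 13 bits carry the value, giving the 1-8191 range.

```python
def encode_mp_id(mp_id: int) -> bytes:
    """Encode an MP ID into two bytes; the three most significant bits
    are 0, so valid values use 13 bits and range from 1 to 8191."""
    if not 1 <= mp_id <= 8191:
        raise ValueError("MP ID must be in 1-8191")
    return mp_id.to_bytes(2, "big")  # top 3 bits are naturally 0 for <= 8191

def decode_mp_id(raw: bytes) -> int:
    return int.from_bytes(raw, "big") & 0x1FFF  # keep only the low 13 bits

print(encode_mp_id(8191).hex())   # '1fff': the largest valid MP ID
print(decode_mp_id(b"\x1f\xff"))  # 8191
```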
Fault management: RDI, AIS, CSF, LCK
Performance monitoring: LM, DM, TST
Table 4-10 describes MPLS-TP OAM functions and their application scenarios.
l MPLS label
An MPLS label is the first label encapsulated into an OAM PDU. The EXP field can be
set on an on-demand basis so OAM PDUs can be forwarded based on their priorities. In
LB/LT tests, the TTL field can be used to transmit TTL values.
l Generic associated channel label (GAL)
GAL always takes the value 13.
l Associated channel header (ACH)
ACH content complies with RFC 5586. The channel type field can be set on the NMS.
l OAM PDU
OAM PDU content complies with ITU-T Y.1731. An OAM PDU consists of a header
and a payload area. The header is shared by all OAM PDUs and the payload area is
specific to each OAM PDU.
An MPLS-TP PW OAM PDU includes an MPLS label, a PW label, a GAL label, an ACH
header, and OAM PDU payload.
OptiX RTN 380A/380AX supports OAM packets whose GAL value is 13.
l ACH header
ACH content complies with RFC 5586.
l OAM PDU
OAM PDU content complies with ITU-T Y.1731. An OAM PDU consists of a header
and a payload area. The header is shared by all OAM PDUs and the payload area is
specific to each OAM PDU.
NOTE
If OAM PDUs are encapsulated into PWs, only G-ACHs are required generally and GAL labels are not
required. If PWs do not have control words, GAL labels are required.
4.2.2 Principles
MPLS-TP OAM achieves fault management and performance monitoring using OAM frames
that are interacted between maintenance points.
4.2.2.1 CC
CC is used to detect unidirectional connectivity between any pair of MEPs in MEGs.
A pair of MEPs periodically transmit and receive CCM frames to achieve CC.
Figure 4-16 CC
As shown in Figure 4-16, CC-enabled MEP1 transmits CCM frames, and MEP2 in the same
MEG periodically receives the CCM frames from MEP1. If MEP2 does not receive a CCM
frame within an interval of 3.5 times MEP2's CCM transmission period due to a link failure,
MEP2 reports an LOCV alarm. The LOCV alarm clears after the faulty link recovers.
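The LOCV detection rule above (no CCM within 3.5 times the transmission period) can be sketched as a small monitor. The class and method names are illustrative, not device interfaces.

```python
class CcMonitor:
    """Sketch of CCM loss-of-continuity detection: if no CCM frame
    arrives within 3.5 x the CCM transmission period, report LOCV."""

    TIMEOUT_FACTOR = 3.5

    def __init__(self, ccm_period_s=1.0):
        self.timeout_s = self.TIMEOUT_FACTOR * ccm_period_s
        self.last_ccm = None
        self.locv = False

    def on_ccm(self, now):
        self.last_ccm = now
        self.locv = False  # the alarm clears once CCMs resume

    def check(self, now):
        if self.last_ccm is not None and now - self.last_ccm > self.timeout_s:
            self.locv = True  # LOCV alarm raised
        return self.locv

mep2 = CcMonitor(ccm_period_s=1.0)
mep2.on_ccm(now=0.0)
print(mep2.check(now=3.0))  # False: still within 3.5 periods
print(mep2.check(now=4.0))  # True: no CCM for more than 3.5 s, LOCV raised
mep2.on_ccm(now=4.5)
print(mep2.check(now=5.0))  # False: the faulty link recovered, alarm cleared
```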
4.2.2.2 RDI
A maintenance association end point (MEP), upon detecting a defect condition, notifies its
peer MEP of the defect condition. Upon receiving the notification, the peer MEP reports a
remote defect indicator (RDI) alarm.
RDI is a flag in the continuity check message (CCM) frame. It is sent to the peer MEP
through the reverse channel. The working principles are as follows:
l When the local MEP detects a link fault using the continuity check (CC) function, it sets
the RDI flag in a CCM frame to 1 and sends the frame to its peer MEP to notify the peer
MEP of the link fault.
l After the link fault is removed, the local MEP sets the RDI flag in a CCM frame to 0 and
sends the frame to its peer MEP to notify the peer MEP of the link fault removal.
NOTE
The local MEP transmits RDI frames to the peer MEP in the following scenarios:
l The local MEP detects OAM alarms such as LOCV, UNEXPMEG, UNEXPMEP, or
UNEXPPER.
l The local MEP receives AIS frames.
The following takes the local MEP detecting an LOCV alarm as an example to illustrate how
an RDI alarm is reported. As shown in Figure 4-17, MEP2 detects an LOCV alarm and
transmits an RDI frame to MEP1 through the reverse channel. After receiving the RDI frame,
MEP1 reports an RDI alarm.
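The RDI signaling just described reduces to a single flag carried in CCM frames. The following sketch models both ends; the frame fields and return strings are illustrative.

```python
def build_ccm(local_fault_detected: bool) -> dict:
    """Sketch: the RDI flag of an outgoing CCM frame mirrors whether the
    local MEP currently detects a fault (frame fields are illustrative)."""
    return {"type": "CCM", "rdi": 1 if local_fault_detected else 0}

def on_ccm_received(frame: dict) -> str:
    """Peer MEP behavior: raise or clear the RDI alarm per the flag."""
    return "RDI alarm raised" if frame["rdi"] else "RDI alarm cleared"

# MEP2 detects LOCV and signals it over the reverse channel; MEP1 reacts:
print(on_ccm_received(build_ccm(True)))   # RDI alarm raised
print(on_ccm_received(build_ccm(False)))  # RDI alarm cleared
```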
NOTE
For the process of how the local MEP transmits an RDI frame to the peer MEP after receiving an AIS
frame, see 4.2.2.3 AIS.
4.2.2.3 AIS
A server layer MEP, upon detecting a defect condition, transmits AIS frames to its client layer
MEs, so its client layer MEs suppress alarms following detection of the defect condition at the
server layer. Upon receiving an AIS frame, a client layer MEP reports an AIS alarm.
AIS is classified into tunnel AIS and PW AIS. Tunnel AIS and PW AIS are implemented in a
similar way. Therefore, the following describes only tunnel AIS triggered by a port failure.
As shown in Figure 4-18, MEP1 is created on label switched router (LSR) A, MEP4 is
created on LSR D, MIP2 is created on LSR B, and MIP3 is created on LSR C.
1. After tunnel 1 recovers, the ETH_LOS alarm at LSR B is cleared and LSR B stops
transmitting AIS frames in tunnel 3.
2. If no AIS frame is received within 3.5 consecutive detection periods, LSR D clears the
AIS alarm.
3. If the tunnel AIS is also cleared, LSR D transmits CCM frames with the RDI field being
0 in tunnel 6.
4. Upon receiving the CCM frame with the RDI field being 0, LSR A clears the tunnel RDI
alarm.
NOTE
If bit error detection is enabled on a port, an AIS alarm is also reported when the number of
detected bit errors exceeds the threshold.
4.2.2.4 LB
A loopback (LB) test is used to check bidirectional connectivity of links between a
maintenance association end point (MEP) and a maintenance association intermediate point
(MIP) or between a pair of MEPs.
1. The source MEP that initiates an LB test transmits a loopback message (LBM) frame to
the destination node (MEP or MIP). If the destination node is a MIP, a specific TTL
value must be specified. If the destination node is a MEP, the TTL value must be larger
than or equal to the number of hops between the source and destination MEPs. If the
TTL value is smaller than the number of hops, the LBM frame will be extracted and
discarded before it reaches the destination MEP.
2. After receiving the LBM frame, the destination node checks whether the destination MIP
or MEP ID contained in the LBM frame is the same as the local MIP or MEP ID. If yes
and the reverse channel is available, the destination node transmits a loopback reply
(LBR) frame back to the source MEP. If not, the destination node directly discards the
received LBM frame.
3. If the source MEP receives the LBR frame transmitted from the destination node within
the specified period of time, it considers that the destination node is reachable and the
LB test is successful.
NOTE
If both the TTL value and MIP or MEP ID are correctly set but the source MEP does not receive the
LBR frame within the specified period of time, the link is faulty and you can locate the faulty node with
reference to LT.
1. LSR A transmits an LBM frame with the TTL value being 2 and MIP ID being the MIP
ID of LSR C.
2. After the LBM frame reaches LSR B, LSR B decrements the TTL value in the LBM
frame by one and forwards the LBM frame to LSR C as a service frame because the TTL
value after decrement is not 0.
3. After the LBM frame reaches LSR C, LSR C decrements the TTL value in the LBM
frame by one and the TTL value after decrement becomes 0. At this time, LSR C
processes the LBM frame by comparing the MIP ID in the LBM frame with its local
MIP ID. If the two MIP IDs are the same, LSR C transmits an LBR frame back to LSR
A through the reverse channel. If the two MIP IDs are different, LSR C directly discards
the received LBM frame.
4. If LSR A receives the LBR frame transmitted from LSR C within the specified period of
time, it considers that LSR C is reachable and the LB test is successful.
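The TTL handling in the LB example above can be sketched as follows. The sketch models the MIP case, where an exact TTL is required so that the frame is extracted at the target node; node IDs are illustrative.

```python
def lb_test(path, target_id, ttl):
    """One loopback (LB) test. `path` lists the node IDs downstream of
    the source MEP, in order; `target_id` is the destination MIP/MEP ID
    carried in the LBM frame. Each hop decrements the TTL; the node where
    it reaches 0 extracts the frame and answers with an LBR only if the
    target ID matches its own (per the steps above)."""
    for node in path:
        ttl -= 1
        if ttl == 0:
            return node == target_id  # True: LBR returned, test succeeds
    return False  # TTL larger than the path: no node extracted the frame

# LSR A -> LSR B -> LSR C -> LSR D; the target MIP is on LSR C (2 hops):
print(lb_test(["B", "C", "D"], "C", ttl=2))  # True: LB test succeeds
print(lb_test(["B", "C", "D"], "C", ttl=1))  # False: discarded at LSR B
```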
4.2.2.5 LT
An LT test is achieved by a series of LB tests that are implemented from near to far. It is used
to obtain the adjacency relationship between a MEP and a MIP or between a pair of MEPs
and to locate the link or device fault between the two.
1. The source MEP initiates the first loopback (LB) test. It transmits a loopback message
(LBM) frame with the TTL value being 1 and destination MIP or MEP ID being the MIP
or MEP ID of the node that is the nearest to the source MEP. If the source MEP
receives the loopback reply (LBR) frame from the destination node of this LB test, it
considers that the first hop is reachable.
2. The source MEP initiates the second LB test. It transmits an LBM frame with the TTL
value being 2 and destination MIP or MEP ID being the MIP or MEP ID of the node that
is the second nearest to the source MEP. If the source MEP receives the LBR frame from
the destination node of this LB test, it considers that the second hop is reachable.
3. The source MEP repeats the preceding process until it finds that one hop is unreachable
or reaches the destination node of the LT test. Then the source MEP lists the reachable
nodes from near to far to obtain the farthest reachable path from the source MEP to the
destination node of the LT test.
1. LSR A first initiates an LB test to its nearest node LSR B by transmitting an LBM
frame with TTL 1 and SN 1. If LSR A receives an LBR frame from LSR B, LSR A
considers that the link between it and LSR B is normal, and increments the SN in the
LBM frame by one. If LSR A does not receive an LBR frame from LSR B, LSR A
considers that the link between it and LSR B is faulty.
2. LSR A then initiates an LB test to its second nearest node LSR C by transmitting an
LBM frame with TTL 2 and SN 2. If LSR A receives an LBR frame from LSR C, LSR
A considers that the link between LSR B and LSR C is normal, and increments the SN in
the LBM frame by one. If LSR A does not receive an LBR frame from LSR C, LSR A
considers that the link between LSR B and LSR C is faulty.
3. LSR A finally initiates an LB test to LSR D by transmitting an LBM frame with TTL 3
and SN 3. If LSR A receives an LBR frame from LSR D, LSR A considers that the link
between LSR C and LSR D is normal, and increments the SN in the LBM frame by one.
If LSR A does not receive an LBR frame from LSR D, LSR A considers that the link
between LSR C and LSR D is faulty.
4. LSR A lists the reachable nodes from near to far to obtain the path to LSR D.
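The hop-by-hop procedure above can be sketched as follows. This is an illustrative sketch only: the node names and the `link_ok` callback (which stands in for "an LBR frame was received within the timeout") are hypothetical, not part of the product implementation.

```python
# Sketch of an LT test: a series of LB tests implemented from near to far.

def lt_test(path_nodes, link_ok):
    """Probe path_nodes (the nodes after the source MEP) from near to far.

    link_ok(ttl) stands in for 'an LBR frame was received for the LB test
    with this TTL'; ttl is the 1-based hop index carried in the LBM frame.
    Returns the list of reachable nodes, stopping at the first faulty hop.
    """
    reachable = []
    for ttl, node in enumerate(path_nodes, start=1):
        # Each LBM frame carries TTL = hop index and the target MIP/MEP ID;
        # the SN increments by one after each successful LB test.
        if link_ok(ttl):
            reachable.append(node)   # LBR received: this hop is reachable
        else:
            break                    # no LBR: the fault lies on this hop
    return reachable

# Example matching the text: LSR A probes LSR B, LSR C, and LSR D, and the
# link between LSR C and LSR D is faulty.
links_up = {1: True, 2: True, 3: False}
print(lt_test(["LSR B", "LSR C", "LSR D"], lambda ttl: links_up[ttl]))
# → ['LSR B', 'LSR C']
```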
4.2.2.6 LM
LM is used to count lost packets on a tunnel or PW within a specified period of time.
NOTE
LM can be performed in two ways: dual-ended LM and single-ended LM. Currently, the OptiX OSN
equipment supports single-ended LM only. To learn about dual-ended LM, see ITU-T Y.1731, OAM
functions and mechanisms for Ethernet based networks.
Single-ended LM
Single-ended LM is used for on-demand OAM. That is, a single-ended LM test is manually
triggered. In this mode, a local MEP, within a specified period of time, periodically sends
packets with LM request (LMM) information to its opposite MEP, and receives packets with
LM reply (LMR) information from its opposite MEP.
NOTE
A maintenance intermediate point (MIP) transparently transmits packets with LMM and LMR
information, without the need to support LM.
The following uses MEP (PE1) as an example to illustrate the single-ended LM process. The same
process applies to MEP (PE2).
1. A local MEP (PE1) periodically sends an LMM frame to its opposite MEP (PE2). An
LMM frame contains the following values:
– TxFCf: value of local counter TxFCl at the time of LMM frame transmission
2. When receiving an LMM frame, PE2 transmits an LMR frame. An LMR frame contains
the following values:
– TxFCf: value of TxFCf copied from the LMM frame
– RxFCf: value of local counter RxFCl at the time of LMM frame reception
– TxFCb: value of local counter TxFCl at the time of LMR frame transmission
3. Upon receiving an LMR frame, PE1 uses the following values to make near-end and far-
end loss measurements:
– Frame loss (far-end) = |TxFCf[tc] - TxFCf[tp]| - |RxFCf[tc] - RxFCf[tp]|
– Frame loss (near-end) = |TxFCb[tc] - TxFCb[tp]| - |RxFCl[tc] - RxFCl[tp]|
NOTE
l TxFCf[tc], RxFCf[tc], and TxFCb[tc] represent the received LMR frame's TxFCf, RxFCf, and
TxFCb respectively. RxFCl[tc] represents the local counter RxFCl value at the time this LMR
frame was received, where tc is the reception time of the current LMR frame.
l TxFCf[tp], RxFCf[tp], and TxFCb[tp] represent the previous LMR frame's TxFCf, RxFCf, and
TxFCb respectively. RxFCl[tp] represents the local counter RxFCl value at the time the
previous LMR frame was received, where tp is the reception time of the previous LMR frame.
FLR
FLR is a measure of the packet loss ratio between two MEPs that belong to the same CoS
instance on a point-to-point connection. During the LM, a local MEP counts lost packets, and
records the total number of transmitted packets.
FLR is calculated as follows.
FLR = Frame loss/Total number of transmitted packets
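The loss and FLR arithmetic above can be sketched as follows; the counter samples are hypothetical values chosen for illustration.

```python
# Sketch of single-ended LM: far-end and near-end loss from two counter
# snapshots, taken at the current (tc) and previous (tp) LMR reception.

def single_ended_lm(cur, prev):
    """cur/prev: (TxFCf, RxFCf, TxFCb, RxFCl) sampled at tc and tp."""
    txf_c, rxf_c, txb_c, rxl_c = cur
    txf_p, rxf_p, txb_p, rxl_p = prev
    far_end = abs(txf_c - txf_p) - abs(rxf_c - rxf_p)    # lost toward the peer
    near_end = abs(txb_c - txb_p) - abs(rxl_c - rxl_p)   # lost toward the local MEP
    return far_end, near_end

far, near = single_ended_lm(cur=(1100, 1095, 2100, 2092),
                            prev=(1000, 1000, 2000, 2000))
tx_far = 1100 - 1000        # frames transmitted toward the peer in the interval
print(far, near, far / tx_far)   # → 5 8 0.05 (far-end loss, near-end loss, FLR)
```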
4.2.2.7 DM
Two-way DM is used to measure frame delay and frame delay variation of bidirectional data
frames on a link within a specified period of time.
NOTE
DM can be performed in two ways: two-way DM and one-way DM. Currently, the OptiX OSN
equipment supports two-way DM only. To learn about one-way DM, see ITU-T Y.1731, OAM functions
and mechanisms for Ethernet based networks.
Two-Way DM
Two-way DM is used for on-demand OAM. That is, a two-way DM test is manually
triggered. In this mode, a local MEP, within a specified period of time, periodically sends
packets with DM request (DMM) information to its opposite MEP, and receives packets with
DM reply (DMR) information from its opposite MEP.
NOTE
An MIP transparently transmits packets with DMM and DMR information, without the need to support
DM.
The following uses MEP (PE1) as an example to illustrate the two-way DM process. The same
process applies to MEP (PE2).
1. A local MEP (PE1) periodically sends a DMM frame to its opposite MEP (PE2). A
DMM frame contains the following values:
– TxTimeStampf: time of DMM frame transmission
2. When receiving a DMM frame, PE2 transmits a DMR frame. A DMR frame contains
the following values:
– TxTimeStampf: value of TxTimeStampf copied from the DMM frame
– RxTimeStampf: time of DMM frame reception
– TxTimeStampb: time of DMR frame transmission
3. Upon receiving a DMR frame, PE1 uses the following values to make frame delay
measurements:
– Frame delay = RxTimeb - TxTimeStampf (RxTimeb represents the reception time
of the DMR frame.)
This value contains the time the opposite node handles the DM packet, and serves
as input for frame delay variation measurement.
– Frame delay = (RxTimeb - TxTimeStampf) - (TxTimeStampb - RxTimeStampf)
This value does not contain the time the opposite node handles the DM packet, and
is more accurate.
FDV
FDV is a measure of the delay variations of service packets between two MEPs that belong to
the same CoS instance on a point-to-point connection. During the DM, a local MEP measures
frame delays, and records the maximum frame delay and minimum frame delay.
FDV is calculated as follows.
FDV = |Frame delay (max) - Frame delay (min)|
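The more accurate two-way delay formula and the FDV calculation can be sketched as follows; the timestamps are hypothetical values in microseconds, not measured data.

```python
# Sketch of two-way DM: frame delay excluding the peer's processing time,
# plus FDV over several DMM/DMR exchanges.

def two_way_delay(tx_f, rx_f, tx_b, rx_b):
    """tx_f = TxTimeStampf, rx_f = RxTimeStampf (DMM reception at the peer),
    tx_b = TxTimeStampb, rx_b = reception time of the DMR at the local MEP."""
    return (rx_b - tx_f) - (tx_b - rx_f)

# Hypothetical timestamps (microseconds) for three DMM/DMR exchanges.
samples = [(0, 480, 530, 1000), (2000, 2470, 2530, 3020), (4000, 4490, 4540, 4990)]
delays = [two_way_delay(*s) for s in samples]
fdv = max(delays) - min(delays)        # FDV = |max delay - min delay|
print(delays, fdv)                     # → [950, 960, 940] 20
```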
4.2.2.8 CSF
When an attachment circuit (AC) failure occurs, the Client Signal Fail (CSF) function enables
a local maintenance entity group end point (MEP) to notify its peer MEP of the failure. The
peer MEP then generates a CSF alarm.
In the PW OAM mechanism, upon detecting an AC failure, a CSF-enabled MEP sends PW
OAM CSF packets to its peer MEP. The peer MEP reports an MPLS_PW_CSF alarm upon
receiving the packets.
Local AC failures include:
l Failure that triggers an ETH_LOS alarm
l IEEE 802.3ah negotiation failure
l Failure that triggers a BD_STATUS alarm
As illustrated in Figure 4-23, MEP1 is the local MEP, and MEP2 is its peer MEP.
After the AC link between the NodeB and MEP1 fails:
1. MEP1 periodically sends PW CSF packets to MEP2 upon detecting a BD_STATUS
alarm, an ETH_LOS alarm, or an IEEE 802.3ah negotiation failure.
2. MEP2 reports an MPLS_PW_CSF alarm upon receiving the PW CSF packets.
After the AC link between the NodeB and MEP1 recovers:
1. MEP1 stops sending PW CSF packets.
2. If MEP2 does not receive any PW CSF packets within 3.5 consecutive periods of
transmitting PW CSF packets, MEP2 considers the AC link between the NodeB and
MEP1 recovered and clears the MPLS_PW_CSF alarm.
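The clearing rule at MEP2 can be sketched as follows. The 3.5-period timeout comes from the text above; the packet arrival times and the period value are hypothetical.

```python
# Sketch of the MPLS_PW_CSF clearing rule at the peer MEP (MEP2).

def csf_alarm_active(arrival_times, now, period):
    """The alarm stays raised while PW CSF packets keep arriving; it is
    cleared once no packet has arrived for 3.5 transmission periods."""
    if not arrival_times:
        return False
    return (now - max(arrival_times)) < 3.5 * period

arrivals = [0.0, 1.0, 2.0]    # PW CSF packets seen every second (hypothetical)
print(csf_alarm_active(arrivals, now=3.0, period=1.0))   # → True (still failing)
print(csf_alarm_active(arrivals, now=6.0, period=1.0))   # → False (AC recovered)
```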
4.2.2.9 LCK
When a local service-layer MEP is administratively locked and services are interrupted, the
locked signal function (LCK) enables the local service-layer MEP to notify the remote client-
layer MEP and then LOC alarms at the client layer are suppressed.
The LCK function is applicable to the tunnel layer and PW layer. The principles in different
scenarios are detailed as follows.
As shown in Figure 4-24, MEP1 and MEP2 are created for the tunnel on LSR A and LSR B
respectively.
LCK is implemented on the tunnel as follows:
1. MEP1 performs LCK for the tunnel.
2. MEP1 reports an MPLS_Tunnel_LOCK alarm.
As shown in Figure 4-25, MEP1 and MEP2 are created for tunnel 1 on LSR A and LSR B
respectively; MEP3, MEP4, and MIP5 are created for the MS-PW on LSR A, LSR C, and
LSR B respectively.
LCK is implemented on the tunnel as follows:
1. MEP2 performs LCK for tunnel 1.
2. MEP2 reports an MPLS_Tunnel_LOCK alarm.
3. MEP2 sends a PW LCK packet to MEP4.
4. After receiving the PW LCK packet, MEP4 suppresses the MPLS_PW_LOCV alarm on
PW 2 and reports an MPLS_PW_LCK alarm.
LCK Applied on a PW
As shown in Figure 4-26, MEP1 and MEP2 are created for the PW on LSR A and LSR B
respectively.
LCK is implemented on the PW as follows:
1. MEP1 performs LCK for the PW.
2. MEP1 reports an MPLS_PW_LOCK alarm.
4.2.2.10 TST
The test (TST) function is used to perform one-way on-demand diagnostic tests on MPLS
tunnels or PWs, including measuring packet loss ratios.
TST can work in in-service or out-of-service mode. Out-of-service TST interrupts services.
TST can be used to measure packet loss ratios on MPLS tunnels or PWs. The following
details principles of TST applied on MPLS tunnels and PWs.
As illustrated in Figure 4-27, MEP1 and MEP2 are created for the tunnel between them on
LSR A and LSR B respectively.
TST Applied on a PW
As illustrated in Figure 4-28, MEP1 and MEP2 are created for the PW between them on LSR
A and LSR B respectively.
LSR A and LSR B with a tunnel in between, as shown in Figure 4-29, support MPLS OAM
(based on ITU-T Y.1711) and can be smoothly upgraded to support MPLS-TP OAM (based
on ITU-T Y.1731).
1. MPLS OAM (based on ITU-T Y.1711) is enabled on both LSR A and LSR B.
2. If LSR A is upgraded to support MPLS-TP OAM but LSR B is not, LSR A
automatically generates an IP-based MEG ID, MP ID, and peer MP ID based on its node
ID and tunnel ID, and transmits a CCM frame to LSR B at a period equal to or
approximately equal to the preset fast failure detection (FFD)/connectivity verification
(CV) period.
LSR B can identify the CCM frame and check whether the combination of the MEG ID
and MP ID in the CCM frame is consistent with the expected trail termination source
identifier (TTSI). If they are consistent, LSR B considers that a CV/FFD packet with the
expected TTSI is received and does not report an OAM alarm.
LSR A also can identify FFD/CV packets from LSR B and check whether the TTSI in an
FFD/CV packet is consistent with the expected combination of MEG ID and MP ID. If
they are consistent, LSR A considers that a CCM frame with the expected MEG ID and
MP ID is received and does not report an OAM alarm.
3. After LSR B is upgraded to support MPLS-TP OAM, both LSR A and LSR B perform
CC based on the MPLS-TP CC mechanism and achieve a smooth upgrade from MPLS
OAM (based on ITU-T Y.1711) to MPLS-TP OAM (based on ITU-T Y.1731).
l ITU-T Y.1731 OAM functions and mechanisms for Ethernet based networks
l ITU-T G.8110.1 Architecture of MPLS-TP Layer Network 2011.02 (Consent)
l ITU-T G.8113.1 Operations, Administration and Maintenance mechanism for MPLS-TP
networks (G.tpoam) 2011.02 (Consent)
l ITU-T G.8131 Linear protection switching for MPLS transport profile (MPLS-TP)
network 2011.02 (Draft)
l Draft-ietf-mpls-tp-oam-analysis 2011.06
l Draft-bhh-mpls-tp-oam-y1731 2010.08
4.2.4 Specifications
This section describes the specifications of MPLS-TP OAM.
Item Specifications
Format of MPLS-TP tunnel OAM packets: G-ACh-based format, with the GAL value being 13
Feature Updates
Version Description
Self-limitations
Item Description
MPLS-TP section OAM: MPLS-TP section OAM is not supported in this version.
PWE3 services using control words: If PWE3 services use control words, MPLS-TP PW OAM
packets do not necessarily carry generic associated channel header labels (GALs). Otherwise,
MPLS-TP PW OAM packets must carry GALs.
Table 4-13 Dependencies and limitations between MPLS-TP OAM and other features
Feature Description
MPLS APS and PW APS Both MPLS automatic protection switching (APS) and PW
APS can be triggered based on MPLS-TP OAM.
4.2.8 FAQs
This section answers the questions that are frequently asked when MPLS-TP OAM is used.
None.
4.3.1 Introduction
This section defines Multiprotocol Label Switching (MPLS) automatic protection switching
(APS) and describes the purpose of this feature.
Definition
MPLS APS is a function that protects MPLS tunnels based on the APS protocol. With this
function, when a working tunnel is faulty, services can be switched to the preconfigured
protection tunnel.
The MPLS APS function supported by the OptiX RTN 380A/380AX has the following
features:
Purpose
MPLS APS improves reliability of service transmission over tunnels.
As shown in Figure 4-30, when the MPLS OAM mechanism detects a fault in the working
tunnel, the service is switched to the protection tunnel for transmission.
Protection Mechanism
MPLS APS is classified into 1+1 protection and 1:1 protection by protection mechanism.
l 1+1 protection
Normally, the transmit end transmits services to the working tunnel and protection
tunnel, and the receive end receives services from the working tunnel. When the working
tunnel is faulty, the receive end receives services from the protection tunnel.
l 1:1 protection
Normally, services are transmitted in the working tunnel. The protection tunnel is idle.
When the working tunnel is faulty, services are transmitted in the protection tunnel.
NOTE
OptiX RTN 380A/380AX supports only 1:1 dual-ended revertive protection.
Switching Mode
MPLS APS is classified into single-ended switching and dual-ended switching by switching
mode.
l Single-ended switching
In single-ended switching mode, the switching occurs only at one end and the state of the
other end remains unchanged.
l Dual-ended switching
In dual-ended switching mode, the switching occurs at both ends at the same time.
Revertive Mode
MPLS APS is classified into revertive mode and non-revertive mode.
l Revertive mode
In revertive mode, the service is automatically switched back to the working tunnel after
the working tunnel is restored and the normal state lasts for a certain period. The period
after the working tunnel is restored and before the service is switched back to the
working tunnel is called the wait-to-restore (WTR) time. To prevent frequent switching
events due to an unstable working tunnel, the WTR time is generally 5 to 12 minutes.
l Non-revertive mode
In non-revertive mode, the service is not automatically switched back to the working
tunnel even after the working tunnel is restored. However, the service will be switched
back if the protection tunnel fails or an external command triggers protection switching.
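The revertive-mode decision can be sketched as follows. This is a minimal sketch assuming a hypothetical clock in seconds; a real implementation runs this check inside the APS state machine.

```python
# Sketch of the wait-to-restore (WTR) rule in revertive mode: services switch
# back only after the working tunnel has stayed up for the full WTR time.

def should_revert(working_ok_since, now, wtr_minutes):
    """working_ok_since: time the working tunnel was restored (None if still
    faulty); now: current time; both in seconds on the same clock."""
    if working_ok_since is None:
        return False                         # working tunnel still faulty
    return (now - working_ok_since) >= wtr_minutes * 60

print(should_revert(working_ok_since=0, now=200, wtr_minutes=5))   # → False
print(should_revert(working_ok_since=0, now=301, wtr_minutes=5))   # → True
```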
NOTE
If two switching conditions exist at the same time, the higher-priority switching condition preempts the
other one.
The switching conditions are arranged in descending order of priority.
Clear switching (external switching): This command clears all the other external switching
operations.
Lockout of the protection channel (external switching): If the protection tunnel is locked out,
services cannot be switched from the working tunnel to the protection tunnel. If services are
already switched to the protection tunnel, this command forcibly switches the services back to
the working tunnel even if the working tunnel does not recover. Therefore, services may be
interrupted.
SF-P switching (automatic switching): The signal fail for protection (SF-P) condition indicates
that the protection tunnel fails. If the protection tunnel fails, the services carried by the
protection tunnel are automatically switched back to the working tunnel.
NOTE
An optional condition can trigger MPLS APS SF switching only after it is selected. By default, the
alarms in the preceding table do not trigger MPLS APS SF switching.
4.3.2 Principles
MPLS APS uses the MPLS OAM mechanism to detect faults in tunnels, and the ingress and
egress nodes exchange APS protocol packets for protection switching.
Before Switching
l Both the ingress and egress nodes transmit service packets through the working tunnel.
l Both the ingress and egress nodes receive service packets from the working and
protection tunnels. Since the protection tunnel does not transmit service packets, the
ingress and egress nodes actually receive service packets from the working tunnel.
l Both the ingress and egress nodes use MPLS OAM or MPLS-TP OAM to check the
connectivity of each MPLS tunnel.
During Switching
Figure 4-31 and Figure 4-32 show the single-ended switching on the egress node when the
forwarding working tunnel is faulty.
Figure 4-31 Principle of the single-ended switching (after the switching on the egress node)
1. When detecting a fault, the egress node switches from the reverse working tunnel to the
reverse protection tunnel, and transmits service packets through the reverse protection
tunnel. In addition, the egress node transmits BDI or RDI packets to the ingress node.
2. Single-ended switching occurs on the ingress node if BDI switching is enabled on the
ingress node. That is, the ingress node switches from the forward working tunnel to the
forward protection tunnel, and transmits service packets through the forward protection
tunnel.
3. Both the ingress and egress nodes receive service packets from the working and
protection tunnels. After switching, service packets are transmitted through the
protection tunnel. Therefore, the ingress and egress nodes actually receive service
packets from the protection tunnel. See Figure 4-32.
Figure 4-32 Principle of the single-ended switching (after the switching on the ingress node)
After Switching
If MPLS APS 1:1 single-ended switching is in revertive mode, the service in the protection
tunnel is switched back to the normal working tunnel after the WTR time elapses.
Before Switching
l The ingress and egress nodes exchange APS protocol packets over the protection tunnel,
and then they are aware of the status of each other. When the working tunnel is found
faulty, the ingress and egress nodes can perform the protection switching, switching
hold-off, and wait-to-restore (WTR) functions. In this case, the request state of the APS
protocol packet should be No Request.
l The MPLS OAM or MPLS-TP OAM mechanism is used to perform unidirectional
continuity checks on all the tunnels.
During Switching
Figure 4-33 shows the principle of the dual-ended switching, assuming a fault in the forward
working tunnel.
1. Upon detecting a fault in the forward working tunnel, the egress node performs the
following operations:
– The egress node modifies the MPLS tunnel that the FEC travels through. That is,
the tunnel that the FEC travels through is changed from the reverse working tunnel
to the reverse protection tunnel. In this case, the packet in the FEC encapsulates the
MPLS label corresponding to the reverse protection tunnel so that the service can be
bridged to the reverse protection tunnel. Meanwhile, the egress node sends the APS
protocol packet carrying a switching request to the ingress node.
NOTE
l "Bridging" means that the equipment transmits the service to the protection tunnel instead of the
working tunnel.
l "Switching" means that the equipment receives the service from the protection tunnel instead of the
working tunnel.
2. On the reception of the APS protocol packet carrying a switching request, the ingress
node performs the following operations:
– The ingress node modifies the MPLS tunnel that the FEC travels through. That is,
the tunnel that the FEC travels through is changed from the forward working tunnel
to the forward protection tunnel. In this case, the packet in the FEC encapsulates the
MPLS label corresponding to the forward protection tunnel so that the service can
be bridged to the forward protection tunnel.
– The ingress node receives the service from the reverse protection tunnel instead of
the reverse working tunnel.
3. The service is transmitted in the forward and reverse protection tunnels.
After Switching
If MPLS APS 1:1 dual-ended switching is in revertive mode, the service is switched back to
the normal forward and reverse working tunnels after the WTR time elapses.
The following standards and protocols are associated with MPLS APS:
4.3.4 Specifications
This section describes the specifications of MPLS APS.
Item Specifications
Feature Updates
Version Description
Self-limitations
Table 4-18 Dependencies and limitations between MPLS APS and other features
Feature Description
MPLS-TP tunnel OAM In MPLS APS, the MPLS-TP tunnel OAM mechanism can
be used to detect faults.
l In an MPLS APS protection group, the working and protection tunnels have the same
ingress and egress nodes.
l The working and protection tunnels should share as few nodes as possible.
l If multiple MPLS APS protection groups are required on a ring network, it is
recommended that half of the working tunnels be configured on the upper part of the ring
and half of the working tunnels be configured on the lower part of the ring. In this
manner, traffic is evenly distributed, and network-wide switching caused by one
interrupted MPLS link can be prevented.
l If the MPLS-TP tunnel OAM mechanism is used to detect faults:
– Carrier IDs, maintenance entity groups (MEGs), and MPs must be correctly
planned.
– It is recommended that CCMs be sent at an interval of 3.3 ms. If the packet
transmission delay variation exceeds 3.3 ms, the CCM transmission interval must
be greater than the packet transmission delay variation.
– MPLS-TP tunnel OAM must be enabled for both working and protection tunnels.
l Unless otherwise specified, the wait-to-restore (WTR) time and hold-off time take the
default values.
4.3.8 FAQs
This section answers questions that are frequently raised when MPLS APS is used.
Question: What should be done when MPLS APS protection switching fails?
1. Check the configurations of the MPLS APS protection group. The configurations of the
MPLS APS protection group at both ends of the link should be consistent.
2. If the configurations are inconsistent, reconfigure the MPLS APS protection group. After
the MPLS APS protection group is reconfigured at both ends, deactivate and then
activate the MPLS APS protection group.
Question: Why must the transmission period of FFD/CCM packets be 3.3 ms to support
MPLS APS?
Answer: A transmission period of 3.3 ms shortens the time required to detect a fault in an
LSP, so that the protection switching time can be kept within 100 ms.
Question: What precautions should be taken when deleting MPLS APS protection groups?
Answer: Disable the MPLS APS protection groups at both ends of a link before deleting
them.
5 PWE3 Features
5.1.1 Introduction
This section describes the definition, application, and basic concepts of PWE3.
Definition
PWE3 is an L2 service bearer technology that emulates the basic behaviors and characteristics
of services such as ATM/IMA, Ethernet, and TDM services on a packet switched network
(PSN).
Aided by the PWE3 technology, conventional networks can be connected by a PSN, thereby
enabling resources to be shared and the network to be scaled.
Purpose
PWE3 aims to transmit various services such as ATM, Ethernet, and TDM services over a
PSN. Figure 5-1 shows a typical PWE3 application. The Ethernet, ATM, and TDM services
between NodeBs and RNCs are emulated by PWE3 on NE1 and NE2, and then are
transmitted on the PWs between NE1 and NE2.
NOTE
In the network reference model, each PW is carried in a single PSN tunnel; that is, each PW is
a single-segment PW (SS-PW).
The concepts found in the network reference model shown in Figure 5-2 are defined as
follows.
CE
A CE is a device where one end of a service originates and/or terminates. The CE is not aware
that it is using an emulated service rather than a native service.
PE
A PE is a device that provides PWE3 to a CE. Located at the edge of a network, a PE is
connected with a CE through an AC.
In the PWE3 network reference model, the mapping relationship between an AC and a PW is
determined once a PW is created between two PEs. As a result, Layer 2 services on CEs can
be transmitted over a PSN.
AC
An AC is a physical or virtual circuit attaching a CE to a PE. An AC can be, for example, an
Ethernet port, a VLAN, or a TDM link.
PW
A PW is a mechanism that carries emulated services from one PE to another PE over a PSN.
By means of PWE3, point-to-point channels are created, separated from each other. Users'
Layer 2 packets are transparently transmitted on a PW.
PWs are available in two types depending on whether signaling protocols are used or not.
Specifically, a PW that does not use signaling protocols is called a static PW, whereas a PW
that does use signaling protocols is called a dynamic PW.
Tunnel
A tunnel provides a mechanism that transparently transmits information over a network. In a
tunnel, one or more PWs can be carried. A tunnel connects a local PE and a remote PE for
transparently transmitting data.
PSN tunnels are available in several types, but the OptiX RTN 380A/380AX supports only
MPLS tunnels. In this document, PWE3 is generally based on the MPLS tunnel (LSP), unless
otherwise specified.
(The PWE3 protocol reference model comprises, from top to bottom, the native service
processing layer, the forwarder, the payload encapsulation layer, the PW demultiplexer layer,
the PSN tunnel and PSN layers, and the physical layer.)
In the PWE3 protocol reference model, pre-processing involves the native service processing
layer and forwarder layer, whereas protocol processing involves the encapsulation layer and
demultiplexer layer. The main functions of these layers are described as follows.
Forwarder
A forwarder selects the PW for the service payloads received on an AC. The mapping
relationships can be specified in the service configuration, or implemented through certain
types of dynamically configured information.
PW Demultiplexer Layer
The PW demultiplexer layer enables one or more PWs to be carried in a single PSN tunnel.
Figure 5-4 shows the generic PWE3 encapsulation format. A PWE3 packet contains the
MPLS label, control word, and payload.
MPLS Label
MPLS labels include tunnel labels and pseudo wire (PW) labels, which are used to identify
tunnels and PWs respectively. The format of tunnel labels is the same as that of PW labels.
For details, see 4.1.1.5 MPLS Label.
Control Word
The control word is a 4-byte packet header used to carry packet information over an MPLS
PSN.
The control word is used to check the packet sequence, to fragment packets, and to restructure
packets. As shown in 5.2.1.2 Format of an ETH PWE3 Packet, the specific format of the
control word is determined by the service type carried by PWE3 and the encapsulation mode
adopted.
Payload
Payload indicates the service payload in a PWE3 packet.
5.1.1.5 MS-PW
A PW that is carried in a PSN tunnel is called a single-segment PW (SS-PW). If a PW is
carried in multiple PSN tunnels, the PW is called a multi-segment PW (MS-PW).
(Network reference model: CE1 attaches over an AC to T-PE1; PSN tunnel 1 runs from T-PE1
to the PW switching point S-PE1; PSN tunnel 2 runs from S-PE1 to T-PE2; and T-PE2 attaches
over an AC to CE2. The emulated service runs end to end between CE1 and CE2 over the
MS-PW.)
NOTE
PSN tunnels are available in several types, but the OptiX RTN 380A/380AX supports only MPLS
tunnels. In this document, PWE3 is based on MPLS tunnels (LSPs), unless otherwise specified.
In the preceding network reference model, T-PE1 and T-PE2 provide PWE3 services to CE1
and CE2. The PWs are carried in two PSN tunnels, and constitute the MS-PW.
The two tunnels (PSN tunnel 1 and PSN tunnel 2) that are used to carry PWs reside in
different PSN domains. PSN tunnel 1 extends from T-PE1 to S-PE1, and PSN tunnel 2
extends from S-PE1 to T-PE2. Labels of PW1 carried in PSN tunnel 1 and PW3 carried in
PSN tunnel 2 are swapped at S-PE1. Similarly, labels of PW2 carried in PSN tunnel 1 and
PW4 carried in PSN tunnel 2 are swapped at S-PE1.
MS-PW Application
Compared with the SS-PW, the MS-PW has the following characteristics:
The following paragraphs and figures compare the application scenarios of the SS-PW and
MS-PW to show that it is easier for the MS-PW to implement segment-based protection for
tunnels.
Figure 5-6 shows the SS-PW networking mode. The services between PE1 and PE2 are
transmitted on PW1 carried in MPLS tunnel 1. Both MPLS tunnel 1 and MPLS tunnel 2 are
configured with 1:1 protection. Protection, however, fails to be provided if disconnection
faults occur on different sides of the operator device (called the P device).
NOTE
The PWs are invisible to the P device on a PSN; the P device provides transparent transport in tunnels.
Figure 5-7 shows the MS-PW networking mode. The services between T-PE1 and T-PE2 are
transmitted on PW1 carried in MPLS tunnel 1 and PW2 carried on MPLS tunnel 2. The
paired tunnels (MPLS tunnel 1 and MPLS tunnel 3; MPLS tunnel 2 and MPLS tunnel 4) are
configured with 1:1 protection. In this configuration, protection can still be provided even
when disconnection faults occur on different sides of the S-PE1 device.
5.1.1.6 VCCV
As specified in IETF RFC5085, virtual circuit connectivity verification (VCCV) is an end-to-
end fault detection and diagnostics mechanism for a PW. The VCCV mechanism is, in its
simplest description, a control channel between a PW's ingress and egress points over which
connectivity verification messages can be sent. The OptiX RTN 380A/380AX supports
VCCV that uses the control word as the control channel and the LSP ping as the verification
method.
The VCCV messages are exchanged between PEs to verify connectivity of PWs. To ensure
that VCCV messages and PW packets traverse the same path, VCCV messages must be
encapsulated in the same manner as PW packets and be transmitted in the same tunnel as the
PW packets.
VCCV messages have the following formats.
The main fields in a VCCV message based on OAM alert label are defined as follows:
l Label: The value of this field is 14 and indicates an OAM packet.
l Time to Live (TTL): The value of this field is set to 1, to ensure that the MPLS OAM
packet is not transmitted beyond the sink end of the monitored LSP.
The payloads are MPLS echo packets encapsulated in IPv4 UDP.
5.1.2 Principles
The SS-PW and MS-PW use different packet forwarding mechanisms.
The T-PE in the MS-PW networking mode forwards packets in the same manner as a PE in the
SS-PW networking mode. In the MS-PW networking mode, the S-PE needs to swap the tunnel
label and PW label.
The S-PE device (S-PE1) forwards packets as follows:
When PWE3 packets transmitted from T-PE1 to T-PE2 traverse S-PE1, the tunnel label in the
packets is swapped; that is, tunnel label A is changed to tunnel label B. In addition, the PW
label in the packets is swapped; that is, PW label A is changed to PW label B.
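The label swap at the S-PE can be sketched as follows. The label values and the table structure are hypothetical; a real S-PE resolves labels through its forwarding tables.

```python
# Sketch of S-PE forwarding: both the tunnel label and the PW label are
# swapped when a PWE3 packet crosses the PW switching point.

TUNNEL_SWAP = {"A": "B"}   # hypothetical tunnel label forwarding entry
PW_SWAP = {"A": "B"}       # hypothetical PW label forwarding entry

def spe_forward(packet):
    """Return a copy of the packet with both labels swapped at the S-PE."""
    out = dict(packet)
    out["tunnel_label"] = TUNNEL_SWAP[packet["tunnel_label"]]
    out["pw_label"] = PW_SWAP[packet["pw_label"]]
    return out             # payload is carried through unchanged

pkt = {"tunnel_label": "A", "pw_label": "A", "payload": b"..."}
print(spe_forward(pkt))
# → {'tunnel_label': 'B', 'pw_label': 'B', 'payload': b'...'}
```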
5.1.4 Specifications
This section describes the specifications of PWE3.
Table 5-1 lists the specifications of PWE3.
Item Specifications
VCCV Supported
PW APS Supported
NOTE
For details, see 5.3.4 Specifications.
HQoS Supported
Feature Updates
Version Description
Self-limitations
None
Table 5-2 Dependencies and limitations between PWE3 and other features
Feature Description
5.1.8 FAQs
This section provides answers to the questions that are frequently raised when PWs are used.
Question: Does the PWE3 technology provided by the OptiX RTN 380A/380AX support
packet fragmentation and restructuring that are specified in RFC 4623?
Answer: No. The OptiX RTN 380A/380AX does not support the packet fragmentation or
restructuring specified in RFC 4623.
5.2.1 Introduction
Definition
The ETH PWE3 technology emulates the basic behaviors and characteristics of Ethernet
services on a packet switched network (PSN) by using the PWE3 mechanism, so that the
emulated Ethernet services can be transmitted on a PSN.
Purpose
ETH PWE3 aims to transmit Ethernet services over a PSN. Figure 5-12 shows the typical
application of ETH PWE3.
(Figure 5-12: typical ETH PWE3 application. CE1 (NodeB) connects to PE1 over an AC; a PW carried in an LSP crosses the PSN to PE2; CE2 (RNC) connects to PE2 over an AC. The native Ethernet service is carried as ETH PWE3 between PE1 and PE2.)
Packet Format
Figure 5-13 shows the format of an ETH PWE3 packet, consisting of the MPLS label, control
word, and payload.
(Figure 5-13: ETH PWE3 packet format. The MPLS label consists of a tunnel label and a PW label, each carrying EXP, S, and TTL fields; these are followed by an optional control word and the payload, that is, the Ethernet frame.)
MPLS Label
MPLS labels include tunnel labels and PW labels, which are used to identify tunnels and PWs
respectively. The format of the tunnel label is the same as that of the PW label. For details, see
4.1.1.5 MPLS Label.
Control Word
The 4-byte control word within an ETH PWE3 packet is optional and contains the following
fields:
l 0000: The first 4 bits of the control word, which must be set to 0.
l Reserved: A 12-bit reserved field.
l Sequence number: A 16-bit field indicating the delivery sequence number of an ETH PWE3 packet. Its initial value is random and increments by 1 with each ETH PWE3 packet sent.
Payload
The payload refers to the Ethernet frame that is encapsulated into an ETH PWE3 packet. One
ETH PWE3 packet can encapsulate only one Ethernet frame. During the encapsulation, the
preset PW Encapsulation Mode is adopted.
Service-Delimiting Tag
The service-delimiting tag is used to indicate the user access mode, that is, the encapsulation
mode when the Ethernet service is received by the AC. Service-delimiting tags are classified
into two categories:
l User
If the service-delimiting tag is User, the user access mode is Ethernet. In this case, the
Ethernet frame that the CE sends to the PE does not carry a provider-tag (P-Tag). If the
frame header contains the VLAN tag, the VLAN tag is the inner VLAN tag of the user
packet, which is called user-tag (U-Tag). The PE does not identify or process a U-Tag.
l Service
If the service-delimiting tag is Service, the user access mode is VLAN. In this case, the
Ethernet frame that the CE sends to the PE carries a provider-tag (P-Tag), which is
provided for the carrier to differentiate users. The PE identifies and processes a P-Tag
based on the PW encapsulation mode.
PW Encapsulation Mode
The PW encapsulation mode is used to indicate whether a P-Tag is added when an Ethernet
frame is encapsulated into an ETH PWE3 packet. The PW encapsulation modes are classified
into two categories:
l Raw mode
In this mode:
– When the service-delimiting tag is User, in the direction that an Ethernet frame
enters the PW, the PE directly encapsulates the Ethernet frame into a PWE3 packet
after receiving it from the AC; in the direction that an Ethernet frame leaves the
PW, the PE decapsulates the Ethernet frame before transmitting it to the AC.
– When the service-delimiting tag is Service, in the direction that an Ethernet frame
enters the PW, the PE strips the outer tag (P-Tag) if it exists and encapsulates the
Ethernet frame into a PWE3 packet after receiving it from the AC; in the direction
that an Ethernet frame leaves the PW, the PE decapsulates the Ethernet frame and
adds a P-Tag before transmitting it to the AC.
l Tagged mode
In this mode:
– When the service-delimiting tag is User, in the direction that an Ethernet frame
enters the PW, the PE adds a P-Tag and encapsulates the Ethernet frame into a
PWE3 packet after receiving it from the AC (the added P-Tag is called request
VLAN); in the direction that an Ethernet frame leaves the PW, the PE decapsulates
the Ethernet frame and strips the P-Tag before transmitting it to the AC.
– When the service-delimiting tag is Service, in the direction where an Ethernet
frame enters the PW, the PE replaces the U-tag with a P-tag and encapsulates the
Ethernet frame into a PWE3 packet after receiving it from the AC; in the direction
where an Ethernet frame leaves the PW, the PE decapsulates the Ethernet frame and
replaces the P-tag with a U-tag before transmitting it to the AC.
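The four ingress rules above (in the direction that a frame enters the PW) can be sketched as follows. The function and the tag representation are illustrative assumptions, not the device implementation:

```python
def pw_ingress_tags(frame_tags, delimiting_tag, pw_mode, request_vlan=None):
    """Return the VLAN tags carried into the PW for one Ethernet frame.

    frame_tags: VLAN IDs on the frame received from the AC, outer tag first.
    delimiting_tag: "user" or "service"; pw_mode: "raw" or "tagged".
    """
    tags = list(frame_tags)
    if pw_mode == "raw":
        if delimiting_tag == "service" and tags:
            tags.pop(0)                      # strip the outer P-Tag if present
    else:  # tagged mode
        if delimiting_tag == "user":
            tags.insert(0, request_vlan)     # add the request VLAN as a P-Tag
        elif tags:
            tags[0] = request_vlan           # replace the U-Tag with a P-Tag
    return tags

# User + tagged mode: request VLAN 100 is pushed onto a frame with U-Tag 10.
print(pw_ingress_tags([10], "user", "tagged", request_vlan=100))  # [100, 10]
```

The egress direction mirrors these rules: raw mode restores the stripped P-Tag, and tagged mode strips or replaces the added P-Tag before the frame is sent to the AC.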
l The RNC can process S-VLAN tags. It allocates an S-VLAN ID to each NodeB to separate the services of one NodeB from those of another.
l The NodeB can process only C-VLAN tags. It allocates a C-VLAN ID to each type of service on a NodeB.
Therefore, the request VLAN function must be enabled to add S-VLAN IDs to isolate the
services on different NodeBs.
l If the PW1 encapsulation mode of NE1 is the tagged mode, set the request VLAN to 100; if the PW2 encapsulation mode of NE1 is the tagged mode, set the request VLAN to 200.
l The PW1 and PW2 encapsulation modes of NE2 are both the raw mode.
l Both NE1 and NE2 have the service-delimiting tag User.
l In the service uplink direction, to transmit the service of NodeB 1 from NE1 to PW1,
NE1 adds the request VLAN (S-VLAN) 100 to the service because the PW
encapsulation mode is the tagged mode; to transmit the service from NE2 to the RNC,
NE2 decapsulates the service packet and transparently transmits the S-VLAN tag (100).
Likewise, the service of NodeB 2 carries an S-VLAN tag (200) when transmitted from
NE2 to the RNC. In this case, the services at the same port (PORT1) are isolated.
l In the service downlink direction, to transmit the service of the RNC from NE2 to PW1, NE2 encapsulates the service together with its S-VLAN tag because the PW encapsulation mode is the raw mode; to transmit the service from NE1 to NodeB 1, NE1 decapsulates the service packet and strips the S-VLAN tag. Likewise, the service of the RNC does not carry an S-VLAN tag when transmitted from NE1 to NodeB 2.
(Figure: request VLAN application. NodeB 1 and NodeB 2, each using C-VLANs 100 to 200, connect to NE1 over ACs. On NE1, PW1 and PW2 work in tagged mode with request VLANs 100 and 200, so the services cross the PSN carrying S-VLAN 100 and S-VLAN 200 respectively. On NE2, the PWs work in raw mode, and both services reach the RNC at PORT 1.)
l In the upstream direction:
– Upon reception of the services from NodeB 1 in PW1, NE1 encapsulates the received Ethernet services in PW1 without performing any changes, since the PW encapsulation mode is Raw and the service-delimiting tag is User. Upon reception of the encapsulated Ethernet services over PW1, NE2 decapsulates the services and transparently transmits the services to the RNC, since the PW encapsulation mode is Raw and the service-delimiting tag is User.
– Upon reception of the services from NodeB 2 to PW2, NE1 encapsulates the
received Ethernet services in PW2 without performing any changes, since the PW
encapsulation mode is Raw and the service-delimiting tag is User. Upon reception
of the encapsulated Ethernet services over PW2, NE2 decapsulates the services,
replaces the P-TAG with a U-TAG (namely, changes the C-VLAN tag to 200), and
transmits the services to the RNC, since the PW encapsulation mode is Tag and the
service-delimiting tag is Service.
l In the downstream direction:
– Upon reception of the services from the RNC to PW1, NE2 encapsulates the
received Ethernet services in PW1 without performing any changes, since the PW
encapsulation mode is Raw and the service-delimiting tag is User. Upon reception
of the encapsulated Ethernet services over PW1, NE1 decapsulates the services and
transparently transmits the services to NodeB 1, since the PW encapsulation mode
is Raw and the service-delimiting tag is User.
– Upon reception of the services from the RNC to PW2, NE2 replaces the U-TAG of
the received Ethernet frames with a P-TAG (namely, changes the C-VLAN tag to
100) and encapsulates the services in PW2, since the PW encapsulation mode is
Tag, the service-delimiting tag is Service, and the request VLAN is 100. Upon
reception of the services over PW2, NE1 decapsulates the Ethernet services and
transparently transmits the services to NodeB 2, since the PW encapsulation mode
is Raw and the service-delimiting tag is User.
(Figure: PW1 carries the NodeB 1 service (C-VLAN 100) with the User service-delimiting tag and Raw mode; PW2 carries the NodeB 2 service (C-VLAN 200) with the Service service-delimiting tag, Tag mode, and request VLAN ID 100. Both services reach the RNC at PORT 1.)
The OptiX RTN 380A/380AX performs QoS for ETH PWE3 packets as follows.
l Ingress node
The PHB service class of an ETH PWE3 packet can be manually specified. When a
packet leaves an ingress node, the EXP value of the packet is determined according to
the mapping (between PHB service classes and EXP values) defined by the DiffServ
domain of the egress port.
l Transit node
When a packet enters a transit node, the PHB service class of the packet is determined
according to the mapping (between EXP values and PHB service classes) defined by the
DiffServ domain of the ingress port. When a packet leaves a transit node, the EXP value
of the packet is determined according to the mapping (between PHB service classes and
EXP values) defined by the DiffServ domain of the egress port.
NOTE
When an MPLS tunnel uses a manually specified EXP value, the EXP value of ETH PWE3 packets is fixed,
not affected by a DiffServ domain.
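The per-node mapping can be sketched as follows. The PHB-to-EXP values below are illustrative assumptions, not the product's default DiffServ domain:

```python
# One direction of an assumed DiffServ domain mapping.
PHB_TO_EXP = {"BE": 0, "AF1": 1, "AF2": 2, "AF3": 3, "AF4": 4,
              "EF": 5, "CS6": 6, "CS7": 7}
EXP_TO_PHB = {exp: phb for phb, exp in PHB_TO_EXP.items()}

def egress_exp(phb: str, fixed_exp=None) -> int:
    """EXP value written when a packet leaves a node: the manually
    specified tunnel EXP if one is set, else the DiffServ mapping."""
    return fixed_exp if fixed_exp is not None else PHB_TO_EXP[phb]

def ingress_phb(exp: int) -> str:
    """PHB service class assigned when a packet enters a transit node."""
    return EXP_TO_PHB[exp]

print(egress_exp("EF"))      # 5
print(egress_exp("EF", 7))   # 7: the tunnel uses a fixed EXP value
print(ingress_phb(0))        # BE
```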
Service Models
Table 5-3 defines the PW-carried E-Line service models.
In model 3, the source is a PORT-type UNI (Layer 2) whose tag attribute or QinQ type is Null, IEEE 802.1q, or QinQ, and the sink is a PW. A UNI port processes the received packets based on its tag attribute or QinQ type field, and then sends the packets to the NNI side for transmission on PWs.
(Figure: NE1 and NE2 connect over PWs carried in an LSP across packet transmission equipment; NE2 connects to the RNC over an AC.)
On the UNI side of NE1, service 1 is received by port 1 and service 2 is received by port 2.
On the NNI side of NE1, service 1 and service 2 are transmitted separately on two PWs.
(Figures: E-Line services over PWs. Service 1 and service 2 from NodeB 1 enter NE1 over ACs, are carried on PW1 and PW2 in an LSP across the PSN to NE2, and exit over ACs toward the RNC.)
Service Model
Table 5-4 shows the PW-carried E-LAN service models.
Table 5-4 columns: Service Model, Tag Attribute, Learning Mode, Logical UNI Port Type, Encapsulation Mode at a UNI Port, and Logical NNI Port Type.
(Figures: PW-carried E-LAN networking examples. On each NE, a VSI interconnects UNI ports (tag attribute User, VLANs 100 and 200) with PWs across the PSN, and E-Line services hand traffic off to the VSIs; in one example the PWs work in raw mode on one side and in tagged mode with request VLAN 400 on the other.)
Service Model
Table 5-5 defines the PW-carried E-AGGR service models.
NOTE
a: Encapsulation Type must be set to the same value for all UNI ports in model 1.
As shown in Figure 5-22, service 1 is present between NodeB 1 and the RNC, service 2 is
present between NodeB 2 and the RNC, service 3 is present between NodeB 3 and the RNC,
and service 4 is present between NodeB 4 and the RNC. The four services need to be
transmitted over a PSN. Service 1 and service 2 are aggregated at NE1. Service 3 and service
4 are aggregated at NE2. PW1 carrying service 1 and service 2 and PW2 carrying service 3
and service 4 are aggregated at NE3.
(Figure 5-22: service 1 (NodeB 1, port 1, VLAN ID 100) and service 2 (port 2, VLAN ID 200) are aggregated into PW1 in LSP1 at NE1; service 3 (port 1, VLAN ID 300) and service 4 (port 2, VLAN ID 400) are aggregated into PW2 in LSP2 at NE2. PW1 and PW2 are aggregated at NE3, and all four services are handed off to the RNC at port 1 with their VLAN IDs unchanged.)
On the UNI side of NE1, service 1 is received by port 1 and service 2 is received by port 2.
On the NNI side of NE1, service 1 and service 2 are aggregated to the same PW for
transmission. In this manner, multipoint-to-point service aggregation is implemented.
NE2 processes service 3 and service 4 in the same manner as NE1 processes service 1 and
service 2.
On the NNI side of NE3, PW1 carrying service 1 and service 2 and PW2 carrying service 3
and service 4 are aggregated. On the UNI side of NE3, the four services are sent out through
port 1. In this manner, multipoint-to-point service aggregation is implemented.
As shown in Figure 5-23, service 1 and service 2 carry the same VLAN ID. PW1 carrying
service 1 and PW2 carrying service 2 are aggregated at NE3. For isolated service
transmission, the VLAN ID of service 1 is changed from 100 to 200 on NE1.
On the UNI side of NE1, service 1 is received by port 1. On the NNI side of NE1, service 1 is aggregated to PW1 for transmission, with VLAN ID swapping applied. After the swapping, service 1 carries a VLAN ID different from that of service 2 and is therefore isolated from service 2 during transmission.
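NE1's NNI-side handling in this scenario can be sketched as follows. The names and data structures are illustrative assumptions, a sketch of the behaviour rather than product code:

```python
def aggregate_to_pw(service: dict, pw: str, vlan_swaps: dict = None) -> dict:
    """Map a UNI service onto a PW, swapping its VLAN ID where a swap
    rule is configured so that aggregated services stay isolated."""
    vlan = service["vlan"]
    if vlan_swaps and vlan in vlan_swaps:
        vlan = vlan_swaps[vlan]
    return {"pw": pw, "vlan": vlan, "payload": service["payload"]}

# Service 1 (VLAN 100) is swapped to VLAN 200 on NE1, so it no longer
# collides with service 2 (also VLAN 100) after aggregation at NE3.
print(aggregate_to_pw({"vlan": 100, "payload": "s1"}, "PW1", {100: 200}))
```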
(Figure 5-23: service 1 from NodeB 1 (port 1, VLAN ID 100) is carried on PW1 in LSP1 with its VLAN ID swapped to 200, and service 2 from NodeB 2 (port 1, VLAN ID 100) is carried on PW2 in LSP2 unchanged. Both services are aggregated at NE3 and sent to the RNC through port 1.)
(Figure 5-24: VLAN forwarding. Service 1 (port 1, VLAN ID 100) is carried on PW1 and service 2 (port 1, VLAN ID 100, changed to 200) on PW2; after aggregation at NE3, both services leave port 1 toward the RNC.)
VLAN Forwarding
5.2.2 Principles
This section describes the principles of ETH PWE3.
In the scenario as shown in Figure 5-25, the PE devices emulate Ethernet services.
(Figure 5-25: CE1 (NodeB) connects to PE1 over an AC; a PW carried in an LSP crosses the PSN to PE2; CE2 (RNC) connects to PE2 over an AC. The native Ethernet service is carried as ETH PWE3 between PE1 and PE2.)
5.2.4 Specifications
This section describes the specifications for ETH PWE3.
Table 5-6 lists the specifications for ETH PWE3.
l PW APS: Supported
l VCCV: Supported
NOTE
The total number of VLANs used by UNI-carried E-Line and E-LAN services must not exceed 1024.
Feature Updates
Version Description
Self-limitations
l PW whose encapsulation mode is the tagged mode: The T-PID value for a request VLAN tag is set based on the specific NE requirements.
l UNI port mode: The port mode of a UNI carrying ETH PWE3 services must be Layer 2.
l ETH PWE3 services whose service-delimiting tag is Service: Only PORT+single VLAN<->PW E-Line services are supported.
l MAC address learning in S-aware mode: The SVL mode must be used when a VPLS service is connected to a UNI port whose port type is PORT.
Table 5-9 Dependencies and limitations between ETH PWE3 and other features
Feature Description
l ETH OAM: When Ethernet service OAM is used for ETH PWE3 packets, an MEP or MIP can be created only on a UNI, not on an NNI.
l MPLS-TP PW OAM: PWs that carry VPLS services do not support client signal fail (CSF) of MPLS-TP PW OAM. If control words are not used for ETH PWE3 encapsulation, MPLS-TP PW OAM packets must carry generic associated channel header labels (GALs).
5.2.8 FAQs
This section provides answers to the questions that are frequently raised when ETH PWE3 is
used.
Question: Does ETH PWE3 support PW ping/traceroute and VCCV?
Answer: Yes, ETH PWE3 supports VCCV, but not PW ping/traceroute.
Question: How is the transmission efficiency of an ETH PWE3 service calculated?
Answer: You can calculate the transmission efficiency of an ETH PWE3 service as follows:
Transmission efficiency = Ethernet frame length/(Ethernet frame length + PWE3 overhead
length + Ethernet Layer 2 overhead length)
l Ethernet frame length
– Untagged Ethernet frame length = 18 + Ethernet payload length
– Tagged Ethernet frame length = 22 + Ethernet payload length
– QinQ Ethernet frame length = 26 + Ethernet payload length
l PWE3 overhead length = MPLS label length + PW label length + CW length
– An MPLS label, PW label, and CW are all four bytes.
– If ETH PWE3 uses control words, the overhead length is 12 bytes in PWE3
packets.
– If ETH PWE3 does not use control words, the overhead length is 8 bytes in PWE3
packets.
l Ethernet Layer 2 overhead length = Ethernet frame header length + FCS length
– An untagged Ethernet frame header is 14 bytes.
– A tagged Ethernet frame header is 18 bytes.
– An FCS is 4 bytes.
– By default, an Ethernet packet carrying the MPLS packet is tagged. Therefore, the
Ethernet Layer 2 overhead is 22 bytes.
By default, the transmission efficiency of ETH PWE3 services is:
l Ethernet frame length divided by the sum of Ethernet frame length and 34, if ETH PWE3
uses control words
l Ethernet frame length divided by the sum of Ethernet frame length and 30, if ETH PWE3
does not use control words
Assuming that a 64-byte Ethernet service is transmitted in ETH PWE3 mode, the payload
transmission efficiency is 64/(64 + 34) = 65.3%.
NOTE
l The previous formula computes the payload transmission efficiency, without the consideration of the
20-byte interframe gap and preamble. These 20 bytes are omitted in ETH PWE3.
l When ETH PWE3 services are transmitted over radio links or Ethernet links, the ETH PWE3 service
transmission efficiency pertains to the efficiency of physical links transmitting Ethernet frames.
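The calculation above can be captured in a small helper. This is a sketch using the default 22-byte carrier-side Layer 2 overhead; the function name is illustrative:

```python
def eth_pwe3_efficiency(frame_len: int, control_word: bool = True) -> float:
    """Payload transmission efficiency of an ETH PWE3 service.

    frame_len: encapsulated Ethernet frame length in bytes.
    PWE3 overhead is 12 bytes with control words, 8 without; the
    default carrier-side Ethernet Layer 2 overhead is 22 bytes
    (tagged 18-byte frame header + 4-byte FCS).
    """
    pwe3_overhead = 12 if control_word else 8
    l2_overhead = 22
    return frame_len / (frame_len + pwe3_overhead + l2_overhead)

# 64-byte frame with control words: 64 / (64 + 34) ~= 65.3%
print(round(eth_pwe3_efficiency(64) * 100, 1))  # 65.3
```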
5.3 PW APS
PW APS protects services on PWs based on the APS protocol (APS is the abbreviated form of
automatic protection switching). If the working PW becomes faulty, PW APS switches
services to a preconfigured protection PW.
PW APS supported by OptiX RTN 380A/380AX has the following features:
l PW APS provides end-to-end protection for services on PWs.
l The working PW and protection PW are carried in different tunnels but have the same
local and remote provider edges (PEs).
l PW APS uses MultiProtocol Label Switching Transport Profile (MPLS-TP) PW OAM to detect faults in PWs, and PEs exchange APS protocol packets to implement protection switching.
l PW APS is supported in dual-ended switching mode.
5.3.1 Introduction
This section describes the basic information of PW APS.
(Figures: PW APS protection. The working PW and protection PW run between PE1 and PE4 over different paths (through PE2 and PE3). When the working PW fails, protection switching moves the service to the protection PW.)
In actual application environments, OptiX RTN 380A/380AX (PE1 in Figure 5-27) can work
with multi-chassis pseudo wire automatic protection switching (MC-PW APS) configured on
other equipment to implement PW APS. PE2 and PE3 are the packet devices that support
MC-PW APS, and communicate with each other through the dual node interconnection PW
(DNI-PW). PE1 considers PE2 and PE3 as packet devices.
(Figure 5-27: PE1 runs PW APS toward PE2 and PE3, which support MC-PW APS and are interconnected through the DNI-PW; the working PW terminates on PE2 and the protection PW on PE3.)
Protection Mechanisms
Protection mechanisms include 1+1 protection and 1:1 protection.
l 1+1 protection
Normally, the transmit end transmits services to the working PW and protection PW, and
the receive end receives services from the working PW. If the working PW becomes
faulty, the receive end receives services from the protection PW.
l 1:1 protection
Normally, services are transmitted over the working PW, and the protection PW is idle. If
the working PW becomes faulty, services are transmitted over the protection PW.
NOTE
OptiX RTN 380A/380AX supports only 1:1 protection.
Switching Modes
Switching modes include single-ended switching and dual-ended switching.
l Single-ended switching
Switching occurs at only one end, while the state of the other end remains unchanged.
l Dual-ended switching
Switching occurs at both ends at the same time.
NOTE
PW APS supports dual-ended switching, and PW FPS supports single-ended switching.
Reversion Modes
Reversion modes include revertive mode and non-revertive mode.
l Revertive mode
Services are switched back to the working PW after the working PW recovers and the
specified wait to restore (WTR) time elapses. To prevent frequent switchovers caused by
the unstable status of the working PW, a WTR time of 5-12 minutes is recommended.
l Non-revertive mode
Services are not automatically switched back to the working PW even after the working
PW recovers. Services will not be switched back unless the protection PW fails or an
external command triggers protection switching.
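The revertive-mode rule can be sketched as follows. The helper is illustrative (timestamps in seconds; names are assumptions):

```python
def should_revert(working_recovered_at, wtr_minutes, now):
    """Revertive mode: switch services back to the working PW only
    after it has stayed healthy for the full WTR time."""
    if working_recovered_at is None:          # working PW still faulty
        return False
    return now - working_recovered_at >= wtr_minutes * 60

# With the recommended minimum 5-minute WTR, reversion happens at +300 s.
print(should_revert(0, 5, now=299))  # False
print(should_revert(0, 5, now=300))  # True
```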
NOTE
If two switching conditions exist at the same time, the higher-priority switching condition takes
precedence.
Signal fail for protection (SF-P) condition (automatic switching): The SF-P condition indicates that the protection PW fails. If the protection PW fails, services carried by the protection PW are automatically switched to the working PW.
NOTE
An optional condition can trigger PW APS SF switching only after it is selected. By default, the
MPLS_PW_BDI alarm is not a PW APS SF switching condition.
In actual application, the OptiX RTN 380A/380AX needs to support a large number of PW
APS protection groups, but may encounter the following problems:
l If each PW APS protection group starts a state machine, the resources and capability of
the system may fail to support all the PW APS protection groups.
l When a PW is faulty, the other PWs carried in the same LSP may be faulty. Then,
switching occurs on the PWs one after another, resulting in a long switching time in
total.
PW APS binding allows multiple PW pairs to share one APS state machine, so that the APS
state machine can process the protection switching for multiple PW pairs. All the PW pairs
that are bound to one PW APS protection group are called slave protection pairs.
l The slave protection pairs share one state machine with the PW APS protection group. Therefore, fewer system resources are consumed.
l When the working PW in the PW APS protection group is faulty, protection switching
occurs on the PW APS protection group as well as on all its slave protection pairs. In this
manner, switching efficiency is improved.
NOTE
When the working PW in a slave protection pair is faulty, protection switching does not occur.
Figure 5-28 uses two PWs as an example to describe how PW APS binding is applied. In the figure, working PW1 and protection PW1 form a PW APS protection group, and working PW2 and protection PW2 form a slave protection pair of the protection group. When working PW1 is faulty, the services carried by working PW1 and working PW2 are switched to their protection PWs at the same time.
(Figure 5-28: PW APS binding with MC-PW APS. Working PW1 and working PW2 share one LSP; protection PW1 and protection PW2 run to the partner PE, which is connected through the DNI-PW. When working PW1 fails, protection switching moves the services on both PW pairs to the protection PWs.)
When the primary PW becomes faulty, the user-to-network interface (UNI) bound to the VPN
instance to which the PW interface is mounted also becomes faulty and ARP entries on the
UNI are cleared. If the ARP entry dually-transmitting and buffering function is disabled, the
UNI sends a request for restoring ARP entries upon the recovery of the primary PW. This
process takes several seconds, prolonging the PW switchback duration. If the ARP entry
dually-transmitting and buffering function is enabled, PE 1 proactively transmits ARP entries
to the UNIs on both the primary and secondary PWs upon the recovery of the primary PW,
and the ARP entries are buffered on the UNIs. Therefore, no ARP resolution is required,
reducing the PW switchback duration.
(Figure: upon the recovery of the primary PW, PE 1 proactively transmits ARP entries across the L3VPN network to the UNIs on both the primary and secondary PWs, and the ARP entries are buffered on the UNIs.)
NOTE
In the receive direction, PE 1 receives all packets from the primary and secondary PWs and does not need to be enabled with the ARP entry dually-transmitting and buffering function.
Upon detecting a fault, PW APS in dual-ended switching mode switches services to the
forward and reverse protection PWs.
Before Switching
l The local and remote PEs exchange APS protocol packets over the protection PW,
thereby allowing the PEs to learn each other's status. If the working PW becomes faulty,
the local and remote PEs can perform the protection switching, switching hold-off, and
wait-to-restore (WTR) functions. Before switching, the request state contained in an APS
protocol packet is No Request.
l MPLS-TP PW OAM is used to check the connectivity of all the PWs.
During Switching
Figure 5-29 shows the implementation of dual-ended switching caused by a fault in the
forward working PW.
(Figure 5-29: dual-ended switching triggered by a fault in the forward working PW. Services on the forward and reverse working PWs are bridged and switched to the forward and reverse protection PWs.)
l "Bridging" means that equipment transmits services to the protection PW instead of the working
PW.
l "Switching" means that equipment receives services from the protection PW instead of the working
PW.
1. Upon detecting the fault in the forward working PW, the remote PE performs bridging and switching, and sends the local PE an APS protocol packet carrying a switching request over the protection PW.
2. Upon receipt of the APS protocol packet carrying a switching request, the local PE also performs switching and bridging:
– The local PE pushes the forward protection PW label to the service packets so the
services can be bridged to the forward protection PW.
– The local PE receives services from the reverse protection PW instead of the
reverse working PW.
3. Services are transmitted over the forward and reverse protection PWs.
After Switching
If PW APS 1:1 dual-ended switching is in revertive mode, services are switched back to the
forward and reverse working PWs after the working PW recovers and continues to operate
normally for the WTR time.
5.3.4 Specifications
This section lists the PW APS specifications that this product supports (APS is the
abbreviated form of automatic protection switching).
Table 5-12 lists the PW APS specifications that this product supports.
Feature Updates
Version Description
Self-limitations
Table 5-14 Dependencies and limitations between PW APS and other features
Feature Description
5.3.8 FAQs
This section answers FAQs about PW APS/PW FPS (APS is the abbreviated form of
automatic protection switching, and FPS is the abbreviated form of fast protection switching).
6 Clock Features
This section describes the clock basics related to the OptiX RTN 380A/380AX and the clock
features and clock synchronization solutions supported by the OptiX RTN 380A/380AX.
6.1.1 Introduction
This section introduces the physical-layer clock synchronization solution.
Clock Synchronization
In a broad sense, clock synchronization includes frequency synchronization and time
synchronization. Generally, clock synchronization refers to frequency synchronization.
Frequency synchronization means that the frequencies or phases of signals maintain a strict relationship. The valid instants of these signals appear at the same average rate, so that all the equipment on the communications network can operate at the same rate; that is, the phase difference between signals is constant.
l For mobile communication networks and other service networks, not only signal
transmission but also communication services require clock synchronization. If clock
synchronization is not implemented, exceptions will occur, such as call drops and inter-
cell handover failures.
Digital signals transmitted on lines or links are coded or scrambled to reduce consecutive '0's
or '1's. Therefore, the code stream carries plentiful clock information. The clock information
can be extracted by applying phase lock and filter technologies and used for synchronization
references.
Microwave links, synchronous Ethernet links, and SDH lines can all provide timing information. For example, gigabit Ethernet uses 8B/10B line encoding, so even original data consisting of all 0s or all 1s is converted into encoded signals with balanced 0s and 1s.
Clock Source
A clock source is a signal source carrying timing reference information. To achieve clock
synchronization, an NE keeps its local clock in phase with the timing information by using the
phase-locked loop (PLL).
RTN 380A/380AX supports the following clock sources:
l Microwave clock source: Timing information is extracted from signal streams on radio
links.
l Ethernet clock source: Timing information is extracted from Ethernet signal streams.
Multiple clock sources can be configured for an NE. Clock source protection is implemented
based on the priorities configured in the clock source priority list. When the clock source of a
higher priority fails, the clock source of a lower priority is used.
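The priority-based protection described above can be sketched as follows. The sketch is illustrative; real clock selection also involves SSM quality levels, which are omitted here:

```python
def select_clock_source(priority_list, failed_sources):
    """Pick the active reference: the first (highest-priority) source
    in the configured priority list that has not failed; fall back to
    the NE's internal clock if every configured source is unavailable."""
    for source in priority_list:
        if source not in failed_sources:
            return source
    return "internal"

# The microwave source fails, so the Ethernet source takes over.
print(select_clock_source(["microwave-1", "eth-1"], {"microwave-1"}))  # eth-1
```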
OptiX RTN 380A/380AX supports the ITU-T G.8264-compliant clock source group feature.
When there is more than one clock source between two NEs in a LAG, EPLA, or 1+1
protection group, you can configure the sources in a clock source group to prevent
interlocking between the clock sources.
6.1.3 Specifications
This section lists the physical layer clock synchronization specifications that OptiX RTN
380A/380AX supports.
Table 6-1 Physical layer clock synchronization specifications that OptiX RTN 380A/380AX
supports
Item Specification
Self-limitations
None
Table 6-2 Dependencies and limitations between physical layer clock synchronization and other features
Feature Description
sources for the other NEs. Clock sources on the shorter path have higher priorities than those on the longer path.
l When the extended SSM protocol is used, allocate IDs to clock sources. Follow these
guidelines when you allocate clock source IDs:
– When the extended SSM protocol is used, the clock ID of an external clock source
cannot be automatically extracted. Therefore, allocate clock IDs to all external
clock sources.
– At all the NEs that are connected to external clock sources, allocate clock IDs to the
internal clock sources.
– At all the intersecting nodes of a ring/chain and a ring, allocate clock IDs to the
internal clock sources.
– At all the intersecting nodes of a ring/chain and a ring, allocate clock IDs to the line
clock sources that are transmitted to the ring.
– Do not allocate clock IDs to the clock sources other than those of the preceding four
types. This indicates that their clock IDs are 0 by default.
– Clock IDs do not determine clock source priorities.
6.2.1 Introduction
This section describes the applications and principles of the IEEE 1588v2 feature.
IEEE 1588v2
IEEE 1588 version 2 (IEEE 1588v2) is a precision clock synchronization protocol for
measurement and control systems. IEEE 1588, also called the Precision Time Protocol (PTP),
can achieve a time synchronization accuracy within the submicrosecond range.
l Mobile networks, such as CDMA and TD-LTE networks, require high-precision time
synchronization. Conventionally, the networks obtain time signals from the GPS.
However, the cost for deploying a large number of GPS devices is high, and GPS
satellite signals may occasionally be blocked.
l The IEEE 1588v2 function can be enabled on transmission equipment to achieve
networkwide time synchronization that is accurate to within the submicrosecond range.
Therefore, this function can replace the GPS to provide time signals to base stations.
t2 = t1 + offset + delay
t4 = t3 - offset + delay
The time when a message is transmitted or received is called a timestamp. Sync and
Delay_Req messages carry transmission timestamps.
NOTE
The preceding describes the general time offset and delay measurement principles. IEEE 1588v2 has the
following characteristics:
l PTP devices generate timestamps through hardware. Therefore, time synchronization that is accurate to
within the submicrosecond range can be achieved.
l The measurement of the delay of the entire path between the master and slave clock devices is called E2E
delay measurement or Delay mode.
l The measurement of the delay of a link between two adjacent PTP devices is called P2P delay
measurement or PDelay mode.
l TC nodes only forward messages and record the message residence time in the forwarded messages.
Residence time refers to the time when a message is processed and forwarded by a device.
l If the delays in the transmit and receive directions of a link are different, compensation needs to be
performed for the delay difference.
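Solving the two timestamp equations above for offset and delay gives offset = ((t2 - t1) - (t4 - t3))/2 and delay = ((t2 - t1) + (t4 - t3))/2, under the symmetric-delay assumption. A minimal sketch:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync; t2: slave receives Sync;
    t3: slave sends Delay_Req; t4: master receives Delay_Req.
    Returns (offset, delay) assuming symmetric link delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Slave clock 5 units ahead of the master, one-way delay 3 units:
# t2 = t1 + offset + delay = 108; t4 = t3 - offset + delay = 198.
print(ptp_offset_and_delay(100, 108, 200, 198))  # (5.0, 3.0)
```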
l When an RTN NE works in TC+BC mode, it allows IEEE 1588v2 time signals to be transparently transmitted between Ethernet ports or over microwave links.
For the transparent transmission over a microwave link hop, the residence time of an
IEEE 1588v2 message is the total time for traversing the entire microwave link hop.
In this case, the two RTN NEs (one at either end of the microwave link hop) achieve
time synchronization between them through the microwave link and form a time
synchronization island. The two RTN NEs can be considered as a large TC node.
A network that supports transparent time transmission has the following characteristics:
l All RTN NEs along the transparent time transmission path work in TC+BC mode.
l RTN NEs transparently transmit time signals to downstream NEs through Ethernet ports.
l RTN NEs can transparently transmit multiple channels of time signals. When RTN NEs
transparently transmit IEEE 1588v2 messages, Ethernet services need to be created to
transmit them.
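The TC behavior described above (recording residence time in forwarded messages) can be sketched as follows. This is a simplified model, not device code; the message structure and field name are illustrative assumptions:

```python
def tc_forward(msg, ingress_time, egress_time):
    """A transparent clock accumulates the message's residence time
    (egress time minus ingress time) into its correction field
    before forwarding the message downstream."""
    msg["correction"] += egress_time - ingress_time
    return msg

# Two TC hops (for example, the two NEs of a microwave link hop acting
# together as one large TC node): residence times of 2 and 3 units
# accumulate into the correction field.
sync = {"correction": 0}
tc_forward(sync, ingress_time=5, egress_time=7)
tc_forward(sync, ingress_time=12, egress_time=15)
```

The downstream slave subtracts the accumulated correction from the apparent path delay, so time spent inside TC nodes does not distort the offset calculation.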
6.2.3 Specifications
This section provides the IEEE 1588v2 specifications that OptiX RTN 380A/380AX supports.
Table 6-3 IEEE 1588v2 specifications that OptiX RTN 380A/380AX supports
Item Specifications
Clock model l OC
l BC
l TC
l TC+BC
Self-limitations
Item Description
Precision Time Protocol (PTP) nodes: If IEEE 1588v2 is used for time synchronization among Precision Time Protocol (PTP) nodes, frequency synchronization must be implemented among these PTP nodes.
Delay measurement method for microwave ports: The P2P delay measurement method is always used for microwave ports.
Table 6-5 Dependencies and Limitations Between IEEE 1588v2 and Other Features
Feature Description
Link aggregation group (LAG): The precautions for applying IEEE 1588v2 transparent time transmission are as follows:
l If transparent clock (TC) ports are
interconnected, it is recommended that
the ports in a LAG should not be used as
TC ports. If a port in a LAG must be
used as a TC port, all ports in the LAG
must be configured as TC ports. If this
LAG works in load-sharing mode,
ensure that the physical links to all
member ports in the LAG have the same
length, or that delay is compensated for
so that all the physical links appear to be
of the same length.
l If TC ports are interconnected with
boundary clock (BC) ports, it is
recommended that the ports in a LAG
should not be used as TC ports. If a port
in a LAG must be used as a TC port, the
LAG must work in non-load sharing
mode.
central station through a synchronous Ethernet port and sends time information to the
central station using the IEEE 1588v2 protocol.
6.3.1 Introduction
This section describes the application and basic principles of ITU G.8275.1.
Definition
ITU-T G.8275.1 is a carrier-network PTP profile customized based on IEEE 1588v2, and is
typically used in mobile backhaul networking. ITU-T G.8275.1 introduces some modifications
and restrictions relative to IEEE 1588v2 and has the following major differences from it:
l ITU G.8275.1 supports only networkwide time synchronization but does not support
time transparent transmission.
l ITU G.8275.1 requires physical-layer clock synchronization.
l ITU G.8275.1 has its clock source selection algorithm and clock priorities customized
and optimized based on IEEE 1588v2.
NOTE
It is advisable to learn the IEEE 1588v2 feature before the ITU G.8275.1 feature.
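To illustrate the customized source selection mentioned above, the following sketch compares candidate time sources roughly in the spirit of the G.8275.1 alternate BMCA: clock class first, then accuracy, then priorities, with an operator-set local priority as a tie-breaker. The attribute set and ordering here are a simplified assumption for illustration, not the full algorithm:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    clock_class: int           # lower is better (e.g. 6 = PRTC-traceable)
    clock_accuracy: int        # lower is better
    priority2: int = 128       # lower is better
    local_priority: int = 128  # operator-set tie-breaker, lower is better

def best_source(candidates):
    # Simplified lexicographic comparison: the tuple that sorts
    # lowest wins the selection.
    return min(candidates, key=lambda c: (
        c.clock_class, c.clock_accuracy, c.priority2, c.local_priority))

gps = Candidate(clock_class=6, clock_accuracy=0x21)
holdover = Candidate(clock_class=7, clock_accuracy=0x21)
chosen = best_source([gps, holdover])
```

A source traceable to a primary reference (lower clock class) is preferred over one in holdover regardless of the configured priorities, which matches the profile's emphasis on traceability.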
Purpose
On transmission networks, ITU-T G.8275.1 provides µs-level networkwide time
synchronization. Therefore, ITU-T G.8275.1 can function as an alternative to the global
positioning system (GPS) or other complex timing systems, providing high-precision time for
NodeBs or eNodeBs. Figure 6-14 illustrates an application example wherein ITU-T G.8275.1
synchronizes the time to NodeBs distributed in a CDMA2000 or TD-SCDMA
communication system.
Figure 6-14 Time synchronization application example (RNC, BITS, and PTP nodes)
The following standards and protocols are associated with ITU-T G.8275.1:
6.3.3 Specifications
This section provides the ITU-T G.8275.1 specifications that OptiX RTN 380A/380AX
supports.
Table 6-6 ITU-T G.8275.1 specifications that OptiX RTN 380A/380AX supports
Item Specifications
Self-limitations
Item Description
Precision Time Protocol (PTP) nodes: If ITU-T G.8275.1 is used for time synchronization among Precision Time Protocol (PTP) nodes, frequency synchronization must be implemented among these PTP nodes.
Delay measurement method for microwave ports: The P2P delay measurement method is always used for microwave ports.
Table 6-8 Dependencies and Limitations Between ITU-T G.8275.1 and Other Features
Feature Description
7 Maintenance Features
This chapter describes various maintenance features supported by OptiX RTN 380A/380AX.
7.1.1 Introduction
This section defines Two-Way Active Measurement Protocol (TWAMP) Light and describes
its purpose.
Definition
TWAMP Light is a lightweight version of TWAMP. Unlike TWAMP, TWAMP Light uses a
simplified mechanism instead of the full control protocol to establish test sessions, and then
measures the round-trip performance of an IP network.
On conventional IP radio access networks (IP RANs), carriers urgently need a universal
tool that efficiently collects statistics about IP network performance for operation,
administration, and maintenance (OAM). To date, Network Quality Analysis (NQA) and
IP Flow Performance Measurement (IP FPM) have usually been used to collect statistics about
IP network performance. However, NQA does not allow for interconnection between devices
from different vendors, and IP FPM has high requirements on network devices and applies only
to limited scenarios. Both NQA and IP FPM are difficult to deploy. To resolve this issue, the
Internet Engineering Task Force IP Performance Metrics (IETF IPPM) working group defines a
set of protocols, including TWAMP.
Purpose
TWAMP Light is an IP link detection technology that is easy to deploy and use. It helps users
monitor network quality (latency, jitter, and packet loss rate). Compared with other
measurement technologies, TWAMP Light has the following advantages:
l Unlike network quality analysis (NQA), TWAMP has a unified measurement model and
packet format, facilitating deployment.
l Unlike Multiprotocol Label Switching Transport Profile (MPLS-TP) Operation,
Administration and Maintenance (OAM), TWAMP can be deployed on IP and MPLS networks.
l Unlike IP Flow Performance Management (FPM), TWAMP boasts higher availability
and easier deployment, and requires no clock synchronization.
l Enables carriers to rapidly and flexibly collect performance statistics for the entire IP
network if the NMS cannot collect the statistics.
l Collects performance statistics if the IP network does not support clock synchronization.
7.1.2 Principles
Two-Way Active Measurement Protocol (TWAMP) Light is implemented by packet exchange
between the controller and the responder.
l On-demand measurement works in a specified period after being manually started. It can
be performed once or periodically in the specified period.
l Proactive measurement works continuously to collect statistics after being started.
A TWAMP Light service must be established before TWAMP Light is used to collect
performance statistics.
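The controller/responder exchange can be sketched numerically. In TWAMP, the controller (session sender) stamps T1 when it transmits a test packet; the responder stamps T2 on receipt and T3 when it reflects the packet; the controller stamps T4 when the reflected packet returns. A minimal sketch, with timestamps as plain numbers (real test packets carry NTP-format timestamps):

```python
def round_trip_time(t1, t2, t3, t4):
    """Round-trip network delay, excluding the responder's
    processing time (t3 - t2). Because only differences of
    same-clock timestamps are used, no clock synchronization
    between controller and responder is required."""
    return (t4 - t1) - (t3 - t2)

# Example: packet sent at 0, reflected between 10 and 12, received at 25.
rtt = round_trip_time(t1=0, t2=10, t3=12, t4=25)
```

Jitter and packet loss rate are then derived from a series of such test packets: jitter from the variation in round-trip times, and loss from unanswered sequence numbers.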
7.1.4 Specifications
This section lists the TWAMP Light specifications that the OptiX RTN 380A/380AX
supports.
Number of reflectors 8
Number of large-traffic reflector instances 1
Feature Updates
Version Description for RTN 380A Description for RTN 380AX
Self-limitations
Item Description
QoS coupling Port shaping does not take effect for reflection ports on the
responder.
Table 7-3 Dependencies and limitations between TWAMP Light and other features
Feature Description
Link aggregation group (LAG): LAG can be configured on UNI ports on the responder. If LAG is configured on ports on the network side, the performance of only one physical link can be measured.
7.1.8 FAQs
This section answers FAQs about Two-Way Active Measurement Protocol (TWAMP) Light.
Q: What are the rules for setting IP routes for reflection ports?
A: IP routes are configurable only when reflection ports are on the same network segment as
base stations.