MPLS Design
Date: 2013-4-10
From:
Ahmet Akyamac, Bell Laboratories, +1 (732) 949-5413, [email protected]
Benjamin Tang, Bell Laboratories, +1 (732) 949-6477, [email protected]
Gary Atkinson, Bell Laboratories, +1 (732) 949-2920, [email protected]
Rafal Szmidt, LWS Europe, +48 22 692 38 86, [email protected]
Ramesh Nagarajan, Bell Laboratories, +1 (732) 949-2761, [email protected]
1. Introduction
This document presents methods and procedures for performing DiffServ traffic engineering (DS-TE) based MPLS network design and optimization using the Opnet SP Guru tool. The discussion here assumes that a network topology (node and link locations) is in place and does not consider greenfield design. As of the writing of this document, the latest version of SP Guru is 11.5, and the methods and procedures presented here assume this version of the tool. As later versions become available, the user should consult the product documentation for any possible changes. The following figure shows a flow diagram of the general MPLS network design and optimization methodology used in this document.
Figure 1: Flow diagram of the MPLS network design and optimization methodology (inputs: multi-class traffic and/or LSP information (VoIP, VPN, 3G, IA, etc.) and protection options (path, FRR, etc.); outputs: reports)
Typical inputs to the MPLS network design procedure include topology information (such as node locations, capacities/configurations, link types, connections, class type partitioning and subscription constraints) and traffic information (multi-class traffic and/or LSP information, and protection options such as path-based protection or FRR). The traffic engineering constraints (hop, delay, etc.) and design objectives (such as minimizing bandwidth or minimizing maximum subscription) are further inputs to the design process. Subsequent to network design, routing and performance analysis can be performed to estimate network performance in terms of capacity utilization and traffic quality of service measures such as delay and loss. As a final step, the design output information is collected through a series of reports, which can be analyzed to compare design results to design objectives and specifications. This could result in recommended changes to the network, which would then be used to perform further network studies as part of a closed-loop process (shown by the dashed lines in Figure 1). In the remainder of this document, we analyze each of the above blocks in detail and discuss the specific implementation methods and procedures in SP Guru. Each of the steps is presented through specific examples, which may involve the use of certain files. Some of these files will be made available at a later time as a package to be downloaded.
2. Launching SP Guru
As of the writing of this document, the license server for SP Guru is located on usil100014svr23.ih.lucent.com. When SP Guru is first launched, it will attempt to obtain an available license from this server. If successful, a splash screen as in Figure 2 is displayed.
Figure 2: Opnet SP Guru splash screen

To create a new project, use File->New and select type Project. In SP Guru, projects can contain multiple scenarios. The project name is the same as the file name. Each scenario could represent a different phase in the network design task, a different stage of the network, etc.
Many actions in SP Guru can be time consuming (depending on the size of the network), and most actions are irreversible. Thus, it is recommended practice to split different phases of the design project into scenarios and to save frequently. The next window will ask for a project and scenario name. At this point, enter a meaningful scenario name (e.g., manual_topology_input), but do not change the project name. Also, unselect the use of the startup wizard for this project. This action generates the scenario manual_topology_input and shows the network screen, which currently contains the world map as in Figure 3. Next, save the project in a preferred directory using File->Save As. Note that to access this project at a later time, the model directory the project was saved into has to be added using File->Model Files->Add Model Directory from the splash screen.
3. Topology Input

Network topology can be entered in three ways: manually through the GUI, via device configuration file import, or via text file import. These actions will add the node and link objects to the network model. In the following, we consider example scenarios for each of these actions.
3.1 Manual Topology Input
Objects can be added to the network model using the object palette, accessed via Topology->Open Object Palette or by using the icon with the paint brushes at the top left-hand side of the screen, as highlighted in Figure 3. The object palette tree can be used to access node and link objects (as well as other objects) in numerous ways, such as by device type, name, etc. Figure 4 shows the selection of a Cisco 12410 router from Cisco-Node Models under Shared Object Palettes. These palettes are available to all projects. Custom palettes can also be defined and saved.
Objects can be placed by selecting them in the palette and clicking a position on the map. Links can be placed by selecting a link object from the palette and clicking on two end nodes in the network view. Figure 5 shows two nodes placed at Philadelphia and Washington connected by a SONET OC-48 link (note that zooming into an area eventually enables viewing of major city names around that area). Also shown are the link attributes.
Figure 5: GUI view after manually adding two nodes and a link
For all objects, the associated attributes can be accessed by pointing the cursor at the object, right clicking and selecting Edit Attributes. This will open a window where attributes for the object can be accessed and changed. Clicking on Advanced will bring up an advanced set of attributes, mainly related to the visualization and placement attributes for the objects (such as size, color, etc.). The manual topology input method is practical mainly for small to medium size network models. For larger models, configuration file or text file import is the preferred method to enter network models. However, the manual entry method can be used to make incremental changes to an existing network model, such as adding new nodes, links, etc.
3.2 Configuration File Import
Network models can be imported to SP Guru using Cisco or Juniper configuration/configlet files. The config file import tab can be accessed using Topology->Import Topology->From Device Configurations. This will open up the tab shown in Figure 6, which contains options such as replacing the model, merging with the model, updating device configurations etc.
The directories are specified for each vendor (Cisco or Juniper), or a combination of the two. Additionally, config files arranged in directories can also be imported. In the following example, we import a set of Cisco config files from the directory MH_Lab_Cisco_Configlet_Files in the file package. These config files were collected from a Lucent test lab. Note that during the import process, there are two pairs of interfaces with unresolved data rates on their incident links. We can set these data rates to T1. The resulting network model is shown in Figure 7. Note the different links (serial links in black and Ethernet links in brown), Cisco devices (shown in blue), Ethernet hubs (shown in gray) and edge LAN models. This particular import also included some MPLS LSPs (shown in green). Configuration file import is a very useful and powerful method for importing entire networks into SP Guru, as many of the detailed network parameters are immediately populated (as opposed to being entered manually, which is very time consuming). However, from time to time, configlet files may have some errors or information conflicts. Under these types of circumstances, SP Guru usually prompts the user to manually enter any additional information that may be required to complete a successful network import.
3.3 Text File Import
Network models can also be entered using text file import (text file export can be used to output the network model). The text file import/export functionality was built into SP Guru 11.5 as a result of a customization requested by Lucent as part of the multi-class MPLS DS-TE capabilities. This functionality is specifically designed for MPLS networks and thus the imported objects are nodes and links that are part of the MPLS model and the imported options also pertain to MPLS. This functionality does not represent a general text network import/export feature. Furthermore, this text file import/export functionality also allows SP Guru to be used in conjunction with other internal
tools while performing studies, since the format defined for the import/export file is a simple field-based text format. There are three types of text import/export files; these files define the nodes, links and LSPs. Text file import can be accessed from Topology->Import Topology->From DS-TE Text Files. The import tab allows the user to specify which of the files are to be imported and the location of the files. An example import tab is shown in Figure 8. Note that we have unselected the LSP file; more information on LSP import is included later in this document. Note that the node, link and LSP information can be exported to text files using Topology->Export Topology->To DS-TE Text Files.
The node import file needs to specify the router name, router model (this has to match a node model name in the SP Guru router library), longitude, latitude and bandwidth constraint model. The bandwidth constraint model is the MPLS subscription model used on this node. For Cisco routers, enter the text Russian Dolls Model (RDM); for Juniper routers, use the text Maximum Allocation Model or the Extended MAM model (these models are discussed later in this document). Figure 9 shows a part of a node import file that will be used later in this document. Comments can be entered by prefixing a line with a #, and it is generally good practice to include meaningful field names. The link import file should specify the link name (optional; names are automatically assigned if they are not specified here), link model (this has to match a link model name in the SP Guru link library), source router, destination router, data rate (normally blank; this is used for link models which allow user-specified data rates), direction (a good practice is to always use BOTH), RSVP bandwidth percentage (the percentage of bandwidth that can be used for MPLS subscription), number of class types the bandwidth is partitioned into (for single class, this is 1), and a list of class types and corresponding bandwidth partitions. The partitioning is based on the RSVP reservable
bandwidth. Thus, if the RSVP bandwidth percentage of a link is 80%, then the class bandwidth partitioning will also be based on this 80% bandwidth (this bandwidth model, and the application of RDM or MAM will be discussed later in this document). Figure 10 shows a part of a link import file that will be used later in this document.
Figure 10: Part of a link import file showing the required fields
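As a rough illustration of the field-based layout described above (the exact syntax should be taken from Figures 9 and 10 and the tool documentation; all names and values below are hypothetical), node and link import files might look like the following:

# Node import file (hypothetical example)
# RouterName  RouterModel   Longitude  Latitude  BWConstraintModel
CoreA         Juniper_T640  -75.16     39.95     Maximum Allocation Model
CoreB         Juniper_T640  -77.04     38.90     Maximum Allocation Model

# Link import file (hypothetical example)
# Name  Model       Src    Dest   Rate  Dir   RSVP%  #CTs  CT/Partition list
L1      SONET_OC48  CoreA  CoreB        BOTH  80     2     ct0 70 ct1 30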
4. Traffic and LSP Input

When LSPs with bandwidth requirements are routed over the network, the link subscription is the portion of the link bandwidth that is subscribed by the LSPs. If traffic demand information is also included, then the traffic can be routed across the MPLS network on the LSPs. This results in traffic utilization across the links, which is the ratio of the bandwidth of the traffic traversing the link to the link's bandwidth capacity. Note that subscription and utilization are not necessarily equivalent, as the traffic being sent on an LSP can be controlled at the network ingress/entry point through policing mechanisms. Furthermore, there may be other traffic that takes normal IP routing paths and does not traverse the LSPs. The utilization on a link may therefore be higher or lower than the LSP subscription.
4.1 Traffic Input
Traffic information can be entered either manually through the GUI or by using spreadsheets. If traffic information has been collected using Cisco NetFlow or Cflowd, then this traffic information can also be entered using NetFlow or Cflowd data. In SP Guru, most simulations and analyses are based on defined time periods. A very common time period is an hour, which would typically correspond to a daily busy hour for the network. Thus, a traffic demand profile would show the change in traffic demand bit rates for the busy hour. The start and stop times for these analyses can be manually modified; however, it is always good practice to allow automatic setting of the analyzed hour throughout the tool. To import traffic using NetFlow or Cflowd data, use Traffic->Import Traffic Flows->From Cisco Netflow or Traffic->Import Traffic Flows->From Cflowd. SP Guru has the ability to import traffic from other traffic collectors, as can be seen from the additional menu items here. Figure 11 shows an example network for which traffic was imported using Cflowd data. In this figure, the traffic flows are shown in blue. One of the traffic flows is highlighted; the bit rate profile is retrieved directly from the Cflowd files.
Figure 11: Network view for an example network after flow import from Cflowd files
In the GUI, it is possible to select a group of nodes and create a full mesh of demands or single directional demands between them. Apart from IP unicast and multicast flows, it is also possible to generate traffic profiles for VoIP and MPLS VPN traffic. To access all these options, use Traffic->Create Traffic Flows. For data flows, the user can set the bit and packet rates, as well as the class of service fields (DSCP code point) and protocols. If specific ingress/egress interfaces are not specified, then the default loopback interfaces will be used. The user can also specify the flow duration, but this can be defaulted to the analysis timeframe of the network. The protocol field is used to determine overhead bytes, which are added to the data rate. For VoIP flows, the user can specify the call volume (in Erlangs), average call duration, flow duration, codec (G.711, G.729, etc.), class of service and protocols to determine header overhead. The flow creator will then create voice packet profiles based on the selected options. Figure 12 shows some sample VoIP options and the network of Figure 11 with VoIP flows added (shown in orange).
Figure 12: VoIP flow options and network view after VoIP flows have been generated
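As a rough sanity check on such profiles (the per-call figures below are illustrative assumptions, not tool defaults), the average VoIP bandwidth scales with the offered load:

average bandwidth ≈ call volume (Erlangs) × per-call rate

For example, for G.729 with 20 ms packets (50 packets/s), 8 kbps of payload plus 40 bytes of IP/UDP/RTP header per packet gives 8 + (40 × 8 × 50)/1000 = 24 kbps per call in each direction, so 10 Erlangs of offered load corresponds to roughly 240 kbps of average demand.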
Traffic demands can also be imported from text-based spreadsheets. These files specify much of the information available from the GUI, such as source and destination information, protocols, class of service, name, packet size, and bit rate profiles in the defined time units for the time duration the flows are defined for. To import traffic flows from spreadsheets, use Traffic->Import Traffic Flows->From Spreadsheet. The traffic flows can also be exported to spreadsheets using Traffic->Export Traffic Flows->To Spreadsheet. Thus, a traffic flow profile can be saved as a text-based spreadsheet to be used in other projects. In Figure 13, we show a sample text-based spreadsheet file (this text file corresponds to the data traffic from Figure 11) that highlights the required fields.
4.2 LSP Input
As in the case of the node data input, LSP data can be input either using configuration files from Cisco and Juniper routers, manually using the SP Guru GUI, or through text file import. Note that MPLS-capable nodes are also referred to as label switched routers (LSR). LSPs are path entities and are uni-directional. In Section 3.2, we showed an example where the configuration file import included LSP information. To manually enter LSPs, select the MPLS_E-LSP_DYNAMIC model from the MPLS palette, which is found under Paths in the main object palette. Once the LSP model is selected, create LSPs one by one by selecting a source LSR, intermediate LSRs (if necessary, by right clicking on each intermediate LSR to add it to the path), and a destination LSR (right click and cancel add action when done). Note that this creates a single LSP from source to destination. To create a pair of LSPs, a second LSP must be created in the opposite direction. Once all LSPs are entered, it is necessary to commit LSP information by selecting Protocols->MPLS->Update LSP Details. Additionally, it is necessary to modify the LSP parameters to configure traffic engineering parameters such as setup and holding priorities, class-based bandwidth requirements, etc. These attributes can be edited by right clicking on the LSP, and selecting Edit Attributes. In Figure 14, we show how to access the attributes for a newly created LSP. However, manual editing of LSPs is mainly practical for smaller networks or for networks for which incremental additions of LSPs are being made. For moderate to large size networks, manual entry of LSP information can become quite tedious. For these situations,
12
it is much more practical to use text file import, as described in Section 3.3. We discuss this method next.
The LSP import file should specify the LSP name (optional; names are automatically assigned if they are not specified here), LSP model (this has to match an LSP model name in the SP Guru LSP library; a typical model to use in most studies is MPLS_E-LSP_DYNAMIC), source LSR, destination LSR, hop limit constraint (optional), propagation delay constraint (optional), setup priority, holding priority, class type count (1 for single-class LSPs, the number of classes for multi-class LSPs; more on this aspect in the next section), and a list of class types and corresponding bandwidth requirements. Figure 15 shows a part of an LSP import file. Prior to importing LSPs, the LSP import check box in Figure 8 should be checked. Along with the node and link import files, the LSP import file enables the entire MPLS network topology to be imported into projects, exchanged between projects and input/output to different applications outside of SP Guru. These import files, in addition to the traffic flow import files, enable a complete network model to be prepared for optimization, design and analysis with little or no manual processing. In the next section, we will discuss the preparation of the MPLS network model through different architecture choices prior to optimization and design.
5. MPLS Network Model Preparation and Architecture Choices

Prior to optimization and design, several architecture choices must be specified:

- Link subscription limits: The RSVP bandwidth percentage determines the reservable bandwidth on a link; the total bandwidth subscribed by LSPs routed over the link cannot exceed this subscription level (note that for LDP emulation, this can be set to a very large number like 10000%).
- Link bandwidth partitioning: Depending on the bandwidth model, the partitioning of the link bandwidth among the different classes. The partitioning is performed on the link reservable bandwidth.
- LSP routing constraints: The hop and propagation delay constraints to be used for LSP routing.
- LSP class structure: Single-class LSPs (which specify bandwidth for a single class of traffic) or multi-class LSPs (which specify bandwidth for multiple classes of traffic simultaneously).
- LSP setup and holding priorities: These range from 0 to 7, 0 being the highest priority. Typically, the holding priorities are set to 1 and the setup priorities are set to a number between 1 and 7. The highest priority level of 0 is usually reserved for emergency purposes.
- LSP protection options: End-to-end protection or fast re-route (FRR). The FRR implementation in SP Guru Release 11.5 requires much manual input and is not currently viewed as practical for many of our network studies; it is not included as part of the optimization, although it is still being evaluated. Bell Labs internal tools will be integrated into the MPLS M&P in FY06 for a more preferred and flexible FRR optimization. For the time being, end-to-end path protection is used as the available protection option for optimization.
- Optimization objectives: These include minimizing total subscribed bandwidth, minimizing maximum subscribed bandwidth, and maximizing minimum residual bandwidth (the bandwidth remaining for subscription after the LSPs are routed).
5.1 Bandwidth Constraint Models
SP Guru supports the RDM model for Cisco routers and the MAM model for Juniper routers. For Cisco routers, the RDM model supports up to two classes of LSPs: ct0 and ct1. Two bandwidth pools are defined: the global pool and the sub-pool. The ct1 class LSPs can subscribe bandwidth in the sub-pool, and the ct0 LSPs can subscribe bandwidth in both the global pool and the sub-pool, if bandwidth is available in the sub-pool. LSPs of class ct1 are assigned higher setup priorities than LSPs of the ct0 class. Thus, the sub-pool is always available to the ct1 class, and to the ct0 class as long as there is residual bandwidth left over from the ct1 class. Let RB denote the link reservable bandwidth, let B1 denote the sub-pool bandwidth and B0 the remainder of the global pool, and let the class type subscriptions be referred to as R0 and R1. Then:

B0 + B1 = RB;  R1 ≤ B1;  R0 ≤ B0 + B1;  R0 + R1 ≤ RB.

In a mixed VoIP/data network, VoIP LSPs would be of the ct1 class and data LSPs would be of the ct0 class. In Figure 16, we illustrate the concepts of link reservable bandwidth and global pool and sub-pool bandwidth.
Figure 16: Link reservable bandwidth, with sub-pool bandwidth B1 (for ct1) and global pool bandwidth B0 + B1 (for ct0)
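As a numeric illustration of the RDM constraints (the figures here are invented for this example): suppose RB = 1000 Mbps with a sub-pool of B1 = 300 Mbps, so B0 = 700 Mbps. Then ct1 (e.g., VoIP) LSPs may subscribe up to R1 ≤ 300 Mbps, while ct0 (data) LSPs may subscribe up to R0 ≤ 1000 − R1 Mbps; if ct1 LSPs currently subscribe 200 Mbps, up to 800 Mbps remains available for ct0.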
For Juniper routers, the MAM model supports up to eight classes of LSPs: ct0 through ct7. In this model, the total LSP subscription Ri for each class is limited by the maximum bandwidth Bi available to that class. The total subscribed bandwidth among all classes cannot exceed the link reservable bandwidth RB. Thus, for 0 ≤ i ≤ 7:

Bi ≤ RB;  Ri ≤ Bi;  R0 + R1 + … + R7 ≤ RB.

An illustration of this bandwidth model is shown in the figure below: the available link bandwidth after overhead is partitioned into per-class allocations R0 through R7.
5.2 Preparing the Network Model
MPLS requires an underlying IGP protocol (such as OSPF) and traffic engineering is enabled through RSVP. Thus, prior to running the MPLS optimization and design procedures in SP Guru,
it is necessary to make sure these technologies are enabled. The following steps prepare the network model for MPLS optimization and design:

- All nodes need at least one loopback interface. In most network models, this interface is available by default. However, if loopback interfaces are not available, use Protocols->IP->Interfaces->Create Loopback Interface to create loopback interfaces on selected or all routers.
- Enable OSPF on all interfaces using Protocols->IP->Routing->Configure Routing Protocols and selecting the OSPF protocol. If loopback interfaces were manually created using the above step, enable OSPF on these interfaces using Protocols->IP->Routing->Configure Routing Protocols On Loopback Interfaces.
- If interfaces do not already have IP addresses assigned, use Protocols->IP->Addressing->Auto Assign IP Addresses.
- Enable MPLS on all interfaces using Protocols->MPLS->Configure Interface Status to configure MPLS on all router interfaces or interfaces on selected links.
- Enable RSVP on all interfaces using Protocols->RSVP->Configure Interface Status to configure RSVP on all router interfaces or interfaces on selected links.
- Commit all LSPs using Protocols->MPLS->Update LSP Details.
5.3 Link Class Partitioning and IP QoS Configuration
One of the input parameters for the mpls_ds_te design action is the transmission link traffic class partitioning (ct0 to ct7). This value describes how much of the link's available bandwidth can be allocated to a particular traffic class type, expressed as a percentage of link bandwidth that can be allocated to LSPs carrying that class type's traffic. When importing topology from DS-TE text files as described in Section 3.3, this information is stored in the link topology file as depicted in Figure 10. Link class partitioning is used by the mpls_ds_te design action as a design constraint: during LSP routing across the network, it influences the way LSPs are routed based on the per-class traffic limits existing on each transmission link. This parameter can be important when optimizing existing network LSP routing designs with particular link class partitioning already in place. Link class partitioning parameters are stored inside SP Guru 11.5 indirectly. These values are not available as part of the editable link parameters; rather, they are reflected in the IP QoS parameters of the transmitting node. The detailed mechanism depends on the particular vendor implementation, i.e., Cisco or Juniper, but follows an overall common approach: the link traffic class partitioning is reflected in the traffic scheduler queue configuration for a specific MPLS DiffServ traffic class (ct0 to ct7). Below is a short description of the scheduling algorithms available on the different routing platforms.
The primary Cisco queuing discipline is WFQ, with class-based and flow-based variants. Its configuration parameters are also available inside the IP QoS Parameters template available in the node configuration.
On Juniper platforms, the scheduler is DWRR; for example, the queue servicing ct0 traffic should be assigned 70% of the interface capacity, and two other queues servicing ct1 and ct2 traffic should be assigned 20% and 10% of the interface capacity, respectively. An example DWRR scheduler profile is pictured in Figure 19.
We have three DWRR scheduler queues with 70%, 20% and 10% of the output interface bandwidth assigned. Note that there are three queue priority levels: Low, High and Strict High. Strict High is a non-starving queue type. When importing the network topology from DS-TE files, the scheduler configuration is deployed without user intervention, so the DWRR profiles and queue bandwidth assignments are performed by the import script. This is sufficient for the mpls_ds_te design action to perform optimum LSP routing across the network. However, this configuration is not necessarily the one existing in the network that is to be optimized. Special care should be taken when queue priorities are considered: the import script will assign all queues one default priority, Low. If this is not the case in the network under study, then these parameters should be manually corrected. When the network is not imported from DS-TE text files, the queue profiles can be created manually for each node, or by using the SP Guru design action ip_qos_configuration available under the Protocol Configuration tree. This design action can apply a user-defined IP QoS template to all nodes and their interfaces in the network under study and is usually preferred to the manual method for moderate to large sized networks.
Traffic demands are used by the SP Guru simulation and analysis tools. The process of traffic demand creation was described in Section 4.1. One of the parameters that needs to be configured inside the IP QoS parameters template is the traffic class field. It allows traffic classification based on different markings such as DSCP or MPLS EXP header bits. An example of a traffic class definition is depicted in Figure 20.
The traffic class has a Match Info property that describes the traffic characteristics:
Figure 21 shows DSCP code point markings and forwarding class assignments. For the simulation and analysis modules, the traffic markings as configured here should refer to the types of traffic demands that are used in simulation and analysis. This allows the simulation and analysis engine to match traffic demands to the proper network queues used inside the scheduler. These traffic classes can be configured by the user or by the topology import script. In the latter case, user intervention is not required when using the design action mpls_ds_te, because the traffic class configuration is part of the overall IP QoS Parameters configuration. However, for simulation purposes (Flow Analysis), it is necessary to add a Match Info properties attribute inside the traffic class tables. This Match Info should describe the traffic actually used inside the SP Guru project, because the topology import script does not recognize the user traffic characteristics and applies only the Forwarding Class property to the configured traffic classes.
6. Multi-Class MPLS Network Optimization & Design, Performance Analysis and Reports
In this section, we discuss the multi-class network design action and performance analysis action, as described in Figure 1. The multi-class network design is performed using the mpls_ds_te design action. Performance and flow analysis is performed using the flow analysis (FLAN) module. We also discuss some of the reporting capabilities of SP Guru.
6.1 The mpls_ds_te Design Action
This design action performs the MPLS network optimization/design procedure. To access it, use Design->Configure/Run Design Action and choose the mpls_ds_te design action under the Traffic Engineering tab. This will bring up the parameters related to this design action; choose to edit the attributes to access them. The attributes are shown in Figure 22. Note that the attributes chosen for the LSPs in the mpls_ds_te design attributes override other parameters that may have been imported or set on the LSPs, and these attributes apply to all LSPs. The following is a description of some of these attributes:

- Bandwidth model: RDM or MAM, as discussed above. Note that this bandwidth model must match the bandwidth model on all the nodes in the network. Nodes with a different bandwidth model will be excluded from the design.
- Maximum considered paths: The algorithm uses a k-shortest paths approach to finding candidate routes for an LSP. This attribute specifies the parameter k. The default value of 8 should be sufficient for many design studies.
- Primary <-> Secondary ER Relationship: Specifies whether primary and backup paths are link, node or SRG disjoint. A value of none means that no backup paths will be computed.
- There are numerous fields under the Primary ER Computation tab:
  o Maximum Link Subscription: A multiplier factor for the maximum reservable bandwidth, used in determining the maximum primary LSP bandwidth subscription on a link. If the maximum reservable bandwidth is set to 80% of the interface, and this multiplier is set to 80%, then the overall maximum reservable bandwidth will be 64%. This multiplier is included as a convenience so that different levels of maximum reservable bandwidth can be analyzed by the design action without having to change all the interface settings.
  o Maximum Link Subscription Per Class Type: Additional multipliers for the maximum link subscription for each class type. These are applied after the Maximum Link Subscription multiplier is applied.
  o Link Cost Metric: Used for determining the k-shortest path candidate routes for the LSPs. The k candidates are selected based on the cost metric defined here. The options include TE link cost, OSPF link cost, hops, distance, propagation delay, etc.
  o Max Hops Per LSP: The LSP hop constraint, as discussed earlier.
  o Max Delay Per LSP: The LSP propagation delay constraint, as discussed earlier.
  o Optimization Objective: The overall objective of the optimization. The choices are minimize subscribed bandwidth, minimize maximum link subscription, and maximize residual bandwidth, as discussed earlier.
  o Advanced options include the number of random cases, random seed, LSP randomization bucket size and number of iterations. These are related to the optimization algorithm and are discussed below.
- Fields in the Backup ER Computation tab: The fields are the same as in the Primary ER Computation tab. Most field definitions are identical, but the subscription fields refer to the total bandwidth subscribed, including primary and backup bandwidth. A worked example of the subscription multipliers follows this list.
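To make the interaction of the multipliers concrete (the percentages here are invented for illustration): with the interface maximum reservable bandwidth at 80%, a Maximum Link Subscription of 80% and a ct1 per-class multiplier of 50%, the effective ceilings are

overall: 0.80 × 0.80 = 64% of the interface rate
ct1:     0.64 × 0.50 = 32% of the interface rate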
The TE LSP routing problem is NP-hard; the solution space for primary path routing grows roughly as 2^(|D|·|A|), where |D| is the number of LSPs and |A| is the number of unidirectional links (for example, in the typical design case study at the end of this document, there are |D| = 184 LSPs and 12 bidirectional links, so |A| = 24). The complexity for a protection design is higher and includes an additional factor relating to the number of failures to be considered. The primary/backup path routing problem for networks of the size considered in typical design studies is computationally very difficult, and finding provably optimal solutions is impractical. The mpls_ds_te design action therefore uses a heuristic approach based on LSP ordering to arrive at a quality solution satisfying the design objectives with a minimum cost objective. The following is a high-level overview of the steps taken by the heuristic algorithm (a sketch in code follows the list):

- The heuristic algorithm creates an initial LSP order, first by holding priority, then by setup priority, then by bandwidth (high to low). For equivalent entries, LSP names are used as tiebreakers. Then, the following are performed for each run:
  o The algorithm first perturbs or reorders the initial LSP order using the random seed and the bucket size. The LSPs are always reordered in groups that have the same holding and setup priorities. The bucket size determines the number of LSPs perturbed at the same time; a bucket size larger than the group will cause the whole group to be perturbed simultaneously. Also, if all LSPs have the same priorities and a bucket size greater than the number of LSPs is chosen, the random reordering will apply to the entire group of LSPs.
  o The primary LSPs are routed in the created order, and the LSP routes are determined based on the link cost metric, the number of candidate paths in the k-shortest paths procedure, and the optimization objective.
  o For iterations after the first, all LSPs are sequentially un-routed/re-routed with the aim of achieving a better minimum cost.
  o The algorithm then implements the above three steps for the secondary, or backup, LSPs to arrive at a complete solution for each run.

The minimum cost solution among all runs is selected as the overall design solution.
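The following is a minimal Python sketch of this ordering heuristic, written for illustration only: route_lsp, unroute_lsp and cost stand in for the tool's internal k-shortest-path routing step and cost evaluation, and all names here are assumptions rather than SP Guru APIs.

import random

def initial_order(lsps):
    # First by holding priority, then setup priority, then bandwidth
    # (high to low); the LSP name breaks remaining ties.
    return sorted(lsps, key=lambda l: (l.holding_priority,
                                       l.setup_priority,
                                       -l.bandwidth,
                                       l.name))

def perturb(order, bucket_size, rng):
    # Reorder only within groups sharing (holding, setup) priorities,
    # shuffling bucket_size LSPs at a time.
    result, i = [], 0
    while i < len(order):
        j = i
        prio = (order[i].holding_priority, order[i].setup_priority)
        while j < len(order) and (order[j].holding_priority,
                                  order[j].setup_priority) == prio:
            j += 1
        group = order[i:j]
        for k in range(0, len(group), bucket_size):
            bucket = group[k:k + bucket_size]
            rng.shuffle(bucket)
            group[k:k + bucket_size] = bucket
        result.extend(group)
        i = j
    return result

def design(lsps, runs, iterations, bucket_size, seed,
           route_lsp, unroute_lsp, cost):
    rng = random.Random(seed)
    best_routes, best_cost = None, float("inf")
    base = initial_order(lsps)
    for _ in range(runs):
        order = perturb(base, bucket_size, rng)
        for lsp in order:                  # initial sequential routing
            route_lsp(lsp)
        for _ in range(iterations - 1):    # re-route one by one, keeping gains
            for lsp in order:
                unroute_lsp(lsp)
                route_lsp(lsp)
        if cost() < best_cost:             # keep the cheapest run
            best_cost = cost()
            # l.route is assumed to be set by route_lsp
            best_routes = {l.name: l.route for l in lsps}
    return best_routes, best_cost

The same loop structure would then be repeated for the backup LSPs over the residual capacity left by the primary routing.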
Figure 23: Heuristic design procedure — randomized LSP routing orders are generated per run from the seed, LSPs are routed in order, and in each of the m iterations LSPs are unrouted/rerouted one by one to see if the objective can be improved; the solution with the lowest cost objective is chosen.
6.2 Flow Analysis
The Flow Analysis (FLAN) module generates the routing and forwarding tables based on the network configuration. It then routes all flows. As a result of the routing, interface level utilization information is generated. Numerous steady state performance analysis results are then inferred from this information and detailed reports are made available both in the GUI and through text-based spreadsheets. Prior to running FLAN, select all LSPs by right clicking on an LSP and choosing Select Similar Paths. Then, edit the attributes and make sure the field Announce IGP Shortcuts is enabled. Click on Apply Changes to Selected Objects to update this attribute on all LSPs. Enabling this mode on all LSPs ensures that traffic flows are routed on the LSPs during FLAN. The FLAN configuration screen is shown in Figure 24.
The interval size refers to the analysis interval and typically covers the intervals for all the flows. There are different options available for collecting performance statistics:

- When total traffic is highest: Corresponds to a busy interval in the network, such as peak total flow times.
- Using peak level for each demand: Ensures the network analysis covers the worst-case scenario, in which the peak rates for all flows must be carried simultaneously.
- Using average level for each demand: Corresponds to a mean or average utilization analysis, but does not cover flow traffic peaks.
- At specific interval: A snapshot at a certain network time.
Note that the FLAN module also has some MPLS options. The FLAN module is capable of implementing sequential routing for LSPs (does not employ any special optimization algorithms). However, once LSPs are routed using the mpls_ds_te module, their explicit routes (for both primary and backup LSPs) are locked on the head-end LSRs. Thus, MPLS processing through FLAN is not necessary. As part of the MPLS M&P, we use FLAN to route traffic flows and for traffic steady state performance analysis.
6.3 Reports
The mpls_ds_te and FLAN modules generate two sets of reports: logs, messages and overall design information; and detailed information about nodes, links, LSP routes, capacity and cost, flow routing and performance analysis. These reports can be accessed from Design->Results->View Reports and Flow Analysis->Results->View Reports. Most of the detailed reports are text-based spreadsheets and can be exported from the tool. Additionally, these reports contain drill-downs to further details and hyperlinks to the relevant node, link, LSP, flow, etc. objects in the GUI. Figure 25 shows some sample log and spreadsheet reports.
In Figure 28 and Figure 29, we show the list of reports available after FLAN and the mpls_ds_te design action, respectively, are implemented. In addition to the reports, there are numerous views in the GUI that can be useful for network analysis. Some examples include the link and LSP route views illustrated in Figure 26, and the link utilization view illustrated in Figure 27. Also, subsequent to running the mpls_ds_te design action or FLAN, the node, link, LSP, flow etc. models are updated in the network. The new or updated attributes for any object(s) can be viewed from the GUI by selecting the object(s), right clicking and selecting Edit Attributes.
Figure 26: Illustration of link and LSP route views in the GUI
7. Design Case Study: China Unicom MPLS Network

VPN traffic demands exist between each pair of core nodes. The 42 source-destination pairs are shown in Table 2, for a total VPN traffic of 15,655 Mbps. The total VPN traffic between each pair of nodes was split into the four VPN classes, for a total of 168 VPN demands, using the following percentages: 50% for VPN0, 25% for VPN1, 20% for VPN2 and 5% for VPN3. The VPN traffic was assumed to be of class types ct1 (for VPN0) to ct4 (for VPN3).
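For example, applying the split to the Beijing–Shanghai entry of Table 2 (1502.5): VPN0 = 0.50 × 1502.5 = 751.25, VPN1 = 0.25 × 1502.5 = 375.625, VPN2 = 0.20 × 1502.5 = 300.5 and VPN3 = 0.05 × 1502.5 = 75.125, giving four of the 168 (= 42 × 4) VPN demands.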
Table 2: Total VPN Traffic between core nodes (symmetric)

            Beijing  Shanghai  Guangzhou  Xian    Wuhan   Chengdu  Shenyang
Beijing        -      1502.5    2209.5    456.5   283     736      690
Shanghai    1502.5      -        451.5    108.5   68.5    170.5    160.5
Guangzhou   2209.5    451.5       -       149     94      234.5    221
Xian         456.5    108.5     149        -      23.5    58       54.5
Wuhan        283       68.5      94       23.5     -      36.5     34.5
Chengdu      736      170.5     234.5     58      36.5     -       85
Shenyang     690      160.5     221       54.5    34.5    85        -
The goal of the MPLS-TE design is to find a routing of the LSPs carrying multi-class IA and IP VPN traffic such that the network cost, defined by the total consumed link bandwidth, is minimized. In the subsequent descriptions, we first define the three design scenarios addressed, followed by the procedure used in the MPLS network TE design using SP Guru. Lastly, we present the TE design results with a comparison and further analysis.
7.1 Design Scenarios
For the MPLS network, all traffic demands are carried on LSPs. IA demands are carried on LSPs with zero TE bandwidth and are as a result routed over shortest paths. IP VPN demands are carried on LSPs with non-zero TE bandwidth, where the LSPs can be single- or multi-class depending on the architecture choice. In the case of a single-class LSP, the LSP carries traffic from a single grade of IP VPN service and contains a bandwidth request (with MPLS framing overhead included) for that grade of service, while in the case of a multi-class LSP, the LSP carries traffic from multiple grades of IP VPN service and contains multiple bandwidth requests, one for each grade of service. In either case, each LSP is assigned a setup and holding priority that, combined with the class type(s) associated with the LSP, is used to decide the routing priority of the LSP in the TE design. All LSPs are protected against single link failure by end-to-end path protection backup paths. The backup paths share capacity as long as the corresponding primary paths are link-disjoint.1 Both primary and backup LSPs are routed over the given MPLS network. On all network links, 100% of the available bandwidth (after deducting SONET framing overhead) is available to the routing of LSPs (link reservable bandwidth is 100%). In addition, the available bandwidth on a network link may be pre-partitioned among various class types (again depending on the architecture choice), and the routing of LSPs must be subject to this constraint. When class based partitioning of link bandwidth is applied, the MAM bandwidth model is used as the bandwidth constraint model on all network links. The routing of primary and backup LSPs is decided through traffic engineering with an objective of minimizing the total link bandwidth consumed (or subscribed) by the routing of LSPs.
1 In some cases, sharing of backup bandwidth is subject to the Shared Risk Group (SRG) constraint. That is, backup paths for link-disjoint primary LSPs cannot share capacity if some of the disjoint links from the primary LSPs are in the same SRG, i.e., those disjoint links are actually provisioned through physical conduits that have shared risk, such that one failure in the physical network may bring down all of the primary LSPs at the same time. Although we have the capability to address SRG in the protection design, we do not consider it in the case study.
Based on different architecture choices, three different design scenarios were considered in the MPLS TE design. These are summarized in Table 3 and Table 4 and discussed below.
Table 3: Summary of Design Scenarios

                      Class Based Link Bandwidth Partitioning
LSP Type              No              Yes
Single Class          Scenario 1      Scenario 2
Multi Class           -               Scenario 3

Table 4: LSP Setup Priorities

              VPN3     VPN2      VPN1      VPN0      IA
Scenario 1     1        2         3         4         7
Scenario 2     1        2         3         4         7
Scenario 3     5% BW    20% BW    25% BW    50% BW    7
              (one multi-class VPN LSP with setup priority 1; per-class BW shares shown)
Single-class LSPs allow for more granular bandwidth control, whereas multi-class LSPs present operational advantages.

Design Scenario 1

This design scenario corresponds to a traffic-engineered network with single-class LSPs and no class based partitioning of link bandwidth. Each LSP carries one traffic class; thus there are 5 types of LSPs: IA and VPN0 through VPN3. Each traffic type is bound to its corresponding LSP type; for example, VPN3 traffic is carried on VPN3 LSPs. As discussed above, IA LSPs have zero TE bandwidth. For VPN LSPs, the TE bandwidth is set to the amount of traffic carried (plus required MPLS overhead). The links are not partitioned among classes, and 100% of the link bandwidth is available for TE subscription. All LSPs have a holding priority of 1. The setup priorities are as follows: 1 for VPN3, 2 for VPN2, 3 for VPN1, 4 for VPN0 and 7 for IA LSPs. Thus, the VPN3 LSPs have the highest setup priority.

Design Scenario 2

This design scenario corresponds to a traffic-engineered network with single-class LSPs and class-based partitioning of link bandwidth. Each LSP carries one traffic class; thus there are 5 types of LSPs: IA and VPN0 through VPN3, the same as in Design Scenario 1. All of the link bandwidth is available for TE subscription, partitioned among the class types as follows: IA: 0%, VPN0: 50%, VPN1: 25%, VPN2: 20%, VPN3: 5%. Thus, the link partitioning uses a priori knowledge of the class type bandwidth requirements.
LSPs have the same holding and setup priorities as assigned in Design Scenario 1.

Design Scenario 3

This design scenario corresponds to a traffic-engineered network with multi-class LSPs and class-based partitioning of link bandwidth. Single-class LSPs are still used to carry IA traffic, while the four classes of VPN traffic are carried altogether on multi-class VPN LSPs. Each multi-class LSP contains bandwidth requests for each of the four classes of VPN traffic. All of the link bandwidth is available for TE subscription, partitioned as follows: IA: 0%, VPN0: 50%, VPN1: 25%, VPN2: 20%, VPN3: 5%, the same as in Design Scenario 2. All LSPs have a holding priority of 1. Single-class IA LSPs have a setup priority of 7 and multi-class VPN LSPs have a setup priority of 1.
7.2 Design Procedure
Using SP Guru, the following steps were followed in the MPLS TE design:

- Import the network topology and traffic demands for multiple CTs
- Import the LSP bandwidth requests for single or multiple CTs
- Run the design action to find routing paths of primary and backup LSPs with the objective of minimizing the total consumed link bandwidth
- Run flow analysis to place traffic onto the routed LSPs and collect network performance data (such as hop counts and link bandwidth subscription by both primary and backup LSPs)
The MPLS network topology (nodes and links) was imported to SP Guru via Juniper configlet files. Figure 30 shows a non-geographical display2 of the MPLS network, consisting of 7 Juniper T640 routers and 11 OC-48 links, as viewed via the graphical user interface (GUI) of SP Guru after the import of the configlet files. As shown in the figure, there is only one OC-48 link between Shenyang and Beijing. For redundancy purposes, a second OC-48 link between the pair was added in the MPLS TE design.
2 Since no coordinates are provided in the Juniper configlet files, the display of network nodes in Figure 30 does not reflect their geographical locations. The nodes were later moved to proper locations on the map (Figure 31).
Figure 30: China Unicom MPLS network after configlet file import to the tool
Label edge routers (LERs) were attached to each T640 node. For Design Scenarios 1 and 2, where single-class LSPs are used, four LERs (each corresponding to one grade of IP VPN) were attached to each T640, while in Design Scenario 3, where multi-class LSPs are used, one LER (corresponding to all four grades of IP VPN) was attached to each T640. Figure 31 shows the LERs attached to the T640s in Design Scenarios 1 and 2; e.g., the T640 in Beijing has 4 LERs attached: Beijing_VPN0 through Beijing_VPN3. In these scenarios, an IP VPN traffic demand of a particular grade between a pair of T640s is modeled as a demand between the pair of LERs that correspond to the same grade and are attached to the respective T640s. A single-class LSP to carry this demand between the LERs is determined in the TE design. The same traffic-to-LSP binding can be achieved in Design Scenario 3 by using multi-class LSPs between LERs. Note that in this modeling, each LER is attached to a T640 via two OC-48 links to make it feasible to find backup LSPs. In the study, we create LERs for IP VPN traffic only; IA traffic is carried by LSPs directly between T640s.
Figure 31: Network model for Design Scenarios 1 and 2 after modifications
The traffic demands and LSP information (including bandwidth requests) were imported to SP Guru via text file import. Figure 32 shows the network model in Design Scenario 3 after the traffic demands and LSP information are imported, where it can be seen that one LER is attached to each T640, multi-class LSPs for IP VPN traffic are drawn between LERs, and IA LSPs run between T640s.
Figure 32: Network model for Design Scenario 3 showing IA and multi-class VPN LSPs
7.3 TE Design
After the network topology, traffic demands and LSPs were imported, the routing of primary and backup LSPs was determined by the mpls_ds_te design action, which uses a heuristic approach (as described in Section 6.1) based on LSP ordering to arrive at a network design solution with a minimum cost objective. For this study, the cost is defined to be the total link bandwidth subscribed by the LSPs across the network. The outcome of the TE design is a set of explicitly routed LSPs, which can be evaluated based on the following metrics:

- LSP hop count
- Link TE subscription3
- Link utilization
Subsequent to the TE design, we ran the flow analysis (FLAN) available from the tool, where traffic demands were placed on the primary LSP ER paths and link utilization4 was measured. We did not conduct failure analysis in this study.
7.4 Study Results
SP Guru is capable of generating a rich set of class-based and summary reports on the outcome of the TE design and Flow Analysis. For example, Figure 33 to Figure 35 show portions of the LSP explicit routes and link TE subscription reports generated after the TE design, and the link utilization report generated after the flow analysis, all for Design Scenario 2.
Figure 33: LSP Explicit Routes Report for Design Scenario 2 (Partial)
3 Link TE subscription refers to the portion of a link's capacity that is reserved for all LSPs routed over the link during TE design. As discussed earlier, link subscription can be different from link utilization.
4 Link utilization, as opposed to link TE subscription, refers to the portion of a link's capacity that is consumed by actual traffic. Its measurement is usually obtained through a flow analysis where actual traffic is placed onto the network. A link's utilization may be higher or lower than its TE subscription.
Based on the information generated by the reports, we summarize the study results in the following descriptions, with a comparison among the three alternative Design Scenarios. The study results are presented based on the measurement of three metrics: LSP hop count, link TE subscription and link utilization.

LSP Hop Count

Minimum, average and maximum hop counts5 of primary and backup LSPs are summarized in Figure 36. Note that all primary LSPs were successfully routed in all design scenarios. However, in Design Scenario 1, backup paths failed for 5 VPN LSPs (corresponding to 5 unprotected VPN demands), while in Design Scenario 3, backup paths failed for 4 multi-class VPN LSPs (corresponding to 16 unprotected VPN demands). For Design Scenario 2, all backup LSPs were successfully routed.
5 The calculation of hop count excludes the one hop between LER and T640 that applies to IP VPN LSPs.
Figure 36: LSP Hops (in Core Links)

                            Primary                  Backup
                      Min    Avg      Max      Min    Avg        Max
Scenario 1   IA       1      1.125    2        1      2          3
(184 LSPs)   VPN      1      1.54     2        1      2.63(2)    5
             All      1      1.50     2        1      2.57(2)    5
Scenario 2   IA       1      1.125    2        1      2          3
(184 LSPs)   VPN      1      1.57     3        1      2.77       5
             All      1      1.53     3        1      2.71       5
Scenario 3   IA       1      1.125    2        1      2          3
(58 LSPs)    VPN      1      1.57     3        1      2.95(3)    5
             All      1      1.53(1)  3        1      2.86(1,3)  5

(1) Normalized to the number of demands per VPN LSP
(2) Backup paths failed for 5 LSPs (5 VPN demands unprotected)
(3) Backup paths failed for 4 LSPs (16 VPN demands unprotected)
For IP VPN, the hop count for primary and backup LSPs increases from Design Scenario 1 to Scenarios 2 and 3, since the class-based link bandwidth partitioning employed in Design Scenarios 2 and 3 reduces the chances of picking topologically shortest paths. The hop count for backup LSPs increases from Design Scenario 2 to Scenario 3, since it is more difficult in Design Scenario 3 to route multi-class backup LSPs with a much higher total bandwidth (equivalent to routing 4 single-class LSPs simultaneously) over the residual capacity left after the routing of primary LSPs, leaving the backup LSPs to be routed over longer paths. The same difficulty also results in the 4 un-routable backup LSPs in Design Scenario 3. For IA, the LSP hop counts are the same in all Design Scenarios, since IA LSPs with zero TE bandwidth are always routed over shortest paths.

Link TE Subscription by Primary and Backup LSPs

Link TE subscription, by primary and backup LSPs respectively, is summarized in Figure 37. In Design Scenario 1, where link bandwidth is not partitioned, LSPs of higher priority with smaller bandwidth get routed first6, causing those LSPs of lower priority with larger bandwidth to be routed over longer paths (this also accounts for the 5 un-routable backup IP VPN LSPs in Design Scenario 1, as noted in Figure 36). On the other hand, class-based link partitioning was employed in Design Scenarios 2 and 3, preventing higher priority LSPs from using up link capacity and leaving room for lower priority IP VPN LSPs to find shorter paths. As a result, the average link TE subscription by primary LSPs (referred to as primary link TE subscription in the subsequent discussion) in Design Scenario 1 is higher than those in Design Scenarios 2 and 3.
6 This behavior is a result of the internal optimization algorithm used. The algorithm is being enhanced to arrive at a better solution for the TE design.
The average primary link TE subscriptions in Design Scenarios 2 and 3 are identical, since the class-based link partitioning was chosen to be proportional to the distribution of multi-class traffic. Finally, the difficulty of routing multi-class LSPs, as mentioned above, led to a higher average backup link TE subscription for Design Scenario 3 as compared to the other two scenarios.
Figure 37: Link TE Subscription (%)

                   by Primary LSPs        by Backup LSPs         by All LSPs
Design Scenario   Min   Avg    Max      Min    Avg    Max      Min    Avg    Max
Scenario 1         -    41.32  98.86    0.00   35.78  64.63    52.73  77.02  99.90
Scenario 2         -    36.30  96.85    0.00   18.68  54.59    23.67  54.98  99.37
Scenario 3         -    36.30  96.85    0.00   38.07  85.87    52.29  74.37  97.81
Link Utilization by Primary LSPs

Link utilization7, in both the forward and return directions, by traffic on the primary LSPs is summarized in Figure 38. Note that some links have link utilization greater than 100%. This is because IA LSPs are routed over shortest paths with zero TE bandwidth in this study, leaving them with no bandwidth reservation on the links. On certain links where IA traffic is routed and there is a high level of IP VPN traffic, the link utilization will be greater than 100%. Counting both forward and return directions, the total link capacity consumed by traffic on primary LSPs is 28,257 Mbps for Design Scenario 1 and 25,632 Mbps for both Design Scenarios 2 and 3. As a general technical summary: Scenario 1 is best in terms of minimum hop counts; Scenario 2 is best if all LSPs are to be successfully protected; Scenario 2 is best in terms of overall TE subscription; and Scenarios 2 and 3 are best in terms of link utilization.
Figure 38: Link Utilization by Primary LSPs

                      Fwd Util (%)        Total Fwd Link            Rtn Util (%)        Total Rtn Link
Design Scenario   Min    Avg    Max     Capacity Consumed (Mbps)  Min    Avg    Max    Capacity Consumed (Mbps)
Scenario 1        12.36  55.51  117.84         14,520             0.00   52.52  116.19        13,737
Scenario 2        12.36  49.00  112.03         12,816             0.00   49.00  102.35        12,816
Scenario 3        12.36  49.00  112.03         12,816             0.00   49.00  102.35        12,816
7 The calculation of link utilization excludes those links between LERs and T640s.
Scenario 2 generates a TE design with the lowest link utilization and overall link capacity consumed while maintaining only slightly higher hop counts for primary and backup LSPs. We discuss further considerations for service providers in the next section.
7.5 Considerations for Service Providers
From a service provider's perspective, the various design scenarios addressed in the study represent different options for designing its MPLS network. Each of the network performance metrics shown above corresponds to a certain key requirement for the operation of the MPLS network. For example, a maximum LSP hop count for a particular class of traffic may be needed in order to meet the end-to-end delay requirement for that class of traffic (such as the maximum end-to-end delay for VoIP). Link subscription by LSPs reflects how efficiently the capacity of the MPLS network is used by multi-class traffic, and could be used to derive the total network cost or the cost per unit bandwidth carried. As the study showed, the various design scenarios led to different measurements of the network performance metrics. Depending on the particular requirements set by the service provider for the MPLS network, a certain design scenario will be the best option for the service provider to adopt. For example, Design Scenario 2 (with single-class LSPs and class-based link bandwidth partitioning), which generates a TE design with the lowest link utilization and overall link capacity consumed while maintaining only slightly higher hop counts for primary and backup LSPs, would be a good choice for a service provider looking to enhance MPLS network efficiency and reduce unit bandwidth cost. Single-class LSPs represent more granular routing, thus making better use of available capacity. However, from a service provider's point of view, multi-class LSPs could present operational advantages. While partitioning bandwidth may result in under-utilization of existing resources, it also creates fairness in that a certain amount of bandwidth is always available for each class type.
8. Interfacing SP Guru with Other Tools

When SP Guru is used in conjunction with other Lucent/Bell Labs tools and algorithms, it may be necessary to parse the output files from the other tools into a format compatible with Opnet. Furthermore, these import/export procedures may not be able to capture all required information for a given tool, and the missing information may need to be added manually or through the parser. However, both SP Guru and most internal tools employ simple text-based import/export formats, and converting between these formats is usually not overly complex.
Figure 39: SP Guru interface to other Lucent/Bell Labs tools and algorithms
In the next section, we present methods and procedures for performing MPLS network design using CPLEX. The input files required by CPLEX are very similar to the Node, Link and LSP text files exported by SP Guru, and the parsing process can be accomplished by making minor modifications to these text files in Excel.
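As an illustration of this kind of conversion, the following Python sketch turns an SP Guru DS-TE link export file into the arcs.txt format used by the CPLEX models described below. The column layout, the file names and the links_to_arcs function are assumptions for illustration; the field indices must be adjusted to match the actual export format (compare Figure 10).

def links_to_arcs(export_path, arcs_path):
    with open(export_path) as src, open(arcs_path, "w") as out:
        out.write("# Src Dest CAP\n")
        for line in src:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip comments and blank lines
            fields = line.split()
            # Assumed column layout (adjust to the actual export):
            # name  model  src  dest  rate  direction  ...
            name, model, src_node, dst_node, rate, direction = fields[:6]
            out.write(f"{src_node} {dst_node} {rate}\n")
            if direction.upper() == "BOTH":
                # a bidirectional link yields two unidirectional arcs
                out.write(f"{dst_node} {src_node} {rate}\n")

# Hypothetical usage:
# links_to_arcs("links_export.txt", "arcs.txt")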
9. MPLS Network Design Using CPLEX

The MPLS TE design problem can be formulated as a mixed integer linear program (Figure 40) using the following notation:

Sets and parameters:
  N        node set
  A        set of (directed) arcs; arc a = (i,j) ∈ A ⊆ N × N
  Ca       capacity of arc a
  D        demand set
  sd (td)  source (destination) node of demand d
  Bd       bandwidth of demand d

Variables:
  u ∈ [0, 1]   the maximum utilization on any arc
  xda          1 if demand d uses arc a; 0 otherwise

Objective: minimize  α·u + β·Σ(d∈D) Σ(a∈A) Bd·xda

Note: for Bandwidth Minimization, set u = 1, α = 0, β = 1; for Load Leveling, let u be arbitrary, α = 1, β << 1.

Connectivity (unit flow) conservation at each node:
  Σ((n,j)∈A) xd,(n,j) − Σ((i,n)∈A) xd,(i,n) = δ(n,sd) − δ(n,td),   for all n ∈ N, d ∈ D

Total flow on an arc cannot exceed the utilizable arc capacity:
  Σ(d∈D) Bd·xda ≤ u·Ca,   for all a ∈ A
Figure 40: Mixed Integer Linear Program formulation of the MPLS TE design problem.
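To make the formulation concrete, the bandwidth-minimization variant (u = 1, α = 0, β = 1) can be written out by hand in CPLEX's standard LP file format; the toy instance below (two nodes, two opposite arcs of capacity 100, one demand of bandwidth 40 from n1 to n2) is invented purely for illustration.

\ Toy instance: nodes n1, n2; arcs a12 = (n1,n2) and a21 = (n2,n1),
\ each of capacity 100; one demand d (n1 -> n2, Bd = 40).
Minimize
 obj: 40 x_d_a12 + 40 x_d_a21
Subject To
 flow_n1: x_d_a12 - x_d_a21 = 1
 flow_n2: x_d_a21 - x_d_a12 = -1
 cap_a12: 40 x_d_a12 <= 100
 cap_a21: 40 x_d_a21 <= 100
Binary
 x_d_a12
 x_d_a21
End

Solving this with CPLEX yields x_d_a12 = 1 and x_d_a21 = 0 at objective value 40, i.e., the demand is routed directly on arc a12.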
Finding optimal solutions to such problems can be difficult because of the many possible link combinations that must be considered to construct routes that do not violate the link capacities. These problems are categorized as NP-hard. In practical terms, this means that the amount of computational work required to find optimal solutions may increase exponentially with some measure of the problem size, e.g., the number of nodes, in which case finding optimal solutions can be impractical for even modest-sized problems. Because of this difficulty, tools developed for solving NP-hard network design problems often use approximate or heuristic methods to search among the possible solutions. Such methods generate feasible solutions, and may even find optimal ones, although they usually cannot guarantee provable optimality no matter how much computational effort is applied. SP Guru and INDT are examples of tools that employ heuristic methods; for example, we illustrated the heuristic method used by SP Guru in Figure 23. There are also tools that can find provably optimal solutions. CPLEX is an example: it employs a search method known as branch-and-bound (or branch-and-cut) that systematically searches the set of feasible possibilities for an optimal solution. In principle, branch-and-bound methods can find optima when they exist; in practice this can take an unacceptably long time, but the technique produces feasible solutions of increasing quality during the search process. There is thus typically a tradeoff between solution quality (relative to optimality) and solution time: techniques such as branch-and-bound can produce higher quality solutions than heuristic methods, but can also take more time to obtain them. Depending on the task, one tool may provide better quality solutions more quickly than the others. Additionally, integer programming models are customizable, so features and capabilities can be built in that are not currently available in other tools. As such, we provide both heuristic and integer programming options for the network designer to use as appropriate. In the next section we provide an introductory application of integer programming methods for MPLS design using CPLEX.
Pathplan: when solved, this model provides unidirectional paths (routes) for LSPs based on a user-specified weighted mix of two optimization criteria: (1) minimizing total bandwidth subscription on all links, and (2) minimizing the maximum utilization on any link.

Pathplan-sym: similar to Pathplan, but this model allows the user to impose symmetric routing when forward and reverse LSPs are required between a source-destination node pair, i.e., the reverse LSP is routed parallel to, but oppositely directed from, the forward LSP.

To use these models, three text input files need to be provided:

nodes.txt: a list of network nodes
arcs.txt: a list of unidirectional network arcs and their capacities
demands.txt: a list of LSPs to be routed, including LSP identifier, source node, destination node, and LSP bandwidth

Sample formats for these text files are shown in Figure 41 for the CU network discussed previously. These files can be prepared in Excel from the node, link and LSP files exported by SP Guru.
# File: nodes.txt
#
# Nodes for CU
#
# Name
Beijing
Shenyang
Shanghai
Wuhan
Chengdu
Guangzhou
Xian

# File: arcs.txt
#
# Unidirectional arcs for CU
#
# Src        Dest        CAP
# Forward
Shenyang     Beijing     2377.70
Beijing      Xian        2377.70
Beijing      Shanghai    2377.70
Beijing      Chengdu     2377.70
Beijing      Wuhan       2377.70

# File: demands.txt
#
# Unidirectional LSPs for CU
#
# LSP Name             Src        Dest        BW
Beijing_Guangzhou_0    Beijing    Guangzhou   1151.37
Guangzhou_Beijing_0    Guangzhou  Beijing     1151.37
Beijing_Shanghai_0     Beijing    Shanghai     782.95
Shanghai_Beijing_0     Shanghai   Beijing      782.95
Beijing_Guangzhou_1    Beijing    Guangzhou    575.69
Figure 41: Formats of input files for the AMPL mixed integer programming models for MPLS design. Shown from top to bottom are sample files for nodes.txt, arcs.txt, and demands.txt.
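Before running the models, it can be worthwhile to sanity-check the three input files for consistency. The following is a hypothetical Python sketch, assuming the whitespace-separated formats and file names shown in Figure 41.

# Hypothetical consistency check for the nodes.txt / arcs.txt / demands.txt
# inputs described above: every arc and demand must reference known nodes.
def read_rows(path):
    with open(path) as f:
        return [line.split() for line in f
                if line.strip() and not line.startswith("#")]

nodes = {row[0] for row in read_rows("nodes.txt")}
for src, dest, cap in read_rows("arcs.txt"):
    assert src in nodes and dest in nodes, f"unknown node on arc {src}->{dest}"
    assert float(cap) > 0, f"non-positive capacity on arc {src}->{dest}"
for name, src, dest, bw in read_rows("demands.txt"):
    assert src in nodes and dest in nodes, f"unknown node on LSP {name}"
print("input files are consistent")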
Once these input files have been developed, the models can be run. To execute the (non-symmetric) MPLS design model, simply type ampl pathplan at the system prompt. This command invokes the AMPL script, which reads the model and the input data, compiles a problem instance, and invokes CPLEX to solve it. The script is currently set up to solve the problem repeatedly, producing solutions of increasing quality as indicated by the improving objective value of each subsequent solution. The script continues to solve the model until a termination criterion is reached; currently, that criterion is set at a 0.05% separation, or gap, between the objective value of the best integer solution and the objective value of the (linear programming) lower bound. A sample output trace of CPLEX during the solving process is shown in Figure 42. The gap is indicated in the rightmost column of the trace.
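As a worked check of the gap computation (our reading, consistent with the values in the trace and in pp.val): the gap is (best integer objective - lower bound) / best integer objective. For example, with best integer objective 18850.70 and lower bound 18839.00, the gap is (18850.70 - 18839.00) / 18850.70, or about 0.062%, which rounds to the 0.06% shown in the final row of the trace; with the earlier incumbent 18862.17, the same computation gives about 0.12%.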
Setting up to solve...
Presolve eliminates 258 constraints and 180 variables.
Adjusted problem:
3181 variables:
        3180 binary variables
        1 linear variable
2134 constraints, all linear; 18816 nonzeros
1 linear objective; 3180 nonzeros.

CPLEX 7.5.0: integrality=1e-9
MIP emphasis: integer feasibility

   Node  Nodes Left    Objective  IInf  Best Integer  Cuts/Best Node  ItCnt   Gap
      0           0   18839.0000     6    18862.1700      18839.0000    762  0.12%
                      18839.0000    12    18862.1700         Cuts: 8    786
      1           1   18843.1581     9    18862.1700      18839.0000    796  0.12%
      2           2   18846.7900    11    18862.1700      18839.0000    808  0.12%
      3           3   18849.0081     3    18862.1700      18839.0000    814  0.12%
      4           2   18850.7000     0    18850.7000      18839.0000    819  0.06%

Gomory fractional cuts applied: 2
Using devex.
Iteration log . . .
Iteration:     1   Objective = 18850.700000

Times (seconds): Input = 0.37  Solve = 0.36  Output = 0.1
CPLEX 7.5.0: mixed-integer solutions limit; objective 18850.7
819 MIP simplex iterations
5 branch-and-bound nodes
solve_result_num = 420
solve_result = limit

Figure 42: Sample output trace of CPLEX during the solving process.
Alternatively, if the user wishes to terminate the process prematurely, simply enter <CTRL-Z>. Provided at least one integer solution has been found, the most recent solution is saved in the output file pp.out. This file can be used to extract the solution for later import into SP Guru; reformatting of the output file is required. A sample of the output file is shown in Figure 43.
Max Link Load =    99.7540%
Total BW      = 18850.7000
Objective     = 18850.7000  (W=0.0000, B=1.0000)
Lower Bnd     = 18839.0000

Link Usage:
  Src         Dest         BW        (Util %)
  Shenyang    Beijing      1298.0700 (54.594%)
  Beijing     Xian          617.4900 (25.970%)
  Beijing     Shanghai     2097.4300 (88.213%)
  Beijing     Chengdu      1175.9800 (49.459%)
  Beijing     Wuhan         461.2700 (19.400%)
  Xian        Shanghai      268.3600 (11.287%)
  Beijing     Guangzhou    2371.8500 (99.754%)
  Chengdu     Guangzhou     290.4700 (12.216%)
  Wuhan       Guangzhou      99.8600 ( 4.200%)
  Shanghai    Guangzhou     744.5700 (31.315%)
  Beijing     Shenyang     1298.0700 (54.594%)
  Xian        Beijing       617.4900 (25.970%)
  Shanghai    Beijing      2139.9300 (90.000%)
  Chengdu     Beijing      1131.8100 (47.601%)
  Wuhan       Beijing       462.9400 (19.470%)
  Shanghai    Xian          268.3600 (11.287%)
  Guangzhou   Beijing      2371.8500 (99.754%)
  Guangzhou   Chengdu       246.3000 (10.359%)
  Guangzhou   Wuhan         101.5300 ( 4.270%)
  Guangzhou   Shanghai      787.0700 (33.102%)

Primary Hops Max/Avg: 3 / 1.5476

Demands:
  Beijing_Guangzhou_0    1151.37
  Guangzhou_Beijing_0    1151.37
  Beijing_Shanghai_0      782.95
  Shanghai_Beijing_0      782.95
  Beijing_Guangzhou_1     575.69
  Guangzhou_Beijing_1     575.69
  Beijing_Shenyang_0      359.56
  Shenyang_Beijing_0      359.56

Figure 43: Sample of the output file pp.out.
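Since reformatting pp.out for import back into SP Guru is currently a manual step, it could also be scripted. The following is a hypothetical Python sketch that extracts the Link Usage table, assuming the layout shown above; the exact line format should be verified against the actual file.

# Hypothetical extraction of the Link Usage table from pp.out, assuming
# "Src Dest BW (Util %)" rows following the "Link Usage:" line.
import re

rows = []
with open("pp.out") as f:
    in_table = False
    for line in f:
        if line.startswith("Link Usage:"):
            in_table = True
            continue
        if in_table:
            m = re.match(r"\s*(\S+)\s+(\S+)\s+([\d.]+)\s+\(\s*([\d.]+)%\)", line)
            if m:
                src, dest, bw, util = m.groups()
                rows.append((src, dest, float(bw), float(util)))
            elif rows:
                break  # past the end of the table

# Emit a tab-separated table, e.g. for pasting into Excel / SP Guru import
for src, dest, bw, util in rows:
    print(f"{src}\t{dest}\t{bw:.2f}\t{util:.3f}")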
Also output are the values of all solutions found and the progression of the optimization process as the solver seeks improvement. This output is provided in the file pp.val, a sample of which is shown in Figure 44.
Start time: Sat Oct 22 05:40:45 EDT 2005

  Max Util     Total BW       Obj Fcn     Lower Bnd     Gap     Soln Time Stamp
  99.755%    18862.1700    18862.1700                           10/22/05 05:40:50
  99.754%    18850.7000    18850.7000   18839.0000   0.123%     10/22/05 05:40:52
  99.755%    18850.6800    18850.6800   18839.0000   0.062%     10/22/05 06:03:48
  99.755%    18850.6600    18850.6600   18839.0000   0.062%     10/22/05 06:03:51
  99.755%    18850.6600    18850.6600   18839.0000   0.062%
Figure 44: Sample of the output file pp.val showing progression of the optimization.
An additional feature of this model is that the user can change the objective function from total bandwidth minimization to minimization of the maximum link utilization (note that these are equivalent to the objective options available in SP Guru). This is done by modifying the B and W parameters in the file pathplan.parm, as shown in Figure 45. Setting W=0 results in an objective that is purely total bandwidth minimization, while setting B=0 produces an objective that is purely minimization of the maximum link utilization. Setting both parameters to non-zero values yields an objective that is a weighted mix of the two; this is useful if, for example, one wants to minimize both objective terms simultaneously. This weighted implementation of simultaneous objectives is not available in SP Guru.
### File: pathplan.parm
### Parameters for model PATHPLAN (v2005.05.21)
### G. Atkinson, 2005.05.21

#param W := 100000;   # Weight for max link utilization objective term
param W := 0;         # Weight for max link utilization objective term
param B := 1;         # Weight for bandwidth objective term
Figure 45: Sample of the parameter file pathplan.parm showing the B and W parameters.
An identical process is used for the model pathplan-sym. Note that the import and export procedures, and the associated file formatting required to provide inputs to or extract results from the model solutions, are currently done manually. However, this process can, and likely will, be streamlined in the near future by automating it with Opnet's ODK, available with SP Guru.
The failed objects can be selected from the GUI by right-clicking on the objects and selecting to fail them, or the failure analysis module can iterate through all single and pair-wise link, node and/or shared risk group failures and report on the comparative impact of each failure. An arbitrary number of selected objects can also be failed simultaneously, but most practical failure analyses involve single or double failures. Figure 47 shows a sample report generated by the failure analysis module after iterating over all single link failures, summarizing the impact of these failures.
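The iteration over single and pair-wise failures is conceptually simple. The following is a hypothetical Python sketch of generating the failure scenarios (this is not SP Guru's internal API, and the object names are placeholders).

# Hypothetical enumeration of single and pair-wise failure scenarios over a
# set of network objects (links, nodes, shared risk groups). SP Guru's
# failure analysis module performs an equivalent iteration internally.
from itertools import combinations

objects = ["link_Beijing_Xian", "link_Beijing_Wuhan", "node_Wuhan"]

def failure_scenarios(objs, max_simultaneous=2):
    # Yield every combination of 1..max_simultaneous objects to fail.
    for k in range(1, max_simultaneous + 1):
        yield from combinations(objs, k)

for scenario in failure_scenarios(objects):
    # In a real study, each scenario would be applied to the network model,
    # traffic rerouted, and the impact (dropped LSPs, utilization) recorded.
    print("fail:", ", ".join(scenario))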
10.2 NetDoctor
NetDoctor is a module available in SP Guru that can be used to analyze configuration errors, policy violations and inefficiencies in a network. The network configuration and setup are compared against a set of policies and rules, and the results of the analysis are reported in HTML format; this report can then be consulted to make changes and corrections to the network configuration. To run NetDoctor, select NetDoctor->Configure/Run NetDoctor. This loads the pre-defined rules and opens a window in which the rules to check can be selected. In Figure 48, we show an example NetDoctor configuration window with the IP and MPLS rules selected. It is also possible to create custom rules, but those methods are beyond the scope of this document.
In Figure 49, we show a sample NetDoctor report summary page listing the potential issues found in a customer study. Each field can be drilled into to access specific reports; Figure 50 shows a portion of the detailed report related to static routes. For this particular network, the warnings highlighted issues such as unadvertised interfaces, static route definitions with unknown next hops, overlapping subnets, and network statements referencing invalid interfaces.
Note that unreachable interfaces may be due to configuration errors; a detailed analysis of the configuration can be performed using NetDoctor to identify the problems.