Unit 4 High Speed Network
Quality of Service (QoS) Support
Quality-of-Service (QoS) refers to traffic control mechanisms that seek to either
differentiate performance based on application or network-operator
requirements or provide predictable or guaranteed performance to applications,
sessions, or traffic aggregates. The basic phenomena QoS deals with are packet
delay and packet losses of various kinds.
Need for QoS –
Video and audio conferencing require bounded delay and loss rate.
Video and audio streaming requires a bounded packet loss rate; it may not be
as sensitive to delay.
Time-critical applications (e.g., real-time control), for which bounded delay is
considered an important factor.
Valuable applications should be provided better services than less valuable
applications.
QoS Specification –
QoS requirements can be specified as:
1. Delay
2. Delay Variation (Jitter)
3. Throughput
4. Error Rate
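To make these four parameters concrete, the following Python sketch checks a flow's measured performance against a requested specification. The names (QoSSpec, meets_spec) and the units are illustrative only, not part of any standard API.

```python
from dataclasses import dataclass

@dataclass
class QoSSpec:
    """Requested QoS for a flow (illustrative units)."""
    max_delay_ms: float         # one-way delay bound
    max_jitter_ms: float        # delay-variation bound
    min_throughput_kbps: float  # sustained rate needed
    max_error_rate: float       # acceptable loss/error fraction

def meets_spec(spec: QoSSpec, delay_ms, jitter_ms, throughput_kbps, error_rate) -> bool:
    """Return True if the measured values satisfy the requested specification."""
    return (delay_ms <= spec.max_delay_ms
            and jitter_ms <= spec.max_jitter_ms
            and throughput_kbps >= spec.min_throughput_kbps
            and error_rate <= spec.max_error_rate)

# Example: a VoIP-like flow asking for 150 ms delay, 30 ms jitter, 64 kbps, 1% loss.
voip = QoSSpec(max_delay_ms=150, max_jitter_ms=30, min_throughput_kbps=64, max_error_rate=0.01)
print(meets_spec(voip, delay_ms=120, jitter_ms=12, throughput_kbps=80, error_rate=0.002))  # True
```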
There are two types of QoS Solutions:
1. Stateless Solutions –
Routers maintain no fine-grained state about traffic; the positive side of this is
that it is scalable and robust. The drawback is that the services are weak, since
there is no guarantee about the delay or performance a particular application
will experience.
2. Stateful Solutions –
Routers maintain per-flow state; because each flow is known, the network can
provide powerful services such as guaranteed service, high resource utilization,
and protection, but the approach is much less scalable and robust.
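As a rough illustration of this trade-off (not taken from any real router implementation), the sketch below contrasts a stateful approach that keeps one entry per flow with a stateless/aggregate approach that keeps only a few per-class counters.

```python
from collections import defaultdict

# Stateful: one entry per flow (5-tuple). Fine-grained guarantees are possible,
# but state grows with the number of active flows, limiting scalability.
per_flow_state = {}  # (src_ip, dst_ip, src_port, dst_port, proto) -> reserved_kbps

def reserve_stateful(flow, kbps):
    per_flow_state[flow] = kbps

# Stateless/aggregate: only a handful of per-class counters, however many flows exist.
per_class_bytes = defaultdict(int)  # class name -> bytes forwarded

def forward_stateless(traffic_class, packet_len):
    per_class_bytes[traffic_class] += packet_len  # no per-flow bookkeeping

reserve_stateful(("10.0.0.1", "10.0.0.2", 5004, 5004, "UDP"), kbps=64)
forward_stateless("voice", packet_len=200)
print(len(per_flow_state), dict(per_class_bytes))
```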
Integrated Services (IntServ) –
1. An architecture for providing QoS guarantees in IP networks for individual
application sessions.
2. Relies on resource reservation, and routers need to maintain state
information of allocated resources and respond to new call setup requests.
3. Network decides whether to admit or deny a new call setup request.
IntServ QoS Components –
Resource reservation: call setup signaling, traffic and QoS declaration, per-
element admission control.
QoS-sensitive scheduling, e.g., the WFQ (Weighted Fair Queuing) queue discipline.
QoS-sensitive routing algorithm (QSPF).
QoS-sensitive packet discard strategy.
RSVP-Internet Signaling –
It creates and maintains distributed reservation state, is initiated by the receiver,
and scales for multicast. The state is soft: reservations must be refreshed
periodically or they time out. Paths are discovered through PATH messages
(forward direction) and used by RESV messages (reverse direction).
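The "soft state" idea can be illustrated with a small sketch (the SoftStateTable class below is hypothetical, not RSVP's actual data structure): a reservation survives only as long as refreshes keep arriving before the timeout.

```python
import time

class SoftStateTable:
    """Reservations expire unless refreshed within `timeout` seconds (soft state)."""
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_refresh = {}  # session id -> time of last PATH/RESV refresh

    def refresh(self, session):
        self.last_refresh[session] = time.monotonic()

    def purge_expired(self):
        now = time.monotonic()
        expired = [s for s, t in self.last_refresh.items() if now - t > self.timeout]
        for s in expired:
            del self.last_refresh[s]  # the reservation silently times out
        return expired

table = SoftStateTable(timeout=30.0)
table.refresh("session-1")     # receiver-initiated refresh keeps the reservation alive
print(table.purge_expired())   # [] -- still fresh
```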
Call Admission –
A session must first declare its QoS requirements and characterize the traffic it
will send through the network.
R-specification: defines the QoS being requested, i.e. what kind of bound
we want on the delay, what kind of packet loss is acceptable, etc.
T-specification: defines the traffic characteristics, such as burstiness in the traffic.
A signaling protocol is needed to carry the R-spec and T-spec to the routers
where reservation is required.
Routers will admit calls based on their R-spec and T-spec and on the resources
currently allocated at the routers to other calls.
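A minimal admission-control sketch, assuming the T-spec is a simple token-bucket (rate, bucket size) description and that a router admits a call only if the sum of admitted rates stays below a configured share of link capacity. The names TSpec, AdmissionController, and the 80% reservable fraction are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TSpec:
    rate_kbps: float     # sustained token-bucket rate
    bucket_kbits: float  # burst (bucket) size

class AdmissionController:
    def __init__(self, link_kbps, reservable_fraction=0.8):
        self.capacity = link_kbps * reservable_fraction  # share set aside for reservations
        self.admitted_kbps = 0.0

    def admit(self, tspec: TSpec) -> bool:
        """Admit the call only if the aggregate reserved rate still fits."""
        if self.admitted_kbps + tspec.rate_kbps <= self.capacity:
            self.admitted_kbps += tspec.rate_kbps
            return True
        return False

ac = AdmissionController(link_kbps=10_000)                  # 10 Mbps link, 80% reservable
print(ac.admit(TSpec(rate_kbps=4_000, bucket_kbits=100)))   # True
print(ac.admit(TSpec(rate_kbps=5_000, bucket_kbits=100)))   # False: would exceed 8 Mbps
```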
Diff-Serv –
Differentiated Services provides reduced-state service: instead of maintaining
per-flow, end-to-end state, routers keep state only for larger-granularity traffic
aggregates. In this way it tries to achieve the best of both worlds (a small sketch
after the list below illustrates the idea).
Intended to address the following difficulties with IntServ and RSVP:
1. Flexible Service Models:
IntServ has only two classes; we want to provide more qualitative service
classes and 'relative' service distinctions.
2. Simpler signaling:
Many applications and users may only want to specify a more qualitative
notion of service.
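The "reduced state" idea can be sketched as follows: packets are sorted into a few behavior aggregates keyed by a DSCP-like code point, so the router keeps per-class rather than per-flow state. The class names are illustrative; 46 is the standard Expedited Forwarding code point and 0 is best effort.

```python
# Per-class (behavior aggregate) state only: a handful of queues and counters,
# no matter how many individual end-to-end flows are active.
DSCP_TO_CLASS = {46: "EF (voice)", 26: "AF31 (video)", 0: "best effort"}

class_queues = {name: [] for name in DSCP_TO_CLASS.values()}

def enqueue(dscp: int, packet: bytes):
    traffic_class = DSCP_TO_CLASS.get(dscp, "best effort")  # unknown DSCPs fall back
    class_queues[traffic_class].append(packet)

enqueue(46, b"rtp voice frame")
enqueue(0, b"bulk data")
print({c: len(q) for c, q in class_queues.items()})
```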
Streaming Live Multimedia –
Examples: Internet radio talk show, Live sporting event.
Streaming: a playback buffer is used; playback can lag tens of seconds behind
the live event and still meet the timing constraint.
Interactivity: fast forward is impossible, but rewind and pause are possible.
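The role of the playback buffer can be sketched as follows: each chunk's playout deadline is its generation time plus a fixed buffer delay, so the stream tolerates network delay up to that lag. The function and parameter names (on_time, buffer_delay_s) are illustrative.

```python
def on_time(generated_s: float, arrived_s: float, buffer_delay_s: float = 10.0) -> bool:
    """A chunk is usable if it arrives before its playout deadline
    (generation time + playback-buffer lag)."""
    playout_deadline = generated_s + buffer_delay_s
    return arrived_s <= playout_deadline

# A chunk generated at t=0 s that arrives 7 s later is still playable with a 10 s buffer,
# but would be useless for an interactive call needing sub-second delay.
print(on_time(generated_s=0.0, arrived_s=7.0))    # True
print(on_time(generated_s=0.0, arrived_s=12.5))   # False: missed its playout time
```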
QoS is typically applied to networks that carry traffic for resource-intensive systems.
Common services for which it is required include internet protocol television (IPTV), online
gaming, streaming media, videoconferencing, video on demand (VOD), and Voice over IP
(VoIP).
Using QoS in networking, organizations have the ability to optimize the performance of
multiple applications on their network and gain visibility into the bit rate, delay, jitter, and
packet rate of their network. This ensures they can engineer the traffic on their network and
change the way that packets are routed to the internet or other networks to avoid
transmission delay. This also ensures that the organization achieves the expected service
quality for applications and delivers expected user experiences.
As per the QoS meaning, the key goal is to enable networks and organizations to prioritize
traffic, which includes offering dedicated bandwidth, controlled jitter, and lower latency. The
technologies used to ensure this are vital to enhancing the performance of business
applications, wide-area networks (WANs), and service provider networks.
How Does QoS Work?
QoS networking technology works by marking packets to identify service types, then
configuring routers to create separate virtual queues for each application, based on their
priority. As a result, bandwidth is reserved for critical applications or websites that have
been assigned priority access.
QoS technologies provide capacity and handling allocation to specific flows in network
traffic. This enables the network administrator to assign the order in which packets are
handled and provide the appropriate amount of bandwidth to each application or traffic flow.
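A minimal sketch of the "separate virtual queues per marked class" idea follows, using a strict-priority scheduler; this is only one of several disciplines a real router might apply, and the class names are illustrative.

```python
from collections import deque

# One virtual queue per marked class, listed from highest to lowest priority.
queues = {"voice": deque(), "video": deque(), "best_effort": deque()}

def enqueue(marking: str, packet: str):
    queues.get(marking, queues["best_effort"]).append(packet)

def dequeue():
    """Strict priority: always serve the highest-priority non-empty queue first."""
    for name in ("voice", "video", "best_effort"):
        if queues[name]:
            return queues[name].popleft()
    return None

enqueue("best_effort", "web page")
enqueue("voice", "VoIP frame")
print(dequeue())   # 'VoIP frame' is sent first despite arriving later
```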
Types of Network Traffic
Understanding how QoS network software works relies on defining the various types of
traffic that it measures. These are:
1. Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth, for example
by assigning a certain amount of bandwidth to different queues for different traffic types.
2. Delay: The time it takes for a packet to go from its source to its end destination. This can
often be affected by queuing delay, which occurs during times of congestion, when a packet
waits in a queue before being transmitted. QoS enables organizations to avoid this by
creating a priority queue for certain types of traffic.
3. Loss: The amount of data lost as a result of packet loss, which typically occurs due to
network congestion. QoS enables organizations to decide which packets to drop in this
event.
4. Jitter: The variation in packet delay on a network as a result of congestion, which can
result in packets arriving late and out of sequence. This can cause distortion or gaps in
audio and video being delivered.
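Jitter is commonly estimated from the variation in packet transit times; the sketch below uses the smoothed interarrival-jitter estimator described in RFC 3550, with an illustrative function name and sample inputs.

```python
def interarrival_jitter(transit_times_ms):
    """Smoothed jitter estimate J += (|D| - J) / 16, as in RFC 3550,
    where D is the difference between consecutive packets' transit times."""
    jitter = 0.0
    for prev, curr in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(curr - prev)
        jitter += (d - jitter) / 16.0
    return jitter

# One-way transit delays in ms for successive packets: the swings around 50 ms
# are what listeners hear as choppy audio if a jitter buffer does not absorb them.
print(round(interarrival_jitter([50, 52, 48, 70, 49, 51]), 2))
```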
Implementing QoS begins with an enterprise identifying the types of traffic that are
important to it, that use high volumes of bandwidth, and/or that are sensitive to latency or
packet loss.
This helps the organization understand the needs and importance of each traffic type on its
network and design an overall approach. For example, some organizations may only need
to configure bandwidth limits for specific services, whereas others may need to fully
configure interface and security policy bandwidth limits for all their services, as well as
prioritize queuing critical services relative to traffic rate.
The organization can then deploy policies that classify traffic and ensure the availability and
consistency of its most important applications. Traffic can be classified by port or internet
protocol (IP), or through a more sophisticated approach such as by application or user.
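Classification by port or IP can be as simple as a rule table consulted when a packet enters the network. The sketch below (hypothetical classify function and rule set) illustrates the idea; real devices can also classify by application or user.

```python
import ipaddress

# Ordered rules: first match wins.
RULES = [
    {"dst_port": 5060, "class": "voice-signaling"},   # SIP
    {"dst_port": 443,  "class": "web"},
    {"src_ip": "10.1.2.0/24", "class": "branch-office"},
]

def classify(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        if "dst_port" in rule and rule["dst_port"] == dst_port:
            return rule["class"]
        if "src_ip" in rule and ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src_ip"]):
            return rule["class"]
    return "best-effort"   # default class for unmatched traffic

print(classify("192.0.2.10", 5060))   # voice-signaling
print(classify("10.1.2.7", 2049))     # branch-office
```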
Bandwidth management and queuing tools are then assigned roles to handle traffic flow
specifically based on the classification they received when they entered the network. This
allows for packets within traffic flows to be stored until the network is ready to process them.
Priority queuing can also be used to ensure the necessary availability and minimal latency
of network performance for important applications and traffic. This is so that the network’s
most important activities are not starved of bandwidth by those of lesser priority.
Furthermore, bandwidth management measures and controls traffic flow on the network
infrastructure to ensure it does not exceed capacity and prevent congestion. This includes
using traffic shaping, a rate-limiting technique that optimizes or guarantees performance
and increases usable bandwidth, and scheduling algorithms, which offer several methods
for providing bandwidth to specific traffic flows.
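Traffic shaping is often implemented with a token bucket: tokens accumulate at the permitted rate up to a burst size, and a packet is sent only if enough tokens are available. A minimal sketch with illustrative parameter values:

```python
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Refill tokens at the configured rate, then decide the packet's fate."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True          # conforming: transmit now
        return False             # non-conforming: queue (shape) or drop (police)

shaper = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=15_000)  # ~1 Mbps, 15 KB burst
print(shaper.allow(1_500))   # True for the first packets of a burst
```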
Why is QoS Important?
When networks only carried data, speed was not overly critical. But now, interactive
applications carrying audio and video content need to be delivered at high speed, without
packet loss or variations in delivery speed.
QoS is particularly important to guarantee the high performance of critical applications that
require high bandwidth for real-time traffic. For example, it helps businesses to prioritize the
performance of “inelastic” applications that often have minimum bandwidth requirements,
maximum latency limits, and high sensitivity to jitter and latency, such as VoIP and
videoconferencing.
QoS helps businesses prevent the delay of these sensitive applications, ensuring they
perform to the level that users require. For example, lost packets could cause a delay to the
stream, resulting in the sound and video quality of a videoconference call becoming
choppy and indecipherable.
QoS is also becoming increasingly important as the Internet of Things (IoT) continues to
come to maturity. For example, in the manufacturing sector, machines now leverage
networks to provide real-time status updates on any potential issues. Therefore, any delay
in feedback could cause highly costly mistakes in IoT networking. QoS enables the data
stream to take priority in the network and ensures that the information flows as quickly as
possible.
Cities are now filled with smart sensors that are vital to running large-scale IoT projects
such as smart buildings. The data collected and analyzed, such as humidity and
temperature data, is often highly time-sensitive and needs to be identified, marked, and
queued appropriately.
What Techniques and Best Practices Are Involved in
QoS?
Techniques
There are several techniques that businesses can use to guarantee the high performance of
their most critical applications. These include:
Prioritization of delay-sensitive VoIP traffic via routers and switches: Many enterprise
networks can become overly congested, which causes routers and switches to start dropping
packets when they arrive faster than they can be processed. As a result, streaming
applications suffer. Prioritization enables traffic to be classified and receive different
priorities depending on its type and destination. This is particularly useful in a situation of
high congestion, as packets with higher priority can be sent ahead of other traffic.
Resource reservation: The Resource Reservation Protocol (RSVP) is a transport layer
protocol that reserves resources across a network and can be used to deliver specific levels
of QoS for application data streams. Resource reservation enables businesses to divide
network resources by traffic of different types and origins, define limits, and guarantee
bandwidth.
Queuing: Queuing is the process of creating policies that provide preferential treatment to
certain data streams over others. Queues are high-performance memory buffers in routers
and switches, in which packets passing through are held in dedicated memory areas. When
a packet is assigned higher priority, it is moved to a dedicated queue that pushes data at a
faster rate, which reduces the chances of it being dropped. For example, businesses can
assign a policy to give voice traffic priority over the majority of network bandwidth. The
routing or switching device will then move this traffic’s packets and frames to the front of the
queue and immediately transmit them.
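Beyond strict priority, many devices use weighted schemes so that lower classes are not starved. The sketch below is a simplified weighted round-robin scheduler, a rough approximation of fair queuing rather than any vendor's implementation; the weights and queue contents are illustrative.

```python
from collections import deque

queues = {"voice": deque(["v1", "v2", "v3"]),
          "video": deque(["d1", "d2"]),
          "data":  deque(["b1", "b2", "b3", "b4"])}
weights = {"voice": 3, "video": 2, "data": 1}   # packets served per round

def weighted_round_robin():
    """Serve each queue up to its weight per round, so no class is starved."""
    sent = []
    while any(queues.values()):
        for name, q in queues.items():
            for _ in range(weights[name]):
                if q:
                    sent.append(q.popleft())
    return sent

print(weighted_round_robin())
# ['v1', 'v2', 'v3', 'd1', 'd2', 'b1', 'b2', 'b3', 'b4'] -- voice drains fastest,
# but data still gets one slot per round.
```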
Traffic marking: When applications that require priority over other bandwidth on a network
have been identified, the traffic needs to be marked. This is possible through processes like
Class of Service (CoS), which marks a data stream in the Layer 2 frame header, and
Differentiated Services Code Point (DSCP), which marks a data stream in the Layer 3
packet header.
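DSCP occupies the upper six bits of the IP header's former ToS byte (now the DS field). The small sketch below shows how a DSCP value such as Expedited Forwarding (46) maps to the byte a capture tool would display (0xB8); the helper name is illustrative.

```python
def dscp_to_ds_field(dscp: int, ecn: int = 0) -> int:
    """Pack a 6-bit DSCP and 2-bit ECN value into the 8-bit DS field (old ToS byte)."""
    if not 0 <= dscp <= 63 or not 0 <= ecn <= 3:
        raise ValueError("DSCP is 6 bits, ECN is 2 bits")
    return (dscp << 2) | ecn

print(hex(dscp_to_ds_field(46)))   # 0xb8 -- Expedited Forwarding, typically used for voice
print(hex(dscp_to_ds_field(0)))    # 0x0  -- default / best effort
```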
Best Practices
In addition to these techniques, there are also several best practices that organizations
should keep in mind when determining their QoS requirements.
1. Ensure that maximum bandwidth limits at the source interface and security policy are not
set too low, as this can cause excessive packet discard.
2. Consider the ratio at which packets are distributed between available queues and which
queues are used by which services. This can affect latency levels, queue distribution, and
packet assignment.
3. Only place bandwidth guarantees on specific services. This will avoid the possibility of all
traffic using the same queue in high-volume situations.
4. Configure prioritization for all traffic through either Type of Service (ToS)-based priority
or security policy priority, not both. This will simplify analysis and troubleshooting.
5. Try to minimize the complexity of QoS configuration to ensure high performance.
6. To get accurate testing results, use the User Datagram Protocol (UDP), and do not
oversubscribe bandwidth throughput.
Advantages of QoS
The deployment of QoS is crucial for businesses that want to ensure the availability of their
business-critical applications. It is vital for delivering differentiated bandwidth and ensuring
data transmission takes place without interrupting traffic flow or causing packet losses.
Major advantages of deploying QoS include:
QoS enables an organization to prioritize traffic and resources to guarantee the promised
performance of a specific application or service. It also enables enterprises to prioritize
different applications, data flows, and users in order to guarantee the optimum level of
performance across their networks.