
UNIT - I

NETWORK LAYER

Network layer: Network Layer Services, Packet Switching, Performance, services
provided to the transport layer, implementation of connectionless service,
implementation of connection-oriented service, comparison of virtual-circuit and
datagram subnets. IPv4 Addresses, Forwarding of IP Packets, Internet Protocol,
ICMPv4, Mobile IP.
What is a network?

A network is a group of two or more connected computing devices. Usually
all devices in the network are connected to a central hub, for instance a router.
A network can also include subnetworks, or smaller subdivisions of the network.
Subnetworking is how very large networks, such as those provided by ISPs, are
able to manage thousands of IP addresses and connected devices.

Think of the Internet as a network of networks: computers are connected to
each other within networks, and these networks connect to other networks. This
enables these computers to connect with other computers both near and far.

What happens at the network layer?


Anything that has to do with inter-network connections takes place at the
network layer. This includes setting up the routes for data packets to take, checking
to see if a server in another network is up and running, and addressing and
receiving IP packets from other networks. This last process is perhaps the most
important, as the vast majority of Internet traffic is sent over IP.

What is a packet?

All data sent over the Internet is broken down into smaller chunks called
"packets." When Bob sends Alice a message, for instance, his message is broken
down into smaller pieces and then reassembled on Alice's computer. A packet has
two parts: the header, which contains information about the packet itself, and the
body, which is the actual data being sent.
At the network layer, networking software attaches a header to each packet when
the packet is sent out over the Internet, and on the other end, networking software
can use the header to understand how to handle the packet.

A header contains information about the content, source, and destination of each
packet (somewhat like stamping an envelope with a destination and return
address). For example, an IP header contains the destination IP address of each
packet, the total size of the packet, an indication of whether or not the packet has
been fragmented (broken up into still smaller pieces) in transit, and a count of how
many networks the packet has traveled through.
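As a concrete illustration, the fixed 20-byte portion of an IPv4 header can be unpacked with Python's struct module. The field layout follows RFC 791; the sample header bytes below are hand-made for illustration, and the checksum is not validated.

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (RFC 791 layout, options ignored)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "total_length": total_len,  # total size of the packet in bytes
        # MF flag set or a nonzero fragment offset means the packet was fragmented
        "fragmented": bool(flags_frag & 0x2000) or (flags_frag & 0x1FFF) != 0,
        "ttl": ttl,                 # decremented at each hop traversed
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# Hand-made sample: version 4, total length 40, TTL 64, 10.0.0.1 -> 10.0.0.2
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
print(parse_ipv4_header(sample))
```

Real networking software does essentially this unpacking on every received packet to decide how to handle it.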

Responsibilities/Functions of Network Layer

Given below are some other responsibilities of the Network layer.

1. Segmentation
The network layer breaks larger packets into smaller packets.
2. Connection Services
Connection services are provided, including network layer flow control,
network layer error control, and packet sequence control.
3. Logical Addressing
Physical addressing, implemented by the data link layer, handles the
problem of addressing locally. The network layer adds a header to the
packet coming from the upper layer that also includes the logical
addresses of the sender and the receiver.
4. Routing
When independent networks or links are connected together to create an
internetwork (a large network), routing devices (routers or switches)
route the packets to their final destination. This is one of the main
functions of the network layer.

Services provided by the Network Layer

1. Guaranteed Delivery of Packets
The network layer guarantees that the packet will reach its destination.

2. Guaranteed Delivery with Bounded Delay
This service guarantees that the packet will be delivered within a specified
host-to-host delay bound.
3. In-Order Delivery of Packets
This service ensures that packets arrive at the destination in the same order
in which they were sent by the sender.

4. Security
Security is provided by the network layer by using a session key between the
source host and the destination host.

Advantages of Network Layer Services

Given below are some benefits of services provided by the network layer:

 Through the forwarding service of the network layer, data packets are
transferred from one place to another in the network.
 To reduce traffic, routers in the network layer break up collision
domains and broadcast domains.
 Packetization eliminates single points of failure in the data
communication system.

Disadvantages of Network layer Services

 In the design of the network layer, there is a lack of flow control.
 The network layer lacks a proper error control mechanism; because data
packets can be fragmented, implementing error control at this layer
becomes difficult.
 Congestion can occur due to the presence of too many datagrams in the
network.
In the above figure, the network layer at node A sends the packet to the
network layer at node B. When the packet arrives at router B, the router
decides the path based on the final destination of the packet, node F.
Router B uses its routing table to find the next hop, router E. The network
layer at node B sends the packet to the network layer at E, which then sends
the packet to the network layer at F.

Design Issues with Network Layer

 A key design issue is determining how packets are routed from source to
destination. Routes can be based on static tables that are wired into the
network and rarely changed. They can also be highly dynamic, being
determined anew for each packet, to reflect the current network load.
 If too many packets are present in the subnet at the same time, they will get
into one another's way, forming bottlenecks. The control of such
congestion also belongs to the network layer.
 Moreover, the quality of service provided (delay, transit time, jitter,
etc.) is also a network layer issue.
 When a packet has to travel from one network to another to get to its
destination, many problems can arise such as:
o The addressing used by the second network may be different from the
first one.
o The second one may not accept the packet at all because it is too
large.
o The protocols may differ, and so on.
 It is up to the network layer to overcome all these problems to allow
heterogeneous networks to be interconnected.

Network Layer Services- Packetizing, Routing and Forwarding


The network layer is the third layer in the OSI model of computer networks. Its
main function is to transfer network packets from the source to the destination.
It involves both the source host and the destination host. At the source, it
accepts a packet from the transport layer, encapsulates it in a datagram, and
then delivers the packet to the data link layer so that it can be sent on to the
receiver. At the destination, the datagram is decapsulated, the packet is
extracted, and it is delivered to the corresponding transport layer.
Features of Network Layer
1. The main responsibility of the Network layer is to carry the data packets
from the source to the destination without changing or using them.
2. If the packets are too large for delivery, they are fragmented i.e., broken
down into smaller packets.
3. It decides the route to be taken by the packets to travel from the source to the
destination among the multiple routes available in a network (also called
routing).
4. The source and destination addresses are added to the data packets inside the
network layer.
Services Offered by Network Layer
The services which are offered by the network layer protocol are as follows:
1. Packetizing
2. Routing
3. Forwarding
1. Packetizing
The process of encapsulating the data received from the upper layers of the
network (also called payload) in a network layer packet at the source and
decapsulating the payload from the network layer packet at the destination is
known as packetizing.
The source host adds a header that contains the source and destination address
and some other relevant information required by the network layer protocol to
the payload received from the upper layer protocol and delivers the packet to the
data link layer.
The destination host receives the network layer packet from its data link layer,
decapsulates the packet, and delivers the payload to the corresponding upper
layer protocol. The routers in the path are not allowed to change either the source
or the destination address. The routers in the path are not allowed to decapsulate
the packets they receive unless they need to be fragmented.

Packetizing
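The encapsulate/decapsulate behavior described above can be sketched in a few lines. This is a toy illustration with a made-up header format, not a real protocol implementation:

```python
def encapsulate(payload: bytes, src: str, dst: str) -> dict:
    """Source host: wrap the transport-layer payload in a network-layer packet."""
    return {"src": src, "dst": dst, "payload": payload}

def decapsulate(packet: dict) -> bytes:
    """Destination host: strip the header and hand the payload upward."""
    return packet["payload"]

def router_forward(packet: dict) -> dict:
    """Routers relay the packet; they may not change addresses or payload."""
    return packet  # header addresses and payload pass through unchanged

pkt = encapsulate(b"hello", src="10.0.0.1", dst="10.0.0.9")
print(decapsulate(router_forward(pkt)))  # b'hello'
```

The key property shown is that routers in the path only relay; the payload and the end-to-end addresses survive the trip untouched.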

2. Routing
Routing is the process of moving data from one device to another. In a network,
there are a number of routes available from the source to the destination. The
network layer specifies strategies for finding the best possible route; this
process is referred to as routing. A number of routing protocols are used in
this process, and they must be run so that the routers can coordinate with each
other and establish communication throughout the network.

Routing

3. Forwarding

Forwarding is simply defined as the action applied by each router when a packet
arrives at one of its interfaces. When a router receives a packet from one of its
attached networks, it needs to forward the packet to another attached network
(unicast routing) or to some attached networks (in the case of multicast routing).
Routers are used on the network for forwarding a packet from the local network
to the remote network. So, the process of routing involves packet forwarding
from an entry interface out to an exit interface.

Forwarding
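Forwarding itself reduces to a lookup in the forwarding table. Below is a minimal sketch of longest-prefix matching using Python's ipaddress module; the prefixes and interface names are invented for illustration:

```python
import ipaddress

# Hypothetical forwarding table: destination prefix -> outgoing interface
table = {
    "192.168.1.0/24": "eth1",
    "192.168.0.0/16": "eth2",
    "0.0.0.0/0":      "eth0",   # default route
}

def forward(dst: str) -> str:
    """Pick the matching entry with the longest prefix (most specific route)."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(p), iface) for p, iface in table.items()
               if addr in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(forward("192.168.1.7"))   # most specific match: eth1
print(forward("192.168.5.7"))   # falls to the /16: eth2
print(forward("8.8.8.8"))       # default route: eth0
```

Real routers perform this lookup in hardware, but the longest-prefix-match rule is the same.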

Difference between Routing and Forwarding


Routing:
 Routing is the process of moving data from one device to another device.
 Operates on the Network Layer.
 Work is based on the forwarding table.
 Works on protocols like the Routing Information Protocol (RIP).

Forwarding:
 Forwarding is the action applied by each router when a packet arrives at
one of its interfaces.
 Operates on the Network Layer.
 Checks the forwarding table and works according to it.
 Works on protocols like UDP Encapsulating Security Payloads.

Other Services Expected from Network Layer


1. Error Control
2. Flow Control
3. Congestion Control
1. Error Control
Although error control can be implemented in the network layer, it is usually
not preferred, because a data packet may be fragmented at each router, which
makes error checking inefficient at this layer.
2. Flow Control
It regulates the amount of data a source can send without overloading the
receiver. If the source produces data faster than the receiver can consume
it, the receiver will be overloaded. To control the flow of data, the
receiver should send feedback to the sender informing it that it is
overloaded.
There is a lack of flow control in the design of the network layer. It does not
directly provide any flow control. The datagrams are sent by the sender when
they are ready, without any attention to the readiness of the receiver.
3. Congestion Control
Congestion occurs when the number of datagrams sent by the source exceeds
the capacity of the network or its routers. If congestion continues, a
situation may arise where the system collapses and no datagrams are
delivered. Although congestion control is indirectly implemented in the
network layer, the network layer itself lacks a proper congestion control
mechanism.
Advantages of Network Layer Services
 Packetization service in the network layer provides ease of transportation of
the data packets.
 Packetization also eliminates single points of failure in data communication
systems.
 Routers present in the network layer reduce network traffic by creating
collision and broadcast domains.
 With the help of Forwarding, data packets are transferred from one place to
another in the network.
Disadvantages of Network Layer Services
 There is a lack of flow control in the design of the network layer.
 Congestion occurs sometimes due to the presence of too many datagrams in
a network that is beyond the capacity of the network or the routers. Due to
this, some routers may drop some of the datagrams, and some important
pieces of information may be lost.
 Although indirect error control is present in the network layer, there is a lack
of proper error control mechanisms as due to the presence of fragmented data
packets, error control becomes difficult to implement.

Packet Switching and Delays in Computer Network


Packet switching is a method of transferring data across a network in the form
of packets. To transfer a file quickly and efficiently over the network and to
minimize transmission latency, the data is broken into small pieces of variable
length, called packets. At the destination, all these small parts (packets)
belonging to the same file have to be reassembled. A packet consists of a
payload and various control information. No pre-setup or reservation of
resources is needed.
Packet switching uses the store-and-forward technique while switching
packets: each hop first stores a packet, then forwards it. This technique is
beneficial because packets may get discarded at any hop for some reason. More
than one path is possible between a pair of source and destination. Each packet
contains the source and destination addresses, using which it independently
travels through the network. In other words, packets belonging to the same file
may or may not travel through the same path. If there is congestion on some
path, packets are allowed to choose different paths over the existing network.
Packet-Switched networks were designed to overcome the weaknesses of
Circuit-Switched networks since circuit-switched networks were not very
effective for small messages.
Packet switching is a technique used in computer networks to transmit data in
the form of packets, which are small units of data that are transmitted
independently across the network. Each packet contains a header, which includes
information about the packet’s source and destination, as well as the data
payload.

Here are some of the types of delays that can occur in packet switching:

1. Transmission delay: This is the time it takes to transmit a packet over a
link. It is affected by the size of the packet and the bandwidth of the link.
2. Propagation delay: This is the time it takes for a packet to travel from the
source to the destination. It is affected by the distance between the two nodes
and the speed of light.
3. Processing delay: This is the time it takes for a packet to be processed by a
node, such as a router or switch. It is affected by the processing capabilities
of the node and the complexity of the routing algorithm.
4. Queuing delay: This is the time a packet spends waiting in a queue before it
can be transmitted. It is affected by the number of packets in the queue and
the priority of the packets.
While packet switching can introduce delays in the transmission process, it is
generally more efficient than circuit switching and can support a wider range
of applications. To minimize delays, various techniques can be used, such as
optimizing routing algorithms, increasing link bandwidth, and using quality-of-
service (QoS) mechanisms to prioritize certain types of traffic.
Advantages of Packet Switching over Circuit Switching:
 More efficient in terms of bandwidth, since the concept of reserving a circuit
is not there.
 Minimal transmission latency.
 More reliable as a destination can detect the missing packet.
 More fault tolerant because packets may follow a different path in case any
link is down, Unlike Circuit Switching.
 Cost-effective and comparatively cheaper to implement.
Disadvantage of Packet Switching over Circuit Switching:
 Packet Switching doesn’t give packets in order, whereas Circuit Switching
provides ordered delivery of packets because all the packets follow the same
path.
 Since the packets are unordered, we need to provide sequence numbers for
each packet.
 Complexity is more at each node because of the facility to follow multiple
paths.
 Transmission delay is more because of rerouting.
 Packet Switching is beneficial only for small messages, but for bursty data
(large messages) Circuit Switching is better.
Modes of Packet Switching:
1. Connection-oriented Packet Switching (Virtual Circuit): Before starting
the transmission, it establishes a logical path or virtual connection between
sender and receiver using a signaling protocol, and all packets belonging to
this flow follow this predefined route. A Virtual Circuit ID is assigned by the
switches/routers to uniquely identify this virtual connection. Data is divided
into small units, and these units are tagged with sequence numbers. Packets
arrive in order at the destination. Overall, three phases take place: setup,
data transfer, and tear-down.
All address information is only transferred during the setup phase. Once the
route to a destination is discovered, entry is added to the switching table of each
intermediate node. During data transfer, packet header (local header) may
contain information such as length, timestamp, sequence number, etc.
Connection-oriented switching is very useful in switched WAN. Some popular
protocols which use the Virtual Circuit Switching approach are X.25, Frame-
Relay, ATM, and MPLS(Multi-Protocol Label Switching).
2. Connectionless Packet Switching (Datagram): Unlike connection-oriented
packet switching, in connectionless packet switching each packet contains all
the necessary addressing information, such as source address, destination
address, and port numbers. In datagram packet switching, each packet is treated
independently. Packets belonging to one flow may take different routes because
routing decisions are made dynamically, so packets may arrive at the
destination out of order. Unlike virtual circuits, there is no connection setup
or teardown phase.
Packet delivery is not guaranteed in connectionless packet switching, so reliable
delivery must be provided by end systems using additional protocols.
A---R1---R2---B

A is the sender (source).
R1 and R2 are two routers that store and forward data.
B is the receiver (destination).
To send a packet from A to B there are delays since this is a Store and Forward
network.

Delays in Packet switching:

1. Transmission Delay: Time required by a station to transmit the data onto
the link.
2. Propagation Delay: Time taken by the data to propagate through the link.
3. Queuing Delay: Time spent by the packet waiting in a queue before it can
be transmitted.
4. Processing Delay: Time taken to process the packet at a node.
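For the A---R1---R2---B path above, the end-to-end delay can be estimated by summing one transmission delay per link plus propagation; queuing and processing are ignored here, and the packet size, link rate, and per-link propagation delay are illustrative values, not from the text:

```python
def store_and_forward_delay(packet_bits, link_rate_bps, prop_per_link_s, n_links):
    """Each of the n links fully receives (stores) the packet before forwarding."""
    transmission = n_links * (packet_bits / link_rate_bps)  # one full send per link
    propagation = n_links * prop_per_link_s
    return transmission + propagation

# A -> R1 -> R2 -> B is 3 links; assume a 10,000-bit packet on 1 Mbps links
# with 1 ms of propagation per link
delay = store_and_forward_delay(10_000, 1_000_000, 0.001, 3)
print(f"{delay * 1000:.0f} ms")  # 3 * 10 ms + 3 * 1 ms = 33 ms
```

The factor of one full transmission per link is exactly the cost of store-and-forward: each router must receive the last bit before it can send the first.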

Performance of a Network
The performance of a network pertains to the measure of service quality of a
network as perceived by the user. There are different ways to measure the
performance of a network, depending upon the nature and design of the network.
The performance of a network depends on both the quality of its service and
the quantity of traffic it can carry.
Parameters for Measuring Network Performance
 Bandwidth
 Latency (Delay)
 Bandwidth – Delay Product
 Throughput
 Jitter

BANDWIDTH

One of the most essential conditions of a website's performance is the amount
of bandwidth allocated to the network. Bandwidth determines how rapidly the
web server is able to upload the requested information. While there are
different factors to consider with respect to a site's performance, bandwidth
is often the limiting factor.
Bandwidth is characterized as the measure of data or information that can be
transmitted in a fixed measure of time. The term can be used in two different
contexts with two distinctive estimating values. In the case of digital devices, the
bandwidth is measured in bits per second(bps) or bytes per second. In the case of
analog devices, the bandwidth is measured in cycles per second, or Hertz (Hz).
Bandwidth is only one component of what an individual sees as the speed of a
network. People frequently mistake bandwidth with internet speed in light of the
fact that Internet Service Providers (ISPs) tend to claim that they have a fast
“40Mbps connection” in their advertising campaigns. True internet speed is
actually the amount of data you receive every second and that has a lot to do
with latency too. “Bandwidth” means “Capacity” and “Speed” means
“Transfer rate”.
More bandwidth does not mean more speed. Let us take a case where we have
double the width of the tap pipe, but the water rate is still the same as it was
when the tap pipe was half the width. Hence, there will be no improvement in
speed. When we consider WAN links, we mostly mean bandwidth but when we
consider LAN, we mostly mean speed. This is on the grounds that we are
generally constrained by expensive cable bandwidth over WAN rather than
hardware and interface data transfer rates (or speed) over LAN.
 Bandwidth in Hertz: It is the range of frequencies contained in a composite
signal or the range of frequencies a channel can pass. For example, let us
consider the bandwidth of a subscriber telephone line as 4 kHz.
 Bandwidth in Bits per Seconds: It refers to the number of bits per second
that a channel, a link, or rather a network can transmit. For example, we can
say the bandwidth of a Fast Ethernet network is a maximum of 100 Mbps,
which means that the network can send 100 Mbps of data.
Note: There exists an explicit relationship between the bandwidth in hertz and
the bandwidth in bits per second. An increase in bandwidth in hertz means an
increase in bandwidth in bits per second. The relationship depends upon whether
we have baseband transmission or transmission with modulation.
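For baseband transmission over a noiseless channel, one concrete form of this relationship is the Nyquist bit rate, BitRate = 2 * B * log2(L), where B is the bandwidth in hertz and L is the number of signal levels. A small sketch:

```python
import math

def nyquist_bit_rate(bandwidth_hz: float, levels: int) -> float:
    """Noiseless-channel upper bound: 2 * B * log2(L) bits per second."""
    return 2 * bandwidth_hz * math.log2(levels)

# A 3 kHz channel with 2 signal levels -> 6 kbps; with 4 levels -> 12 kbps
print(nyquist_bit_rate(3000, 2))   # 6000.0
print(nyquist_bit_rate(3000, 4))   # 12000.0
```

Doubling the bandwidth in hertz doubles the bit rate, which is the increase the note above describes; for noisy channels the Shannon capacity applies instead.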

LATENCY

In a network, during the process of data communication, latency (also known as
delay) is defined as the total time taken for a complete message to arrive at the
destination, starting with the time when the first bit of the message is sent out
from the source and ending with the time when the last bit of the message is
delivered at the destination. The network connections where small delays occur
are called “Low-Latency-Networks” and the network connections which suffer
from long delays are known as “High-Latency-Networks”.
High latency leads to the creation of bottlenecks in any network communication.
It stops the data from taking full advantage of the network pipe and conclusively
decreases the bandwidth of the communicating network. The effect of the
latency on a network’s bandwidth can be temporary or never-ending depending
on the source of the delays. Latency is also known as a ping rate and is measured
in milliseconds(ms).
 In simpler terms latency may be defined as the time required to successfully
send a packet across a network.
 It is measured in many ways like a round trip, one-way, etc.
 It can be affected by any component in the chain used to transport data,
such as workstations, WAN links, routers, LANs, and servers, and for large
networks it may ultimately be limited by the speed of light.
Latency = Propagation Time + Transmission Time + Queuing Time +
Processing Delay
Propagation Time
It is the time required for a bit to travel from the source to the destination.
Propagation time can be calculated as the ratio between the link length (distance)
and the propagation speed over the communicating medium. For example, for an
electric signal, propagation time is the time taken for the signal to travel through
a wire.
Propagation time = Distance / Propagation speed
Example:
Input: What will be the propagation time when the distance between two points
is 12,000 km? Assume the propagation speed in cable to be 2.4 * 10^8 m/s.

Output: We can calculate the propagation time as:

Propagation time = (12,000 * 1000) / (2.4 * 10^8) = 0.05 s = 50 ms
Transmission Time
Transmission time is based on how long it takes to send the signal down the
transmission line. It also accounts for costs like the training signals that
are usually put at the front of a packet by the sender, which help the
receiver synchronize clocks. The transmission time of a message depends on
the size of the message and the bandwidth of the channel.
Transmission time = Message size / Bandwidth
Example:
Input: What will be the propagation time and the transmission time for a
2.5-kbyte message when the bandwidth of the network is 1 Gbps? Assume the
distance between sender and receiver is 12,000 km and the propagation speed
is 2.4 * 10^8 m/s.

Output: We can calculate the propagation and transmission times as:

Propagation time = (12,000 * 1000) / (2.4 * 10^8) = 50 ms
Transmission time = (2560 * 8) / 10^9 = 0.020 ms

Note: Since the message is short and the bandwidth is high, the dominant
factor is the propagation time, not the transmission time (which can be
ignored).
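The two worked examples above can be checked with a few lines of Python; the 12,000 km distance, 2.4 * 10^8 m/s propagation speed, 2.5-kbyte message, and 1 Gbps bandwidth are the values from the examples:

```python
def propagation_time(distance_m, speed_mps):
    """Time for one bit to travel the length of the link."""
    return distance_m / speed_mps

def transmission_time(message_bits, bandwidth_bps):
    """Time to push all the bits of the message onto the link."""
    return message_bits / bandwidth_bps

# 12,000 km at 2.4e8 m/s; 2.5-kbyte (2560-byte) message on a 1 Gbps link
prop = propagation_time(12_000 * 1000, 2.4e8)   # 0.05 s = 50 ms
trans = transmission_time(2560 * 8, 1e9)        # ~0.02048 ms
print(f"propagation  = {prop * 1000:.0f} ms")
print(f"transmission = {trans * 1000:.3f} ms")
```

As the note says, propagation dominates here by more than three orders of magnitude.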
Queuing Time
Queuing time is a time based on how long the packet has to sit around in the
router. Quite frequently the wire is busy, so we are not able to transmit a packet
immediately. The queuing time is usually not a fixed factor, hence it changes
with the load thrust in the network. In cases like these, the packet sits waiting,
ready to go, in a queue. These delays are predominantly characterized by the
measure of traffic on the system. The more the traffic, the more likely a packet is
stuck in the queue, just sitting in the memory, waiting.
Processing Delay
Processing delay is the delay based on how long it takes the router to figure out
where to send the packet. As soon as the router finds it out, it will queue the
packet for transmission. These costs are predominantly based on the complexity
of the protocol. The router must decipher enough of the packet to make sense of
which queue to put the packet in. Typically the lower-level layers of the stack
have simpler protocols. If a router does not know which physical port to send the
packet to, it will send it to all the ports, queuing the packet in many queues
immediately. Differently, at a higher level, like in IP protocols, the processing
may include making an ARP request to find out the physical address of the
destination before queuing the packet for transmission. This situation may also
be considered as a processing delay.

BANDWIDTH – DELAY PRODUCT

Bandwidth and delay are two performance measurements of a link. However,
what is significant in data communications is the product of the two, the
bandwidth-delay product. Let us take two hypothetical cases as examples.
Case 1: Assume a link has a bandwidth of 1 bps and the link delay is 5 s. Let
us find the bandwidth-delay product in this case. From the image, we can say
that this product, 1 x 5, is the maximum number of bits that can fill the
link: there can be no more than 5 bits on the link at any time.
Bandwidth Delay Product

Case 2: Assume a link has a bandwidth of 3 bps. From the image, we can say
that there can be a maximum of 3 x 5 = 15 bits on the line, because at each
second there are 3 bits on the line and the duration of each bit is 0.33 s.
Bandwidth Delay

For both examples, the product of bandwidth and delay is the number of bits that
can fill the link. This estimation is significant in the event that we have to send
data in bursts and wait for the acknowledgment of each burst before sending the
following one. To utilize the maximum ability of the link, we have to make the
size of our burst twice the product of bandwidth and delay. Also, we need to fill
up the full-duplex channel. The sender ought to send a burst of data of
(2*bandwidth*delay) bits. The sender at that point waits for the receiver’s
acknowledgement for part of the burst before sending another burst. The
quantity 2 * bandwidth * delay is the number of bits that can be in transit
at any time.
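The calculations above can be sketched directly; the 1 bps and 3 bps links with a 5 s delay are the values from the two cases:

```python
def bandwidth_delay_product(bandwidth_bps, delay_s):
    """Number of bits that can fill (be 'in flight' on) the link at once."""
    return bandwidth_bps * delay_s

# Cases from the text
print(bandwidth_delay_product(1, 5))   # Case 1: 5 bits fill the link
print(bandwidth_delay_product(3, 5))   # Case 2: 15 bits fill the link

def burst_size(bandwidth_bps, delay_s):
    """Burst needed to keep a full-duplex link busy while awaiting ACKs."""
    return 2 * bandwidth_bps * delay_s

print(burst_size(3, 5))  # 30 bits
```

The factor of two in the burst size accounts for both directions of the round trip: data filling the forward channel while acknowledgements fill the reverse one.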

THROUGHPUT

Throughput is the number of messages successfully transmitted per unit time. It
is controlled by the available bandwidth, the signal-to-noise ratio, and
hardware limitations. The maximum throughput of a network is therefore often
higher than the actual throughput achieved in everyday use. The terms
‘throughput’ and ‘bandwidth’ are often thought of as the
same, yet they are different. Bandwidth is the potential measurement of a link,
whereas throughput is an actual measurement of how fast we can send data.
Throughput is measured by tabulating the amount of data transferred between
multiple locations during a specific period of time, usually expressed in bits
per second (bps), or in bytes per second (Bps), kilobytes per second (KBps),
megabytes per second (MBps), and gigabytes per second (GBps). Throughput may
be affected by numerous factors, such as the
hindrance of the underlying analog physical medium, the available processing
power of the system components, and end-user behavior. When the overhead of
the various protocols is taken into account, the useful rate of the
transferred data can be significantly lower than the maximum achievable
throughput.
Consider a highway that has a capacity of moving, say, 200 vehicles at a
time, but at a random time someone notices only, say, 150 vehicles moving
through it due to congestion on the road. The capacity (bandwidth) is thus
200 vehicles per unit time, while the throughput is 150 vehicles per unit
time.
Example:
Input: A network with a bandwidth of 10 Mbps can pass only an average of
12,000 frames per minute, where each frame carries an average of 10,000 bits.
What will be the throughput for this network?

Output: We can calculate the throughput as:

Throughput = (12,000 x 10,000) / 60 = 2 Mbps
The throughput is nearly equal to one-fifth of the bandwidth in this case.
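The throughput calculation above can be sketched as follows, with the values taken from the example:

```python
def throughput_bps(frames_per_minute, bits_per_frame):
    """Average successfully delivered bits per second."""
    return frames_per_minute * bits_per_frame / 60

t = throughput_bps(12_000, 10_000)
print(f"{t / 1e6:.0f} Mbps")  # 2 Mbps, one-fifth of the 10 Mbps bandwidth
```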

JITTER

Jitter is another performance issue, related to delay. In technical terms,
jitter is "packet delay variance". Jitter becomes a problem when different
packets of data face different delays in a network and the data at the
receiving application is time-sensitive, e.g. audio or video data. Jitter is
measured in milliseconds (ms). It is defined as an interference in the normal
order of sending data packets. For example: if the delay for the first packet
is 10 ms, for the second 35 ms, and for the third 50 ms, then a real-time
destination application using the packets experiences jitter.
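One simple way to quantify jitter is the mean absolute difference between consecutive packet delays; the sketch below uses the delays from the example above. This is an illustrative metric, not the full RFC 3550 interarrival-jitter estimator:

```python
def jitter_ms(delays_ms):
    """Mean absolute difference between consecutive packet delays."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Delays from the example: 10 ms, 35 ms, 50 ms
print(jitter_ms([10, 35, 50]))  # (25 + 15) / 2 = 20.0 ms
```

A stream whose packets all arrive with the same delay, however large, has zero jitter; it is the variation, not the delay itself, that disturbs audio and video playback.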
Simply, a jitter is any deviation in or displacement of, the signal pulses in a high-
frequency digital signal. The deviation can be in connection with the amplitude,
the width of the signal pulse, or the phase timing. The major causes of jitter
are electromagnetic interference (EMI) and crosstalk between signals. Jitter
can lead to flickering of a display screen, affect the ability of a processor
in a desktop or server to perform as expected, introduce clicks or other
undesired artifacts into audio signals, and cause loss of data transmitted
between network devices. Jitter is harmful and causes network congestion and
packet loss.
 Congestion is like a traffic jam on the highway. Cars cannot move forward
at a reasonable speed in a traffic jam. Likewise, in congestion, all the
packets arrive at a junction at the same time and nothing can get through.
 The second negative effect is packet loss. When packets arrive at unexpected
intervals, the receiving system is not able to process the information, which
leads to missing information, also called "packet loss". This has negative
effects on video viewing: if a video becomes pixelated and skips, the network
is experiencing jitter, and the result of that jitter is packet loss. When you
are playing a game online, the effect of packet loss can be that a player
begins moving around on the screen randomly. Even worse, the game jumps from
one scene to the next, skipping over part of the gameplay.

Jitter

In the above image, it can be noticed that the time it takes for packets to be sent
is not the same as the time in which they will arrive at the receiver side. One of
the packets faces an unexpected delay on its way and is received after the
expected time. This is jitter.
A jitter buffer can reduce the effects of jitter, either in a network, on a router or
switch, or on a computer. The system at the destination receiving the network
packets usually receives them from the buffer and not from the source system
directly. Each packet is fed out of the buffer at a regular rate. Another approach
to diminish jitter in case of multiple paths for traffic is to selectively route traffic
along the most stable paths or to always pick the path that can come closest to
the targeted packet delivery rate.
Factors Affecting Network Performance
The following factors affect network performance.
 Network Infrastructure
 Applications used in the Network
 Network Issues
 Network Security

Network Infrastructure

Network Infrastructure is one of the factors that affect network performance.


Network infrastructure consists of routers, switches, and network services such
as IP addressing, wireless protocols, etc., and these components directly
affect the performance of the network.

Applications Used in the Network

Applications used in the network can also have an impact on network
performance, as a poorly performing application can consume large amounts of
bandwidth. More complicated applications also require maintenance, which in
turn impacts the performance of the network.

Network Issues

Network issues are a factor in network performance: flaws or loopholes in the
network can lead to many systemic problems. Hardware issues can also impact the
performance of the network.
Network Security

Network security provides privacy, data integrity, etc. However, security tasks
such as scanning devices and encrypting data consume network bandwidth, and
this overhead can negatively influence the performance of the network.
FAQs

1. How is the network performance measured?

Answer:
Network Performance is measured in two ways: Bandwidth and Latency.

2. What are the parameters to measure network performance?

Answer:
There are five parameters to measure network performance.
 Bandwidth
 Throughput
 Latency
 Bandwidth Delay
 Jitter
SERVICES PROVIDED BY THE TRANSPORT LAYER

The services provided by the transport layer are similar to those of the data link
layer. The data link layer provides the services within a single network while the
transport layer provides the services across an internetwork made up of many
networks. The data link layer controls the physical layer while the transport layer
controls all the lower layers.

The services provided by the transport layer protocols can be divided into five
categories:

o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing

End-to-end delivery:

The transport layer transmits the entire message to the destination. Therefore, it
ensures the end-to-end delivery of an entire message from a source to the
destination.

Reliable delivery:

The transport layer provides reliability services by retransmitting the lost and
damaged packets.
The reliable delivery has four aspects:

o Error control
o Sequence control
o Loss control
o Duplication control

Error Control

o The primary role of reliability is error control. In reality, no transmission
is 100 percent error-free. Therefore, transport layer protocols are designed
to provide error-free transmission.
o The data link layer also provides the error handling mechanism, but it
ensures only node-to-node error-free delivery. However, node-to-node
reliability does not ensure the end-to-end reliability.
o The data link layer checks for the error between each network. If an error is
introduced inside one of the routers, then this error will not be caught by the
data link layer. It only detects those errors that have been introduced
between the beginning and end of the link. Therefore, the transport layer
performs the checking for the errors end-to-end to ensure that the packet has
arrived correctly.
Sequence Control

o The second aspect of the reliability is sequence control which is


implemented at the transport layer.
o On the sending end, the transport layer is responsible for ensuring that the
packets received from the upper layers are divided into segments usable by
the lower layers. On the receiving end, it ensures that the various segments
of a transmission can be correctly reassembled.

Loss Control

Loss Control is a third aspect of reliability. The transport layer ensures that all the
fragments of a transmission arrive at the destination, not some of them. On the
sending end, all the fragments of transmission are given sequence numbers by a
transport layer. These sequence numbers allow the receiver's transport layer to
identify the missing segment.
Duplication Control

Duplication Control is the fourth aspect of reliability. The transport layer


guarantees that no duplicate data arrive at the destination. Sequence numbers are
used to identify the lost packets; similarly, it allows the receiver to identify and
discard duplicate segments.

Flow Control

Flow control is used to prevent the sender from overwhelming the receiver. If
the receiver is overloaded with too much data, the receiver discards packets
and asks for their retransmission. This increases network congestion and thus
reduces system performance. The transport layer is responsible for flow
control. It uses the sliding window protocol, which makes data transmission
more efficient and controls the flow of data so that the receiver does not
become overwhelmed. The sliding window protocol is byte-oriented rather than
frame-oriented.
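The window accounting described above can be sketched as follows; this is a simplified model assuming a fixed, byte-oriented window (class and method names are illustrative):

```python
class SlidingWindowSender:
    """Simplified byte-oriented flow control: at most `window`
    unacknowledged bytes may be outstanding at any time."""

    def __init__(self, window):
        self.window = window  # receiver-advertised window, in bytes
        self.base = 0         # oldest unacknowledged byte
        self.next_seq = 0     # next byte number to send

    def can_send(self, nbytes):
        # Sending nbytes must not exceed the advertised window.
        return self.next_seq + nbytes - self.base <= self.window

    def send(self, nbytes):
        if not self.can_send(nbytes):
            raise RuntimeError("window full: would overwhelm the receiver")
        self.next_seq += nbytes

    def ack(self, ack_byte):
        # An ACK slides the window forward, freeing space for new bytes.
        self.base = max(self.base, ack_byte)

s = SlidingWindowSender(window=1000)
s.send(600)
s.send(400)             # window is now full
print(s.can_send(1))    # False: receiver would be overwhelmed
s.ack(600)              # receiver acknowledges the first 600 bytes
print(s.can_send(600))  # True: the window has slid forward
```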

Multiplexing

The transport layer uses the multiplexing to improve transmission efficiency.

Multiplexing can occur in two ways:

o Upward multiplexing: Upward multiplexing means multiple transport layer
connections use the same network connection. To make transmission more
cost-effective, the transport layer sends several transmissions bound for the
same destination along the same path; this is achieved through upward
multiplexing.
o Downward multiplexing: Downward multiplexing means one transport
layer connection uses the multiple network connections. Downward
multiplexing allows the transport layer to split a connection among several
paths to improve the throughput. This type of multiplexing is used when
networks have a low or slow capacity.
Addressing

o According to the layered model, the transport layer interacts with the
functions of the session layer. Many protocols combine session,
presentation, and application layer protocols into a single layer known as the
application layer. In these cases, delivery to the session layer means the
delivery to the application layer. Data generated by an application on one
machine must be transmitted to the correct application on another machine.
In this case, addressing is provided by the transport layer.
o The transport layer provides the user address which is specified as a station
or port. The port variable represents a particular TS user of a specified
station known as a Transport Service access point (TSAP). Each station has
only one transport entity.
o The transport layer protocols need to know which upper-layer protocols are
communicating.
IMPLEMENTATION OF CONNECTIONLESS SERVICES

A connectionless service is a technique used in data communications to send or
transfer data or messages at Layer 4, i.e., the Transport Layer of the Open
System Interconnection model. This service does not require a session
connection between the sender (source) and the receiver (destination); the
sender simply starts transferring data or messages to the destination.
In other words, we can say that connectionless service simply means that node
can transfer or send data packets or messages to its receiver even without session
connection to receiver. Message is sent or transferred without prior arrangement.
This usually works due to error handling protocols that allow and give
permission for correction of errors just like requesting retransmission.
In this service, the network sends each packet of data to its destination one
at a time, independently of the other packets. The network does not keep any
state information to determine whether a packet is part of a stream of other
packets, nor does it have any knowledge of the amount of traffic the user will
transfer. Each data packet carries the full source and destination address and
is routed independently from source to destination. Therefore, data packets
(also called datagrams) might follow different paths to reach the destination.
This is similar to the postal service, where each letter carries the full
address of its destination. Data is sent in one direction, from source to
destination, without checking whether the destination is still present or
whether the receiver is prepared to accept the message.
Connectionless Protocols :
These protocols simply allow data to be transferred without any link among
processes. Some Of data packets may also be lost during transmission. Some of
protocols for connectionless services are given below:
 Internet Protocol (IP) –
This protocol is connectionless. In this protocol, all packets in IP network are
routed independently. They might not go through same route.

 User Datagram Protocol (UDP) –


This protocol does not establish any connection before transferring data. It
just sends data that’s why UDP is known as connectionless.

 Internet Control Message Protocol (ICMP) –


ICMP is called connectionless simply because it does not need any hosts to
handshake before establishing any connection.

 Internetwork Packet Exchange (IPX) –


IPX is called connectionless as it doesn’t need any consistent connection that
is required to be maintained while data packets or messages are being
transferred from one system to another.
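UDP, listed above, illustrates connectionless transfer directly: a datagram can be sent without any prior handshake. A minimal sketch using the loopback interface:

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(5)
port = recv.getsockname()[1]

# Sender: no connect() or handshake is needed; each datagram
# carries the full destination address.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)
print(data)  # b'hello'
send.close()
recv.close()
```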

Types of Connectionless Services :

Service                    Example

Unreliable Datagram        Electronic junk mail, etc.

Acknowledged Datagram      Registered mail, text messages with delivery
                           report, etc.

Request Reply              Queries from remote databases, etc.

Advantages :
 It is very fast and also allows multicast and broadcast operations, in which
the same data is transferred to various recipients in a single transmission.
 The effect of any error that occurs can be reduced by implementing error
correction within an application protocol.
 This service is simple and has low overhead.
 At the network layer, host software is much simpler.
 No authentication is required in this service.
 Some applications do not even require sequential delivery of packets or
data; examples include packet voice, etc.
Disadvantages :
 This service is less reliable than connection-oriented service.
 It does not guarantee that there will be no loss, error, misdelivery,
duplication, or out-of-sequence delivery of packets.
 It is more prone to network congestion.

IMPLEMENTATION OF CONNECTION-ORIENTED SERVICES

We need a virtual-circuit subnet for connection-oriented service. Virtual
circuits were designed to avoid having to choose a new route for every packet
sent. Instead, when a connection is established, a route from the source
machine to the destination machine is chosen as part of the connection setup
and stored in tables inside the routers. That route is used for all traffic
flowing over the connection, in exactly the same way that the telephone system
works.

When the connection is released, the virtual circuit is also terminated. In
connection-oriented service, every packet carries an identifier telling which
virtual circuit it belongs to.

The implementation of connection-oriented services is diagrammatically


represented as follows −
Example

Consider the scenario as mentioned in the above figure.

Step 1 − Host H1 has established connection 1 with host H2, which is remembered
as the first entry in every routing table.

Step 2 − The first line of A's table says that a packet arriving from host H1
with connection identifier 1 is to be sent to router W and given connection
identifier 1.

Step 3 − Similarly, the first entry at W routes the packet to Y, also with connection
identifier 1.

Step 4 − If H3 also wants to establish a connection to H2 then it chooses


connection identifier 1 and tells the subnet to establish the virtual circuit. This will
be appearing in the second row in the table.

Step 5 − Note that we have a conflict here because although we can easily
distinguish connection 1 packets from H1 from connection 1 packet from H3, W
cannot do this.
Step 6 − For this reason, we assign a different connection identifier to the outgoing
traffic for the second connection. Avoiding conflicts of this kind is why routers
need the ability to replace connection identifiers in outgoing packets. In some
contexts, this is called label switching.
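The example above can be sketched as per-router tables mapping an incoming (link, connection identifier) pair to an outgoing one; the topology and link names here are illustrative:

```python
# Each router's table maps (incoming link, incoming connection id)
# to (outgoing link, outgoing connection id), so identifiers can be
# rewritten on the way out (label switching).
tables = {
    "A": {("H1", 1): ("W", 1),   # H1's connection 1
          ("H3", 1): ("W", 2)},  # H3 also chose id 1; A relabels it to 2
    "W": {("A", 1): ("Y", 1),
          ("A", 2): ("Y", 2)},
}

def forward(router, in_link, conn_id):
    out_link, out_id = tables[router][(in_link, conn_id)]
    return out_link, out_id

# Both hosts picked identifier 1, but W can still tell the circuits
# apart because A rewrote H3's identifier to 2.
print(forward("A", "H1", 1))  # ('W', 1)
print(forward("A", "H3", 1))  # ('W', 2)
print(forward("W", "A", 2))   # ('Y', 2)
```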

Operations :
There is a sequence of operations that need to be followed by users. These
operations are given below :
1. Establishing Connection –
It generally requires a session connection to be established before any data
is transported or sent, with a direct physical connection among sessions.
2. Transferring Data or Message –
When this session connection is established, we transfer or send the message
or data.
3. Releasing the Connection –
After sending or transferring the data, we release the connection.
Different Ways :
There are two ways in which connection-oriented services can be done. These
ways are given below :
1. Circuit-Switched Connection –
Circuit-switching networks or connections are generally known as
connection-oriented networks. In this connection, a dedicated route is being
established among sender and receiver, and whole data or message is sent
through it. A dedicated physical route or a path or a circuit is established
among all communication nodes, and after that, data stream or message is
sent or transferred.
2. Virtual Circuit-Switched Connection –
Virtual Circuit-Switched Connection or Virtual Circuit Switching is also
known as Connection-Oriented Switching. In this connection, a preplanned
route or path is established before data or messages are transferred or sent.
The message is transferred over this network in such a way that it seems to
the user that there is a dedicated route or path from the source (sender) to
the destination (receiver).
Types of Connection-Oriented Service :

Service                    Example

Reliable Message Stream    Sequence of pages, etc.

Reliable Byte Stream       Song download, etc.

Unreliable Connection      VoIP (Voice over Internet Protocol)

Advantages :
 It supports quality of service in an easy way.
 This connection is more reliable than connectionless service.
 Long and large messages can be divided into smaller messages so that they
fit inside packets.
 Problems or issues related to duplicate data packets are made less severe.
Disadvantages :
 In this connection, the cost is fixed no matter how much traffic there is.
 It is necessary to allocate resources before communication.
 If any route or path failure or network congestion arises, there is no
alternative way available to continue communication.
COMPARISON OF VIRTUAL CIRCUIT AND DATA GRAM SUBNETS

Computer networks that provide connection-oriented services are called Virtual


Circuits while those providing connection-less services are called Datagram
networks. For prior knowledge, the Internet which we use is actually based on a
Datagram network (connection-less) at the network level as all packets from a
source to a destination do not follow the same path.
Let us see what are the highlighting differences between these two hot debated
topics here:

Criteria-wise comparison of Virtual Circuit Networks and Datagram Networks:

Connection Establishment –
Virtual Circuit: Prior to data transmission, a connection is established
between sender and receiver.
Datagram: No connection setup is required.

Routing –
Virtual Circuit: Routing decisions are made once during connection setup and
remain fixed throughout the duration of the connection.
Datagram: Routing decisions are made independently for each packet and can
vary based on network conditions.

Flow Control –
Virtual Circuit: Uses explicit flow control, where the sender adjusts its rate
of transmission based on feedback from the receiver.
Datagram: Uses implicit flow control, where the sender assumes a certain level
of available bandwidth and sends packets accordingly.

Congestion Control –
Virtual Circuit: Uses network-assisted congestion control, where routers
monitor network conditions and may drop packets or send congestion signals to
the sender.
Datagram: Uses end-to-end congestion control, where the sender adjusts its
rate of transmission based on feedback from the network.

Error Control –
Virtual Circuit: Provides reliable delivery of packets by detecting and
retransmitting lost or corrupted packets.
Datagram: Provides unreliable delivery of packets and does not guarantee
delivery or correctness.

Overhead –
Virtual Circuit: Requires less overhead per packet because connection setup
and state maintenance are done only once.
Datagram: Requires more overhead per packet because each packet contains its
destination address and other routing information.

Example Protocol –
Virtual Circuit: ATM, Frame Relay.
Datagram: IP (Internet Protocol).

Virtual Circuits:
1. It is connection-oriented, meaning that there is a reservation of resources like
buffers, CPU, bandwidth, etc. for the time in which the newly setup VC is
going to be used by a data transfer session.
2. The first sent packet reserves resources at each server along the path.
Subsequent packets will follow the same path as the first sent packet for the
connection time.
3. Since all the packets are going to follow the same path, a global header is
required. Only the first packet of the connection requires a global header, the
remaining packets generally don’t require global headers.
4. Since all packets follow a specific path, packets are received in order at the
destination.
5. Virtual Circuit Switching ensures that all packets successfully reach the
Destination. No packet will be discarded due to the unavailability of
resources.
6. From the above points, it can be concluded that Virtual Circuits are a highly
reliable method of data transfer.
7. The issue with virtual circuits is that each time a new connection is set up,
resources and extra information have to be reserved at every router along the
path, which becomes problematic if many clients are trying to reserve a
router’s resources simultaneously.
8. It is used by the ATM (Asynchronous Transfer Mode) Network, specifically
for Telephone calls.
Datagram Networks :
1. It is a connection-less service. There is no need for reservation of resources
as there is no dedicated path for a connection session.
2. All packets are free to use any available path. As a result, intermediate
routers calculate routes on the go due to dynamically changing routing tables
on routers.
3. Since every packet is free to choose any path, all packets must be associated
with a header with proper information about the source and the upper layer
data.
4. The connection-less property makes data packets reach the destination in any
order, which means that they can potentially be received out of order at the
receiver’s end.
5. Datagram networks are not as reliable as Virtual Circuits.
6. The major drawback of Datagram Packet switching is that a packet can only
be forwarded if resources such as the buffer, CPU, and bandwidth are
available. Otherwise, the packet will be discarded.
7. But it is always easy and cost-efficient to implement datagram networks, as
there is no extra overhead of reserving resources and making a dedicated
path each time an application has to communicate.
8. It is generally used by the IP network, which is used for Data services like
the Internet.
IPV4 ADDRESS

IP stands for Internet Protocol and v4 stands for Version Four (IPv4). IPv4
was the primary version brought into action for production within the
ARPANET in 1983.
IP version four addresses are 32-bit integers expressed in dotted decimal
notation.
Example: 192.0.2.126 is an IPv4 address.
Parts of IPv4
 Network part:
The network part indicates the unique number that is assigned to the network.
It also identifies the class of the network.
 Host part:
The host part uniquely identifies the machine on the network. This part of
the IPv4 address is assigned to every host.
For each host on the network, the network part is the same, but the host
part must differ.
 Subnet number:
This is an optional part of IPv4. Local networks that have large numbers of
hosts are divided into subnets, and subnet numbers are assigned to them.
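The split between network part and host part can be sketched with Python's standard `ipaddress` module; the /24 subnet mask here is chosen only for illustration:

```python
import ipaddress

# The example address with an assumed /24 subnet mask.
iface = ipaddress.ip_interface("192.0.2.126/24")
net = iface.network

print(net.network_address)                       # 192.0.2.0 (network part)
print(int(iface.ip) - int(net.network_address))  # 126 (host part)
print(net.netmask)                               # 255.255.255.0
```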
Characteristics of IPv4
 IPv4 is a 32-bit IP address.
 IPv4 is a numeric address, and its parts are separated by dots.
 The number of header fields is twelve, and the minimum header length is
twenty bytes.
 It has unicast, broadcast, and multicast types of addresses.
 IPv4 supports VLSM (Variable Length Subnet Mask).
 IPv4 uses the Address Resolution Protocol (ARP) to map to the MAC address.
 RIP is a routing protocol supported by the routed daemon.
 Networks must be configured either manually or with DHCP.
 Packet fragmentation is permitted at routers and the sending host.
Advantages of IPv4
 IPv4 security permits encryption to maintain privacy and security.
 IPv4 network allocation is significant and currently has more than 85,000
practical routers.
 It becomes easy to connect multiple devices across a large network without
NAT.
 It is a model of communication that provides quality of service as well as
economical data transfer.
 IPv4 addresses are redefined and permit flawless encoding.
 Routing is more scalable and economical because addressing is aggregated
more effectively.
 Data communication across the network becomes more specific in multicast
organizations.
Disadvantages of IPv4
 It limits Internet growth for existing users and hinders the use of the
Internet for new users.
 Internet routing is inefficient in IPv4.
 IPv4 has high system management costs, and it is labor-intensive, complex,
slow, and prone to errors.
 Security features are optional.
 It is difficult to add support for future needs, because doing so adds very
high overhead and hinders the ability to connect everything over IP.
Limitations of IPv4
 IP relies on network layer addresses to identify end-points on a network,
and each network has a unique IP address range.
 The world's supply of unique IP addresses is dwindling, and in theory they
might eventually run out.
 If there are multiple hosts, we need IP addresses of the next class.
 Complex host and routing configuration, non-hierarchical addressing, the
difficulty of renumbering addresses, large routing tables, non-trivial
implementations in providing security, QoS (Quality of Service), mobility,
multi-homing, multicasting, etc. are big limitations of IPv4, and that is
why IPv6 came into the picture.

FORWARDING OF IP PACKETS

The process of packet forwarding simply implies the forwarding of incoming


packets to their intended destination.
 Internet is made up of generally two terms- Interconnection and Network.
So, it is a connection to a large collection of networks. A packet that is to be
forwarded may be associated with the same network as the source host or
may belong to a destination host in a different network. Thus, it depends on
the destination how much a packet may need to travel before arriving at its
destination.
 The router is responsible for the process of packet forwarding. It accepts the
packet from the origin host or another router in the packet’s path and places
it on the route leading to the target host.
 The routing table is maintained by the router which is used for deciding the
packet forwarding.

Packet Forwarding in Router:

Routers are used on the network for forwarding a packet from the local network
to the remote network. So, the process of routing involves the packet forwarding
from an entry interface out to an exit interface.

Working:

The following steps are included in the packet forwarding in the router-
 The router takes the arriving packet from an entry interface and then
forwards that packet to another interface.
 The router needs to select the best possible interface for the packet to reach
the intended destination as there exist multiple interfaces in the router.
 The forwarding decision is made by the router based on routing table entries.
The entries in the routing table comprise destination networks and exit
interfaces to which the packet is to be forwarded.
 The selection of exit interface relies on- firstly, the interface must lead to the
target network to which the packet is intended to send, and secondly, it must
be the best possible path leading to the destination network.
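The selection of the exit interface described above can be sketched as a longest-prefix-match lookup; the routing-table entries and interface names are illustrative:

```python
import ipaddress

routing_table = [  # (destination network, exit interface) - illustrative
    ("192.0.2.0/24", "eth1"),
    ("192.0.0.0/16", "eth2"),
    ("0.0.0.0/0",    "eth0"),  # default route
]

def select_interface(dst_ip):
    # Pick the matching entry with the longest (most specific) prefix.
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(n), iface)
               for n, iface in routing_table
               if dst in ipaddress.ip_network(n)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(select_interface("192.0.2.50"))  # eth1 (most specific match)
print(select_interface("192.0.9.9"))   # eth2
print(select_interface("8.8.8.8"))     # eth0 (default route)
```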
Packet Forwarding Techniques:

Following are the packet forwarding techniques based on the destination host:
 Next-Hop Method: By only maintaining the details of the next hop or next
router in the packet’s path, the next-hop approach reduces the size of the
routing table. The routing table maintained using this method does not have
the information regarding the whole route that the packet must take.
 Network-Specific Method: In this method, the entries are not made for all
of the destination hosts in the router’s network. Rather, the entry is made of
the destination networks that are connected to the router.
 Host-Specific Method: In this method, the routing table has the entries for
all of the destination hosts in the destination network. With the increase in
the size of the routing table, the efficiency of the routing table decreases. It
finds its application in the process of verification of route and security
purposes.
 Default Method: Let’s assume- A host in network N1 is connected to two
routers, one of which (router R1) is connected to network N2 and the other
router R2 to the rest of the internet. As a result, the routing table only has
one default entry for the router R2.
INTERNET PROTOCOL

Internet Protocols are a set of rules that governs the communication and
exchange of data over the internet. Both the sender and receiver should follow
the same protocols in order to communicate the data. In order to understand it
better, let’s take an example of a language. Any language has its own set of
vocabulary and grammar which we need to know if we want to communicate in
that language. Similarly, over the internet whenever we access a website or
exchange some data with another device then these processes are governed by a
set of rules called the internet protocols.
Working of Internet Protocol
The internet and many other data networks work by organizing data into small
pieces called packets. Each large data sent between two network devices is
divided into smaller packets by the underlying hardware and software. Each
network protocol defines the rules for how its data packets must be organized in
specific ways according to the protocols the network supports.
Need of Protocols
It may be that the sender and receiver of data are parts of different networks,
located in different parts of the world having different data transfer rates. So, we
need protocols to manage the flow control of data, and access control of the link
being shared in the communication channel. Suppose there is a sender X who has
a data transmission rate of 10 Mbps. And, there is a receiver Y who has a data
receiving rate of 5Mbps. Since the rate of receiving the data is slow so some data
will be lost during transmission. In order to avoid this, receiver Y needs to
inform sender X about the speed mismatch so that sender X can adjust its
transmission rate. Similarly, the access control decides the node which will
access the link shared in the communication channel at a particular instant in
time. If not the transmitted data will collide if many computers send data
simultaneously through the same link resulting in the corruption or loss of data.
What is IP Addressing?
An IP address represents an Internet Protocol address. A unique address that
identifies the device over the network. It is almost like a set of rules governing
the structure of data sent over the Internet or through a local network. An IP
address helps the Internet to distinguish between different routers, computers,
and websites. It serves as a specific machine identifier in a specific network and
helps to improve visual communication between source and destination.
ICMPV4

Internet Control Message Protocol version 4 (ICMPv4)

1. Introduction
2. ICMPv4 Message Format
3. Types of ICMPv4 Messages
4. Key Takeaways

Introduction

As we are aware, the IPv4 protocol doesn't have any mechanism to report or
correct errors. So, IP functions with the assistance of ICMP to report errors.
ICMP never gets involved in correcting the errors; higher-level protocols take
care of that. ICMPv4 always delivers the error report to the original source
of the datagram.

There can be several reasons behind reporting the error like:

 A router with a datagram for a host in another network, may not find the next hop
(router) to the final destination host.
 Datagram’s time-to-live field has become zero.
 There may be ambiguity in the header of the IP datagram.
 It may happen that all the fragments of the datagram do not arrive within a time
limit to the destination host.

Besides these, there can be several other reasons to report an error.

Though ICMP is a network layer protocol, its messages are not passed to the lower
layer (i.e. data link layer). Initially, the IP datagram encapsulates ICMP messages
and then they are passed to the lower layer.

ICMPv4 Message Format

Below we have the message format for the ICMPv4 message. It has an 8-byte
header and, apart from this, a variable-size data section. Though the header
format changes for each type of message, the first 4 bytes of each message
remain the same. Among these first 4 bytes, the first byte describes the
'type' of the message, the second byte clarifies the reason behind the 'type'
of the message (the 'code'), and the next two bytes define the checksum field
of the message.

The remaining 4 bytes define the rest of the header, which is specific to each
message type. The data section varies according to the type of message. The
error-reporting message's data section holds the information needed to
identify the original datagram that had the error. The data section of the
query message holds more information regarding the type of query.
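The first 8 bytes of the header (type, code, checksum, plus the 4 message-specific bytes) can be built and verified with a short sketch; the helper names are illustrative:

```python
import struct

def inet_checksum(data: bytes) -> int:
    # Standard Internet checksum: one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_icmp(msg_type, code, rest_of_header=0, payload=b""):
    # First 4 bytes: type (1 byte), code (1 byte), checksum (2 bytes);
    # the next 4 bytes are specific to each message type.
    hdr = struct.pack("!BBHI", msg_type, code, 0, rest_of_header) + payload
    csum = inet_checksum(hdr)
    return struct.pack("!BBHI", msg_type, code, csum, rest_of_header) + payload

# Type 3 = destination unreachable, code 1 = host unreachable
msg = build_icmp(3, 1)
print(msg[0], msg[1])      # 3 1
print(inet_checksum(msg))  # 0 -> checksum verifies over the whole message
```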

Types of ICMPv4 Messages

The types of ICMPv4 messages as:

1. Error Reporting Messages


2. Query Messages
Error Reporting Messages

The most important function of ICMPv4 is to report errors, although it is not
responsible for correcting them. The higher-level protocols take the
responsibility of correcting the errors.

ICMPv4 always sends the error report to the original source of the datagram.
The datagram has only two addresses in its header:

1. Source address
2. Destination address

So, ICMPv4 uses the source address for reporting the error.

There are some important characteristics of the ICMPv4 message:

 ICMPv4 error messages are not generated in response to ICMP error messages,
as this could create infinite repetition.
 No error message is generated for a fragmented datagram if the fragment is
not the first fragment.
 No ICMPv4 error message is generated for a datagram having a special
address, such as 127.0.0.0 or 0.0.0.0.
 No error message is generated for datagrams with a broadcast address or a
multicast address in the destination field.

ICMPv4 Error Reporting Messages are further classified as:


1. Destination Unreachable
2. Source Quench
3. Time Exceeded
4. Parameter Problems
5. Redirection

Destination Unreachable

If a host or a router is unable to deliver or route a datagram, it discards the
datagram and sends a destination-unreachable error message back to the original
source host.

The Type field of the destination-unreachable message is 3. The Code field gives
the reason for discarding the datagram; for this message the code ranges from 0
to 15.
Unreachable Messages Codes

1. Code 0 – Network unreachable, possibly due to hardware failure.
2. Code 1 – Destination host unreachable, possibly due to hardware failure.
3. Code 2 – Protocol unreachable; the protocol for which the datagram is
destined may not be running.
4. Code 3 – Port unreachable; the process (application program) for which the
datagram is destined may not be running.
5. Code 4 – Fragmentation is needed, but the sender has specified that the
datagram must not be fragmented, so routing is impossible.
6. Code 5 – Source routing cannot be accomplished; one or more routers defined
in the source-routing option are unreachable.
7. Code 6 – The router has no information about the destination network.
8. Code 7 – The router has no information about the existence of the
destination host; it cannot even tell whether the host exists.
9. Code 8 – The originating source host is isolated.
10. Code 9 – Communication with the destination network is administratively
prohibited.
11. Code 10 – Communication with the destination host is administratively
prohibited.
12. Code 11 – The network is unreachable for the specified type of service.
13. Code 12 – The host is unreachable for the specified type of service.
14. Code 13 – The destination host is unreachable because the administrator has
placed a filter on it.
15. Code 14 – The host is unreachable due to a violation of host precedence.
16. Code 15 – The host is unreachable because its precedence was cut off.

The destination host generates the destination-unreachable message only for
codes 2 and 3; messages with the remaining codes are generated by routers.
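The sixteen codes above can be collected into a simple lookup table a receiver might consult when decoding a type-3 message. The dictionary follows the list above; `describe_unreachable` and the set of host-generated codes are illustrative helpers, not part of any standard API:

```python
# Meanings of ICMPv4 Destination Unreachable (type 3) codes, per the list above.
UNREACHABLE_CODES = {
    0: "Network unreachable",
    1: "Destination host unreachable",
    2: "Protocol unreachable",
    3: "Port unreachable",
    4: "Fragmentation needed but do-not-fragment flag set",
    5: "Source routing failed",
    6: "Destination network unknown",
    7: "Destination host unknown",
    8: "Source host isolated",
    9: "Network administratively prohibited",
    10: "Host administratively prohibited",
    11: "Network unreachable for specified service",
    12: "Host unreachable for specified service",
    13: "Communication administratively filtered",
    14: "Host precedence violation",
    15: "Precedence cutoff in effect",
}

def describe_unreachable(code: int) -> str:
    return UNREACHABLE_CODES.get(code, "Unknown code")

# Codes 2 and 3 originate at the destination host; the rest come from routers.
HOST_GENERATED = {2, 3}

print(describe_unreachable(3))   # Port unreachable
```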

Source Quench

The source-quench error message informs the source that a datagram has been
discarded due to congestion at a router or at the destination host.
Time Exceeded

Every datagram carries a ‘time-to-live’ field, which is decremented by 1 every
time the datagram visits a router. There are two reasons to send a time-exceeded
message to the source host, identified by code 0 and code 1:

1. Code 0 – When the time-to-live field is decremented to zero, the router
discards the datagram and sends a time-exceeded error message to the
originating source of the datagram.
2. Code 1 – If the destination host does not receive all the fragments of a
datagram within a set time, it discards the fragments it has and sends a
time-exceeded error message to the source host.
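A toy simulation of the code 0 case (router and host names are hypothetical) shows how a too-small TTL triggers the report; this is also the mechanism the traceroute tool exploits, by sending probes with TTL 1, 2, 3, … to discover each router on the path:

```python
def forward(ttl: int, path: list[str]) -> tuple[str, str]:
    """Walk a datagram along `path`; each router decrements the TTL.
    Returns (outcome, reporting node)."""
    for router in path[:-1]:          # the last element is the destination host
        ttl -= 1
        if ttl == 0:
            # The router discards the datagram and sends a
            # Time Exceeded message (type 11, code 0) back to the source.
            return ("time-exceeded", router)
    return ("delivered", path[-1])

route = ["R1", "R2", "R3", "host-B"]
print(forward(64, route))   # ('delivered', 'host-B')
print(forward(2, route))    # ('time-exceeded', 'R2')
```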
Parameter Problem

If the destination host or a router finds any ambiguity in the header of an IP
datagram, it discards the datagram and sends a parameter-problem error message
to the originating source host.

1. Code 0 indicates ambiguity in a header field; the pointer field points to
the byte of the datagram header that has the problem.
2. Code 1 indicates that a required part of the header is missing; in this
case the pointer field is not used.

Redirection

A router sends a redirection message to a host on the same network to update the
host's routing table. The router does not discard the received datagram;
instead, it forwards it to the appropriate router.
1. A message with code 0 redirects for a network-specific route.
2. A message with code 1 redirects for a host-specific route.
3. A message with code 2 redirects for a network-specific route for a specific
type of service.
4. A message with code 3 redirects for a host-specific route for a specific
type of service.

Query Messages

Query messages are used to diagnose network problems. Originally there were five
query messages, of which three are now deprecated. The two query messages still
in use today are:
Echo request and reply

When echo-request and echo-reply messages are exchanged between two hosts or
routers, they confirm that the two machines can communicate with each other.

If a host or a router wants to check that it can communicate with another host
or router, it sends an echo-request message to it. The machine receiving the
echo request prepares an echo-reply message and sends it back to the original
sender, confirming that it is ready to communicate.

Timestamp request and reply

Timestamp request and reply messages are used to calculate the round-trip time,
i.e. the time an IP datagram takes to travel between two hosts or routers. This
pair of messages can also be used to synchronize the clocks of two machines
(hosts or routers).
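Assuming the three timestamps carried by the request/reply pair (originate, receive, and transmit, each in milliseconds since midnight UTC) plus the local time at which the reply arrives, the round-trip time can be computed as sketched below; the function name and sample values are illustrative:

```python
def round_trip_time(originate: int, receive: int, transmit: int, returned: int) -> int:
    """RTT from an ICMP timestamp exchange (values in ms since midnight UTC).
    originate: when the request left the sender
    receive:   when the request reached the remote machine
    transmit:  when the reply left the remote machine
    returned:  when the reply arrived back at the sender
    The remote processing time (transmit - receive) is subtracted out."""
    return (returned - originate) - (transmit - receive)

# Example: request sent at 46000 ms, received remotely at 46060 ms,
# reply transmitted at 46065 ms, and received back at 46125 ms.
rtt = round_trip_time(46000, 46060, 46065, 46125)
print(rtt)   # 120 (ms)
```

Comparing the remote timestamps against local time in the same exchange is also what allows the two machines' clocks to be synchronized.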
Key Takeaways

 ICMPv4 is a network layer protocol.

 It is an error-reporting protocol: it reports errors that occur while an IP
datagram travels from the source host to the destination host.
 ICMPv4 is a message-oriented protocol used alongside IP, since IP itself
lacks error reporting.
 An IP datagram encapsulates the ICMPv4 message before it is passed to the
data link layer.
 ICMPv4 is only responsible for reporting an error; it does not correct the
error.
 ICMPv4 messages are classified into error messages and query messages, which
are further classified as described above.

In version 6 of the TCP/IP protocol suite, ICMPv4 was also revised, into
ICMPv6. Two internet debugging tools that rely on ICMPv4 are ping and
traceroute.
MOBILE INTERNET PROTOCOL (OR MOBILE IP)
Mobile IP is a communication protocol (created by extending Internet
Protocol, IP) that allows the users to move from one network to another with the
same IP address. It ensures that the communication will continue without the
user’s sessions or connections being dropped.
Terminologies:
1. Mobile Node (MN) is the hand-held communication device that the user
carries e.g. Cell phone.
2. Home Network is a network to which the mobile node originally belongs as
per its assigned IP address (home address).
3. Home Agent (HA) is a router in the home network to which the mobile node
was originally connected.
4. Home Address is the permanent IP address assigned to the mobile node
(within its home network).
5. Foreign Network is the current network to which the mobile node is visiting
(away from its home network).
6. Foreign Agent (FA) is a router in a foreign network to which the mobile
node is currently connected. The packets from the home agent are sent to the
foreign agent which delivers them to the mobile node.
7. Correspondent Node (CN) is a device on the internet communicating to the
mobile node.
8. Care-of Address (COA) is the temporary address used by a mobile node
while it is moving away from its home network.
9. Foreign agent COA: the COA is located at the FA, i.e. the COA is an IP
address of the FA. The FA is the tunnel endpoint and forwards packets to
the MN. Many MNs using the same FA can share this COA as a common COA.
10. Co-located COA: the COA is co-located if the MN has temporarily acquired an
additional IP address that acts as the COA. This address is topologically
correct, and the tunnel endpoint is at the MN itself. Co-located addresses
can be acquired using services such as DHCP.
Mobile IP

Working:
The correspondent node sends data to the mobile node. The data packets carry
the correspondent node's address as source and the home address as destination,
so they reach the home agent. But the mobile node is no longer in the home
network; it has moved into a foreign network. The foreign agent therefore sends
the care-of address, to which all the packets should now be sent, to the home
agent, and a tunnel is established between the home agent and the foreign agent
by the process of tunneling.
Tunneling establishes a virtual pipe for the packets available between a tunnel
entry and an endpoint. It is the process of sending a packet via a tunnel and it is
achieved by a mechanism called encapsulation.
Now, the home agent encapsulates each data packet inside a new packet in which
the source address is the home address and the destination is the care-of
address, and sends it through the tunnel to the foreign agent. The foreign
agent, at the other end of the tunnel, receives the data packets, decapsulates
them, and delivers them to the mobile node. The mobile node then sends its reply
to the foreign agent, and the foreign agent sends the reply directly to the
correspondent node.
Key Mechanisms in Mobile IP:
1. Agent Discovery: Agents advertise their presence by periodically
broadcasting their agent advertisement messages. The mobile node receiving
the agent advertisement messages observes whether the message is from its
own home agent and determines whether it is in the home network or foreign
network.
2. Agent Registration: Mobile node after discovering the foreign agent sends a
registration request (RREQ) to the foreign agent. The foreign agent, in turn,
sends the registration request to the home agent with the care-of-address. The
home agent sends a registration reply (RREP) to the foreign agent. Then it
forwards the registration reply to the mobile node and completes the process
of registration.
3. Tunneling: It establishes a virtual pipe for the packets available between a
tunnel entry and an endpoint. It is the process of sending a packet via a
tunnel and it is achieved by a mechanism called encapsulation. It takes place
to forward an IP datagram from the home agent to the care-of-address.
Whenever the home agent receives a packet from the correspondent node, it
encapsulates the packet with source address as home address and destination
as care-of-address.
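A minimal sketch of this tunneling step, modeling datagrams as dictionaries (all addresses, field names, and function names are illustrative):

```python
# Toy model of Mobile IP tunneling (IP-in-IP encapsulation).
HOME_ADDR = "192.0.2.10"     # MN's permanent home address
COA = "198.51.100.7"         # care-of address in the foreign network
CN_ADDR = "203.0.113.5"      # correspondent node

def home_agent_encapsulate(packet: dict, care_of_address: str) -> dict:
    """HA wraps the original datagram in an outer datagram aimed at the COA."""
    return {"src": "home-agent", "dst": care_of_address, "inner": packet}

def foreign_agent_decapsulate(tunneled: dict) -> dict:
    """FA at the tunnel endpoint strips the outer header and recovers
    the original datagram for delivery to the mobile node."""
    return tunneled["inner"]

original = {"src": CN_ADDR, "dst": HOME_ADDR, "data": "Hello MN"}
tunneled = home_agent_encapsulate(original, COA)
delivered = foreign_agent_decapsulate(tunneled)

assert delivered == original        # the MN sees the untouched datagram
print(tunneled["dst"])              # 198.51.100.7 (the care-of address)
```

The key point the sketch captures is that the inner datagram, still addressed to the home address, survives the trip unchanged; only the outer wrapper is routed to the care-of address.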
Route Optimization in Mobile IP:
The route optimization adds a conceptual data structure, the binding cache, to the
correspondent node. The binding cache contains bindings for the mobile node’s
home address and its current care-of-address. Every time the home agent
receives an IP datagram that is destined to a mobile node currently away from
the home network, it sends a binding update to the correspondent node to update
the information in the correspondent node’s binding cache. After this, the
correspondent node can directly tunnel packets to the mobile node. Mobile IP is
provided by the network providers.
Routing Algorithms – Distance Vector Routing, Link State Routing, Path Vector
Routing, Unicast Routing Protocol – Internet Structure, Routing Information
Protocol, Open Shortest Path First, Border Gateway Protocol V4, Broadcast
Routing, Multicast Routing, Multicasting Basics, Intradomain Multicast
Protocols, IGMP.
Routing Algorithms
Routing is the process of establishing the routes that data packets must follow
to reach the destination. In this process, a routing table is created which
contains information regarding routes that data packets follow. Various routing
algorithms are used for the purpose of deciding which route an incoming data
packet needs to be transmitted on to reach the destination efficiently.

Classification of Routing Algorithms


The routing algorithms can be classified as follows:
1. Adaptive Algorithms
2. Non-Adaptive Algorithms
3. Hybrid Algorithms
Types of Routing Algorithm

1. Adaptive Algorithms
These are the algorithms that change their routing decisions whenever network
topology or traffic load changes. The changes in routing decisions are reflected
in the topology as well as the traffic of the network. Also known as dynamic
routing, these make use of dynamic information such as current topology, load,
delay, etc. to select routes. Optimization parameters are distance, number of
hops, and estimated transit time.
Further, these are classified as follows:
 Isolated: In this method, each node makes its routing decisions using only
the information it has, without seeking information from other nodes. The
sending nodes don't have information about the status of a particular link.
The disadvantage is that packets may be sent through a congested network,
which may result in delay. Examples: hot potato routing and backward
learning.

 Centralized: In this method, a centralized node has complete information
about the network and makes all the routing decisions. The advantage is that
only one node is required to keep the information of the entire network; the
disadvantage is that if the central node goes down, the entire network is
down. The link state algorithm is referred to as a centralized algorithm
since it is aware of the cost of each link in the network.
 Distributed: In this method, the node receives information from its
neighbors and then takes the decision about routing the packets. A
disadvantage is that the packet may be delayed if there is a change in
between intervals in which it receives information and sends packets. It is
also known as a decentralized algorithm as it computes the least-cost path
between source and destination.
2. Non-Adaptive Algorithms
These are the algorithms that do not change their routing decisions once they
have been selected. This is also known as static routing as a route to be taken
is computed in advance and downloaded to routers when a router is booted.
Further, these are classified as follows:
 Flooding: This adopts the technique in which every incoming packet is sent
out on every outgoing line except the one it arrived on. One problem with
this is that packets may travel in a loop, and as a result a node may
receive duplicate packets. These problems can be overcome with the help of
sequence numbers, hop counts, and spanning trees.
 Random walk: In this method, packets are sent host by host or node by
node to one of its neighbors randomly. This is a highly robust method that is
usually implemented by sending packets onto the link which is least queued.

Random Walk

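The sequence-number remedy mentioned above can be sketched as a small simulation: each node remembers which (origin, sequence number) pairs it has already forwarded and drops later copies, so the flood dies out instead of looping forever. The four-node topology is hypothetical:

```python
from collections import deque

def flood(topology: dict, origin: str, seq: int) -> dict:
    """Flood one packet through `topology`; each node forwards a given
    (origin, seq) at most once, so duplicates die out. Returns the number
    of copies every node received."""
    forwarded = {node: set() for node in topology}
    received = {node: 0 for node in topology}
    forwarded[origin].add((origin, seq))
    queue = deque((origin, neigh) for neigh in topology[origin])
    while queue:
        sender, node = queue.popleft()
        received[node] += 1
        if (origin, seq) in forwarded[node]:
            continue                      # duplicate copy: drop it
        forwarded[node].add((origin, seq))
        for neigh in topology[node]:
            if neigh != sender:           # every outgoing line except arrival
                queue.append((node, neigh))
    return received

net = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "D"], "D": ["B", "C"]}
copies = flood(net, "A", seq=1)
print(copies)   # every node other than A gets at least one copy; extras are duplicates
```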
3. Hybrid Algorithms

As the name suggests, these algorithms are a combination of both adaptive and
non-adaptive algorithms. In this approach, the network is divided into several
regions, and each region uses a different algorithm.
Further, these are classified as follows:
 Link-state: In this method, each router creates a detailed and complete map
of the network which is then shared with all other routers. This allows for
more accurate and efficient routing decisions to be made.
 Distance vector: In this method, each router maintains a table that contains
information about the distance and direction to every other node in the
network. This table is then shared with other routers in the network. The
disadvantage of this method is that it may lead to routing loops.
Difference between Adaptive and Non-Adaptive
Routing Algorithms
The main difference between Adaptive and Non-Adaptive Algorithms is:
Adaptive Algorithms are the algorithms that change their routing decisions
whenever network topology or traffic load changes. It is called Dynamic
Routing. Adaptive Algorithm is used in a large amount of data, highly complex
network, and rerouting of data.
Non-Adaptive Algorithms are algorithms that do not change their routing
decisions once they have been selected. It is also called static Routing. Non-
Adaptive Algorithm is used in case of a small amount of data and a less
complex network.
Difference between Routing and Flooding
The difference between Routing and Flooding is listed below:

Routing:
 A routing table is required.
 May give the shortest path.
 Less reliable.
 Traffic is less.
 No duplicate packets.

Flooding:
 No routing table is required.
 Always gives the shortest path.
 More reliable.
 Traffic is high.
 Duplicate packets are present.

Distance-Vector Routing
A distance-vector routing (DVR) protocol requires that a router inform its
neighbors of topology changes periodically. It is historically known as the old
ARPANET routing algorithm (also known as the Bellman-Ford algorithm).
Bellman-Ford basics – Each router maintains a distance vector table containing
the distance between itself and ALL possible destination nodes. Distances,
based on a chosen metric, are computed using information from the neighbors'
distance vectors.
Information kept by DV router -
 Each router has an ID
 Associated with each link connected to a router,
 there is a link cost (static or dynamic).
 Intermediate hops

Distance Vector Table Initialization -


 Distance to itself = 0
 Distance to ALL other routers = infinity.

Distance Vector Algorithm –


1. A router transmits its distance vector to each of its neighbors in a routing
packet.
2. Each router receives and saves the most recently received distance vector
from each of its neighbors.
3. A router recalculates its distance vector when:
 It receives a distance vector from a neighbor containing different
information than before.
 It discovers that a link to a neighbor has gone down.
The DV calculation is based on minimizing the cost to each destination:
Dx(y) = estimate of the least cost from x to y
C(x,v) = cost known by node x to each neighbor v
Dx = [Dx(y): y ∈ N] = distance vector maintained by node x
Node x also maintains its neighbors' distance vectors
– for each neighbor v, x maintains Dv = [Dv(y): y ∈ N]
Note –
 From time to time, each node sends its own distance vector estimate to its
neighbors.
 When a node x receives a new DV estimate from any neighbor v, it saves v's
distance vector and updates its own DV using the Bellman-Ford equation:
 Dx(y) = min { C(x,v) + Dv(y), Dx(y) } for each node y ∈ N
Example – Consider 3 routers X, Y and Z as shown in the figure. Each router has
its own routing table, and every routing table contains the distance to the
destination nodes.

Consider router X: X shares its routing table with its neighbors, and its
neighbors share their routing tables with X. The distance from node X to each
destination is then calculated using the Bellman-Ford equation:
Dx(y) = min { C(x,v) + Dv(y) } for each node y ∈ N
As we can see, the distance from X to Z is smaller when Y is the intermediate
node (hop), so the routing table of X is updated accordingly.
Similarly for Z also –

Finally the routing table for all –

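The exchange described above can be simulated directly. Since the figure with the link costs is not reproduced here, the costs below are assumed (C(X,Y)=2, C(Y,Z)=1, C(X,Z)=7), chosen so that X reaches Z more cheaply through Y than over its direct link:

```python
import itertools

INF = float("inf")
nodes = ["X", "Y", "Z"]
# Assumed symmetric link costs (the original figure is not available).
cost = {("X", "Y"): 2, ("Y", "X"): 2, ("Y", "Z"): 1,
        ("Z", "Y"): 1, ("X", "Z"): 7, ("Z", "X"): 7}

# Each router starts knowing only itself (0) and its direct link costs.
D = {x: {y: (0 if x == y else cost.get((x, y), INF)) for y in nodes}
     for x in nodes}

def exchange_round() -> bool:
    """One round of neighbors exchanging distance vectors, applying the
    Bellman-Ford equation Dx(y) = min over v of { C(x,v) + Dv(y) }.
    Returns True if any table changed."""
    changed = False
    advertised = {x: dict(D[x]) for x in nodes}   # vectors as last advertised
    for x, y in itertools.product(nodes, nodes):
        if x == y:
            continue
        for v in nodes:
            if (x, v) in cost:                     # v is a neighbor of x
                candidate = cost[(x, v)] + advertised[v][y]
                if candidate < D[x][y]:
                    D[x][y] = candidate
                    changed = True
    return changed

while exchange_round():
    pass                                           # repeat until convergence

print(D["X"]["Z"])   # 3: X reaches Z through Y, cheaper than the direct link (7)
```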
Advantages of Distance Vector routing –


 It is simpler to configure and maintain than link state routing.
Disadvantages of Distance Vector routing –
 It is slower to converge than link state.
 It is at risk from the count-to-infinity problem.
 It creates more traffic than link state since a hop count change must
be propagated to all routers and processed on each router. Hop count
updates take place on a periodic basis, even if there are no changes in
the network topology, so bandwidth-wasting broadcasts still occur.
 For larger networks, distance vector routing results in larger routing
tables than link state since each router must know about all other
routers. This can also lead to congestion on WAN links.

Unicast means the transmission from a single sender to a single receiver. It is a point-to-point
communication between the sender and receiver. There are various unicast protocols such as
TCP, HTTP, etc.
 TCP is the most commonly used unicast protocol. It is a connection-oriented protocol that relies
on acknowledgment from the receiver side.
 HTTP stands for HyperText Transfer Protocol. It is an object-oriented protocol for
communication.

Unicast Routing and Link State Routing

Major Protocols of Unicast Routing


1. Distance Vector Routing: Distance-Vector routers use a distributed algorithm to compute their
routing tables.
2. Link-State Routing: Link-State routing uses link-state routers to exchange messages that
allow each router to learn the entire network topology.
3. Path-Vector Routing: It is a routing protocol that maintains the path that is updated
dynamically.
Link State Routing
Link state routing is the second family of routing protocols. While distance-vector routers use a
distributed algorithm to compute their routing tables, link-state routing uses link-state routers to
exchange messages that allow each router to learn the entire network topology. Based on this
learned topology, each router is then able to compute its routing table by using the shortest path
computation.
Link state routing is a technique in which each router shares the knowledge of
its neighborhood with every other router in the internetwork. There are three
keys to understanding the link state routing algorithm:
1. Knowledge about the neighborhood: Instead of sending its entire routing
table, a router sends information about its neighborhood only. A router
broadcasts the identities and costs of its directly attached links to the
other routers.
2. Flooding: Each router sends this information to every other router on the
internetwork. This process is known as flooding: every router that receives
the packet sends copies to all of its neighbors, and finally each and every
router receives a copy of the same information.
3. Information Sharing: A router sends the information to every other router
only when a change occurs in the information.
Link state routing has two phases:
1. Reliable Flooding: Initial state – each node knows the cost of its
neighbors. Final state – each node knows the entire graph.
2. Route Calculation: Each node uses Dijkstra's algorithm on the graph to
calculate the optimal routes to all nodes. The link state routing algorithm
is therefore also known as Dijkstra's algorithm, which finds the shortest
path from one node to every other node in the network.
Features of Link State Routing Protocols
 Link State Packet: A small packet that contains routing information.
 Link-State Database: A collection of information gathered from the link-state packet.
 Shortest Path First Algorithm (Dijkstra algorithm): A calculation performed on the database
results in the shortest path
 Routing Table: A list of known paths and interfaces.
Calculation of Shortest Path
To find the shortest path, each node needs to run the famous Dijkstra
algorithm. Let us understand how we can find the shortest path using an example.
Illustration
To understand the Dijkstra Algorithm, let’s take a graph and find the shortest path from the source
to all nodes.
Note: We use a boolean array sptSet[] to represent the set of vertices included in SPT. If a
value sptSet[v] is true, then vertex v is included in SPT, otherwise not. Array dist[] is used to
store the shortest distance values of all vertices.
Consider the below graph and src = 0.

Shortest Path Calculation – Step 1

STEP 1: The set sptSet is initially empty and distances assigned to vertices are {0, INF, INF, INF,
INF, INF, INF, INF} where INF indicates infinite. Now pick the vertex with a minimum distance
value. The vertex 0 is picked and included in sptSet. So sptSet becomes {0}. After including 0 to
sptSet, update the distance values of its adjacent vertices. Adjacent vertices of 0 are 1 and 7. The
distance values of 1 and 7 are updated as 4 and 8.
The following subgraph shows vertices and their distance values. Vertices included in SPT are
included in GREEN color.

Shortest Path Calculation – Step 2

STEP 2: Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). The vertex 1 is picked and added to sptSet. So sptSet now becomes {0, 1}. Update the
distance values of adjacent vertices of 1. The distance value of vertex 2 becomes 12.

Shortest Path Calculation – Step 3

STEP 3: Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}. Update the distance values of
adjacent vertices of 7. The distance values of vertices 6 and 8 become finite (9 and 15
respectively).

Shortest Path Calculation – Step 4

STEP 4: Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}. Update the distance values of
adjacent vertices of 6. The distance value of vertex 5 and 8 are updated.

Shortest Path Calculation – Step 5

We repeat the above steps until sptSet includes all vertices of the given graph. Finally, we get the
following Shortest Path Tree (SPT).

Shortest Path Calculation – Step 6
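The steps above can be reproduced with a standard heap-based Dijkstra implementation. The edge list below is assumed from the step narrative (vertex 1 at distance 4, vertex 7 at 8, vertex 2 at 12, and so on), since the figure itself is not reproduced here:

```python
import heapq

# 9-vertex undirected example graph, src = 0; weights assumed from the steps.
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
         (2, 8, 2), (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10),
         (5, 6, 2), (6, 7, 1), (6, 8, 6), (7, 8, 7)]

def dijkstra(n: int, edges: list, src: int) -> list:
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))             # undirected graph
    dist = [float("inf")] * n
    dist[src] = 0
    heap = [(0, src)]                     # min-heap keyed on distance
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale entry: u already finalized
        for v, w in adj[u]:
            if d + w < dist[v]:           # found a shorter path to v
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra(9, edges, 0))   # [0, 4, 12, 19, 21, 11, 9, 8, 14]
```

The heap plays the role of "pick the vertex with minimum distance value not yet in sptSet"; stale heap entries are skipped instead of being removed, which keeps the code short.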

Characteristics of Link State Protocol


 It requires a large amount of memory.
 Shortest path computations require many CPU cycles.
 It uses little network bandwidth and reacts quickly to topology changes.
 All items in the database must be sent to neighbors to form link-state packets.
 All neighbors must be trusted in the topology.
 Authentication mechanisms can be used to avoid undesired adjacencies and problems.
 No split-horizon techniques are possible in link-state routing.
Protocols of Link State Routing
1. Open Shortest Path First (OSPF)
2. Intermediate System to Intermediate System (IS-IS)

Structure of the Internet


1. Internet Address:
Computers connected to the internet are part of a worldwide network of machines,
so each machine/device has its own unique address. Internet addresses take the form
"kkk.kkk.kkk.kkk", where each "kkk" ranges from 0 to 255. This form of address is
known as an IP (Internet Protocol) address. Fig. 1 describes the connection between
two computers over the internet: both systems have unique IP addresses, with the
internet as the shared medium between them.
Figure 1: Computer Connection via Internet
If a client connects to the internet through an Internet Service Provider (ISP), the
client's system is allocated a temporary IP address for the duration of the session.
However, if the client is part of the internet through a local area network (LAN), it is
probably assigned a permanent IP address. Either way, while connected, the system has a
unique IP address. A handy program named "Ping" can be used to check the internet
connection; it is available on all Microsoft Windows operating systems and on most
flavours of Unix.

2. Protocol Stacks and Packets


Now that the device is connected to the internet and has a unique address, how does it
communicate with a device at the other end? Consider an example. As in Figure 1, one
system has the IP address 173.196.95.98 and the other has 162.194.60.98. Suppose you
want to send the message "Hello Friend" to the other computer from "Your computer". The
medium of communication is the wire that connects "Your computer" to the internet; if
you are using an ISP, the message travels over the ISP's phone line. The message is
first encoded into digital form: all the alphanumeric characters are converted into an
electronic signal. The signal is delivered to the other computer and then decoded back
into its original form on the receiving system. This conversion of messages from
alphanumeric form to a digital signal and back is performed by the protocol stack,
which is part of every operating system (Windows, Unix, Android, etc.). The protocol
stack used on the internet is known as TCP/IP, as these are the primary protocols used
for communication.

Fig. 2 briefly describes the path framework related to that message from “Your computer” to another
computer.

Figure 2: Communication Path Framework


1. The message that needs to be sent is written in an application on “Your computer” it starts
from the top using the protocol stack and moves downward.
2. If the message is large, the stack layer breaks the message into smaller chunks so that data
management remains stable. The chunks of data are known as Packets.
3. The data moves from the application layer to the TCP layer, where each packet is
assigned a port number. Several applications may be using the network at the same
time, so it is essential to know which application sent the message; the receiving
computer delivers the message to the application listening on that same port.
4. After the necessary processing at the TCP level, the packets move to the IP layer,
which adds the destination address. At this level each packet carries both a port
number and an IP address.
5. The hardware layer is responsible for converting the packets into electronic
signals and sending them over the telephone line.
6. Internet services provider (ISP) is also attached to the internet, where the ISP router
examines the recipient’s address. The next stop of the packet is another router.
7. Eventually, the packets reach the other computer; this time they travel up the
stack, starting from the bottom.
8. As the packets move upwards, the data that helped them reach the destination is
stripped off; this includes the IP address and port number.
9. On reaching the top of the stack, the packets are reassembled into the original
message sent by "Your computer".
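Steps 2 and 9 (splitting a message into numbered packets and reassembling them at the far end) can be sketched as follows; the chunk size and port numbers are arbitrary values chosen for the example:

```python
def packetize(message: str, chunk_size: int, src_port: int = 5000,
              dst_port: int = 5001) -> list:
    """Break the message into fixed-size chunks; each packet carries a
    sequence number (its byte offset) and the port numbers."""
    return [{"seq": i, "src_port": src_port, "dst_port": dst_port,
             "data": message[i:i + chunk_size]}
            for i in range(0, len(message), chunk_size)]

def reassemble(packets: list) -> str:
    """Restore the original message by ordering packets on their seq field."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("Hello Friend", chunk_size=4)
packets.reverse()                 # simulate out-of-order arrival
print(reassemble(packets))        # Hello Friend
```

The sequence numbers are what let the receiving stack rebuild the message even when the individual packets took different routes and arrived out of order.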
3. Infrastructure of the Network:
The discussion above makes it clear how two computers at different locations interact
with each other over the internet. But how are the packets physically transmitted and
received between the two systems? Fig. 3 briefly explains what resides between all the
layers.

Fig. 3 expands on Fig. 1. The physical connection through the phone network to the ISP
is easy to guess, but it deserves some explanation. The ISP maintains a pool of modems,
managed by a device that controls data flow from the modem pool to a backbone or
dedicated line router. This setup can be thought of as a port server, as it serves
access to the network. Billing and usage information is typically collected here as
well.
After the packets cross the phone network and the nearby ISP equipment, they are
routed onto the ISP's mainline or the backbone from which the ISP purchases bandwidth.
From here the packets typically pass through many routers, backbones, dedicated lines,
and other networks until they reach the target, the device with the other computer's
address.

For users of Microsoft Windows or a Unix flavour, if you have an internet connection,
you can trace the path your packets take by using an internet program known as
traceroute. Just like the PING command, it can be run from the command prompt.
Traceroute prints all the routers, computer systems, and various other internet
entities through which the packets travel on the way to their actual destination; the
internet routers decide how the packets are forwarded. Multiple routers are shown in
Fig. 3 to make the networking structure clear.
The infrastructure of the internet:
The framework of the internet consists of multiple interconnected large networks,
called Network Service Providers (NSPs). UUNet, IBM, CerfNet, SprintNet, and PSINet
are well-known examples of NSPs. Packet traffic is exchanged among these peer
networks. Each NSP is required to connect to three Network Access Points (NAPs), where
packet traffic may jump from one NSP's backbone to another's. NSPs also interconnect
at Metropolitan Area Exchanges (MAEs). MAEs serve the same purpose as NAPs but are
privately owned.

Routing Information Protocol (RIP)


Routing Information Protocol (RIP) is a dynamic routing protocol that uses hop count as a
routing metric to find the best path between the source and the destination network. It is a
distance-vector routing protocol that has an AD value of 120 and works on the Network layer of the
OSI model. RIP uses port number 520.
Hop Count
Hop count is the number of routers between the source and destination networks. The path with the lowest hop count is considered the best route to reach a network and is therefore placed in the routing table. RIP prevents routing loops by limiting the number of hops allowed in a path from source to destination. The maximum hop count allowed in RIP is 15; a hop count of 16 denotes network unreachable.
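The hop-count rule above can be sketched in Python. This is a minimal distance-vector update, not a full RIP implementation; the router names and networks are hypothetical, and 16 stands for "unreachable".

```python
INFINITY = 16  # RIP treats a hop count of 16 as "network unreachable"

def update_route(table, network, advertised_hops, neighbor):
    """Apply the RIP rule: cost through a neighbor is its advertised
    hop count plus one, capped at 16 (unreachable)."""
    cost = min(advertised_hops + 1, INFINITY)
    best = table.get(network)
    # Install the route if it is new, cheaper, or from the current next hop
    if best is None or cost < best[0] or best[1] == neighbor:
        table[network] = (cost, neighbor)

routes = {}
update_route(routes, "10.10.10.0/24", 2, "R2")     # 3 hops via R2
update_route(routes, "10.10.10.0/24", 1, "R3")     # 2 hops via R3 wins
update_route(routes, "192.168.99.0/24", 15, "R2")  # 15 + 1 = unreachable
print(routes["10.10.10.0/24"])     # (2, 'R3')
print(routes["192.168.99.0/24"])   # (16, 'R2')
```

Note how the 15-hop advertisement becomes 16 after the increment, which is exactly why RIP cannot serve paths longer than 15 hops.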
Features of RIP
1. Updates of the network are exchanged periodically.
2. Updates (routing information) are always broadcast.
3. Full routing tables are sent in updates.
4. Routers always trust routing information received from neighbor routers. This is also known
as Routing on rumors.
RIP versions:
There are three versions of routing information protocol – RIP Version1, RIP Version2,
and RIPng.

RIP v1                           RIP v2                           RIPng

Sends updates as broadcast       Sends updates as multicast       Sends updates as multicast

Broadcast at 255.255.255.255     Multicast at 224.0.0.9           Multicast at FF02::9 (RIPng can
                                                                  only run on IPv6 networks)

Doesn't support authentication   Supports authentication of       –
of update messages               RIPv2 update messages

Classful routing protocol        Classless protocol; also         Classless updates are sent
                                 supports classful

RIP v1 is known as a Classful Routing Protocol because it does not send subnet mask information in its routing updates.
RIP v2 is known as a Classless Routing Protocol because it sends subnet mask information in its routing updates.

>> Use the debug command to get the details:
# debug ip rip
>> Use this command to show all routes configured in a router, say for router R1:
R1# show ip route
>> Use this command to show all protocols configured in a router, say for router R1:
R1# show ip protocols
Configuration:

Consider the above-given topology which has 3-routers R1, R2, R3. R1 has IP
address 172.16.10.6/30 on s0/0/1, 192.168.20.1/24 on fa0/0. R2 has IP address
172.16.10.2/30 on s0/0/0, 192.168.10.1/24 on fa0/0. R3 has IP address
172.16.10.5/30 on s0/1, 172.16.10.1/30 on s0/0, 10.10.10.1/24 on fa0/0.
Configure RIP for R1 :
R1(config)# router rip
R1(config-router)# network 192.168.20.0
R1(config-router)# network 172.16.10.4
R1(config-router)# version 2
R1(config-router)# no auto-summary
Note: the no auto-summary command disables auto-summarisation. If auto-summarisation is not disabled, routes are summarised to their classful network boundaries, as in Version 1.
Configuring RIP for R2:
R2(config)# router rip
R2(config-router)# network 192.168.10.0
R2(config-router)# network 172.16.10.0
R2(config-router)# version 2
R2(config-router)# no auto-summary
Similarly, Configure RIP for R3 :
R3(config)# router rip
R3(config-router)# network 10.10.10.0
R3(config-router)# network 172.16.10.4
R3(config-router)# network 172.16.10.0
R3(config-router)# version 2
R3(config-router)# no auto-summary
RIP timers:
 Update timer: Routers operating RIP exchange their routing tables periodically, every 30 seconds by default, driven by the update timer.
 Invalid timer: If no update for a route arrives within 180 seconds, the router considers the route invalid and marks its hop count as 16.
 Hold-down timer: After a route is declared invalid, the router ignores updates with a worse metric for that route until the hold-down period expires, which gives the network time to stabilise. It is 180 seconds by default.
 Flush timer: This is the time after which an invalid route's entry is removed from the routing table. It is 60 seconds by default and starts once the route has been declared invalid, so a route is flushed 180 + 60 = 240 seconds after its last update.
Note that all these timers are adjustable. Use this command to change them (update, invalid, hold-down, and flush values, in seconds):
R1(config-router)# timers basic <update> <invalid> <holddown> <flush>
R1(config-router)# timers basic 20 80 80 90
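The default timer progression can be sketched in Python. This is a simplified model of a route's lifetime (hold-down interacts with incoming updates in ways not shown here):

```python
# Default RIP timers (seconds); adjustable on a Cisco router with
# "timers basic <update> <invalid> <holddown> <flush>"
UPDATE, INVALID, HOLDDOWN, FLUSH = 30, 180, 180, 240

def route_state(seconds_since_last_update):
    """Classify a route by how long ago its last update arrived."""
    if seconds_since_last_update < INVALID:
        return "valid"
    if seconds_since_last_update < FLUSH:
        return "invalid (hop count advertised as 16)"
    return "flushed from the routing table"

for t in (0, 60, 180, 239, 240):
    print(t, "->", route_state(t))
```

With the defaults, a silent neighbour's route stays valid until 180 seconds, is advertised as unreachable between 180 and 240 seconds, and is removed at 240 seconds.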
Typical uses of RIP:
1. Small to medium-sized networks: RIP is normally used in small to medium-sized networks with relatively simple routing requirements. It is easy to configure and requires little maintenance, which makes it a popular choice for small organisations.
2. Legacy networks: RIP is still used in some legacy networks that were set up before more advanced routing protocols were developed. These networks may not be worth the cost and effort of upgrading, so they continue to use RIP as their routing protocol.
3. Lab environments: RIP is often used in lab environments for testing and learning purposes. Being a simple protocol, it is easy to set up, which makes it a good choice for educational use.
4. Backup or redundant routing: In some networks, RIP may be used as a backup or redundant routing protocol in case the primary routing protocol fails or runs into problems. RIP is not as efficient as other routing protocols, but it can be useful as a fallback in an emergency.
Advantages of RIP :
 Simplicity: RIP is a relatively simple protocol to configure and manage,
making it an ideal choice for small to medium-sized networks with limited
resources.
 Easy implementation: RIP is easy to implement, as it does not require
much technical expertise to set up and maintain.
 Convergence: in small networks, RIP's periodic updates let it adapt to changes in network topology and keep routing packets without manual intervention.
 Automatic updates: RIP automatically updates routing tables at regular
intervals, ensuring that the most up-to-date information is being used to
route packets.
 Low bandwidth overhead: RIP uses a relatively low amount of bandwidth
to exchange routing information, making it an ideal choice for networks with
limited bandwidth.
 Compatibility: RIP is compatible with many different types of routers and
network devices, making it easy to integrate into existing networks.
Disadvantages of RIP :
 Limited scalability: RIP has limited scalability, and it may not be the best
choice for larger networks with complex topologies. RIP can only support up
to 15 hops, which may not be sufficient for larger networks.
 Slow convergence: RIP converges more slowly than link-state protocols such as OSPF. This can lead to delays and temporary routing errors while the network stabilises.
 Routing loops: RIP can sometimes create routing loops, which can cause
network congestion and reduce overall network performance.
 Limited support for load balancing: RIP does not support sophisticated
load balancing, which can result in suboptimal routing paths and uneven
network traffic distribution.
 Security vulnerabilities: RIP offers little native security (only the optional update authentication added in RIPv2), making it vulnerable to attacks such as spoofing and tampering.
 Inefficient use of bandwidth: RIP uses a lot of bandwidth for periodic
updates, which can be inefficient in networks with limited bandwidth.
Open Shortest Path First (OSPF) Protocol
Open Shortest Path First (OSPF) is a link-state routing protocol that is used to find the best path between the source and the destination router using its own Shortest Path First (SPF) algorithm. OSPF was developed by the Internet Engineering Task Force (IETF) as an Interior Gateway Protocol (IGP), i.e., a protocol that aims at moving packets within a large autonomous system or routing domain. It is a network layer protocol that works on protocol number 89 and uses AD value 110. OSPF uses multicast address 224.0.0.5 for normal communication and 224.0.0.6 for updates to the designated router (DR)/backup designated router (BDR).

OSPF Terms

1. Router Id – It is the highest active IP address present on the router. First,


the highest loopback address is considered. If no loopback is configured
then the highest active IP address on the interface of the router is
considered.
2. Router priority – It is an 8-bit value assigned to a router operating OSPF,
used to elect DR and BDR in a broadcast network.
3. Designated Router (DR) – It is elected to minimize the number of
adjacencies formed. DR distributes the LSAs to all the other routers. DR is
elected in a broadcast network to which all the other routers share their
DBD. In a broadcast network, the router requests for an update to DR, and
DR will respond to that request with an update.
4. Backup Designated Router (BDR) – BDR is a backup to DR in a broadcast
network. When DR goes down, BDR becomes DR and performs its
functions.
5. DR and BDR election – DR and BDR election takes place in the broadcast
network or multi-access network. Here are the criteria for the election:
 The router having the highest router priority is declared DR.
 If there is a tie in router priority, the highest router ID is considered. First, the highest loopback address is considered; if no loopback is configured, the highest active IP address on an interface of the router is considered.
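The election criteria above can be sketched in Python. Router names, priorities, and IDs here are hypothetical; a real election also honours existing DRs rather than pre-empting them.

```python
def elect_dr_bdr(routers):
    """routers: list of (name, priority, router_id) tuples.
    Highest priority wins; the router ID (an IP address) breaks ties.
    Routers with priority 0 never become DR or BDR."""
    def ip_key(ip):
        # Compare dotted-decimal addresses numerically, octet by octet
        return tuple(int(part) for part in ip.split("."))
    eligible = [r for r in routers if r[1] > 0]
    ranked = sorted(eligible, key=lambda r: (r[1], ip_key(r[2])), reverse=True)
    dr = ranked[0][0] if ranked else None
    bdr = ranked[1][0] if len(ranked) > 1 else None
    return dr, bdr

routers = [("R1", 1, "10.0.0.1"), ("R2", 1, "10.0.0.9"), ("R3", 5, "10.0.0.2")]
print(elect_dr_bdr(routers))  # ('R3', 'R2'): R3 has the highest priority;
                              # R2 beats R1 on router ID and becomes BDR
```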

OSPF States
The device operating OSPF goes through certain states. These states are:
 Down – In this state, no hello packets have been received on the interface.
 Note – The Down state doesn't mean that the interface is physically down; it means that the OSPF adjacency process has not started yet.
 INIT – In this state, the hello packets have been received from the other
router.
 2WAY – In the 2WAY state, both the routers have received the hello packets
from other routers. Bidirectional connectivity has been established.
 Note – In between the 2WAY state and Exstart state, the DR and
BDR election takes place.
 Exstart – In this state, NULL DBDs are exchanged and the master and slave election takes place. The router with the higher router ID becomes the master, and the other becomes the slave. This election decides which router will send its DBD first (only routers that have formed a neighbourship take part in this election).
 Exchange – In this state, the actual DBDs are exchanged.
 Loading – In this state, LSR, LSU, and LSA (Link State Acknowledgement)
are exchanged.
Important – When a router receives a DBD from another router, it compares its own DBD with the received one. If the received DBD is more up to date than its own, the router sends an LSR to the other router stating which links are needed. The other router replies with an LSU containing the required updates, and in return the router replies with a Link State Acknowledgement.
 Full – In this state, synchronization of all the information takes place. OSPF
routing can begin only after the Full state.
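The DBD comparison described above can be sketched in Python. The LSA names and sequence numbers are hypothetical; real OSPF compares full LSA headers, but the "request only what is missing or newer" rule is the same.

```python
def build_lsr(own_db, neighbor_dbd):
    """During the Exchange/Loading states a router compares the neighbor's
    database description (DBD) with its own and requests (via LSR) only
    the LSAs that are missing or carry a newer sequence number."""
    requests = []
    for lsa_id, neighbor_seq in neighbor_dbd.items():
        # -1 means "we don't have this LSA at all"
        if own_db.get(lsa_id, -1) < neighbor_seq:
            requests.append(lsa_id)
    return requests

own_db = {"lsa-A": 5, "lsa-B": 7}
neighbor_dbd = {"lsa-A": 6, "lsa-B": 7, "lsa-C": 1}
print(build_lsr(own_db, neighbor_dbd))  # ['lsa-A', 'lsa-C']
```

The neighbour answers the LSR with an LSU carrying those LSAs, and the router acknowledges with an LSAck, after which both databases are synchronised (the Full state).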

Border Gateway Protocol (BGP)

Border Gateway Protocol (BGP) is used to exchange routing information for the internet and is the protocol used between ISPs, which are different ASes.
The protocol can connect together any internetwork of autonomous systems using an arbitrary topology. The only requirement is that each AS have at least one router that is able to run BGP, and that this router connect to at least one other AS's BGP router. BGP's main function is to exchange network reachability information with other BGP systems. Border Gateway Protocol constructs a graph of autonomous systems based on the information exchanged between BGP routers.
Characteristics of Border Gateway Protocol (BGP):
 Inter-autonomous system configuration: The main role of BGP is to provide communication between two autonomous systems.
 BGP supports the next-hop paradigm.
 Coordination among multiple BGP speakers within an AS (autonomous system).
 Path information: BGP advertisements also include path information, along with the reachable-destination and next-destination pair.
 Policy support: BGP can implement policies configured by the administrator. For example, a router running BGP can be configured to distinguish between the routes that are known from within the AS and those that are known from outside the AS.
 Runs over TCP.
 BGP conserves network bandwidth.
 BGP supports CIDR.
 BGP also supports security.
Functionality of Border Gateway Protocol (BGP):
BGP peers perform three functions, which are given below.
1. The first function consists of initial peer acquisition and authentication. Both peers establish a TCP connection and perform a message exchange that guarantees both sides have agreed to communicate.
2. The second function mainly focuses on sending negative or positive reachability information.
3. The third function verifies that the peers, and the network connection between them, are functioning correctly.
BGP Route Information Management Functions:
 Route storage: Each BGP speaker stores information about how to reach other networks.
 Route update: Special techniques are used to determine when and how to use the information received from peers to properly update the routes.
 Route selection: Each BGP speaker uses the information in its route databases to select good routes to each network on the internet.
 Route advertisement: Each BGP speaker regularly tells its peers what it knows about various networks and the methods to reach them.
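One well-known use of the path information carried in BGP advertisements is loop prevention: a speaker rejects any route whose AS path already contains its own AS number, and prepends its AS when re-advertising. A minimal sketch (AS numbers and the prefix are hypothetical):

```python
def accept_route(my_as, advertisement):
    """A BGP speaker rejects any route whose AS path already contains
    its own AS number; this is how the path information in BGP
    advertisements prevents inter-AS routing loops."""
    prefix, as_path = advertisement
    if my_as in as_path:
        return None                      # loop detected, discard the route
    return (prefix, [my_as] + as_path)   # prepend own AS and accept

print(accept_route(65001, ("203.0.113.0/24", [65002, 65003])))
# ('203.0.113.0/24', [65001, 65002, 65003])
print(accept_route(65001, ("203.0.113.0/24", [65002, 65001])))
# None: our own AS is already in the path
```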

Broadcast Routing
Broadcast routing plays a key role in computer networking and telecommunications. It involves transmitting data, messages, or signals from one source to many destinations within a network. Unlike unicast routing (one-to-one communication) or multicast routing (one-to-many communication), broadcast routing ensures that the information reaches all devices or nodes within the network.
In this article, we will explore broadcast routing in today's era of communication.

Mechanisms for Broadcast Routing


The mechanisms and protocols are employed to efficiently distribute data to
multiple recipients through broadcast routing. Here are some important
methods:
 Flooding: Flooding is the simplest approach to broadcast routing. The sender broadcasts the message to all connected devices, which then forward it to their connected devices, and so on. This continues until the message reaches all intended recipients or a predefined maximum number of hops is reached. However, flooding can lead to network congestion and inefficiency.
 Spanning Tree Protocol (STP): STP is used in Ethernet networks to prevent loops and ensure loop-free broadcast routing. It establishes a tree structure that connects all devices in the network while avoiding redundant paths. Reduced network congestion and the avoidance of duplicate broadcast messages are the benefits of this approach.
 The Internet Group Management Protocol (IGMP): IGMP is a communication protocol used in IP networks to manage group memberships. It enables hosts to join or leave multicast groups, ensuring that only interested recipients receive the multicast traffic. This not only enhances network efficiency but also prevents unnecessary data transmission.
 Broadcast domains: Segmenting a network into broadcast domains, also known as subnetting, is a way to manage and control the scope of broadcast messages. By dividing a network into segments, we contain the impact of broadcast traffic within each segment, minimizing its overall effect on the entire network.
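The flooding mechanism described above can be sketched in Python. The topology is hypothetical; the `reached` set models the duplicate suppression a real implementation needs, and the loop enforces the hop limit.

```python
def flood(links, source, max_hops):
    """Naive flooding: every node forwards the packet to all of its
    neighbours until a hop limit is reached. links maps a node to the
    list of its directly connected nodes."""
    reached = {source}
    frontier = [source]
    for _ in range(max_hops):
        nxt = []
        for node in frontier:
            for neighbor in links.get(node, []):
                if neighbor not in reached:  # suppress duplicate forwarding
                    reached.add(neighbor)
                    nxt.append(neighbor)
        frontier = nxt
    return reached

links = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(sorted(flood(links, "A", max_hops=2)))  # ['A', 'B', 'C', 'D']
```

With `max_hops=2`, node E is not reached (it is three hops away); without the `reached` set, D would forward two copies, which is exactly how broadcast storms begin.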
Importance of Broadcast Routing
The significance of broadcast routing in communication systems cannot be overstated, for several reasons:
 Efficient data distribution: Broadcast routing ensures that information reaches all intended recipients simultaneously. This efficiency makes it ideal for applications such as live event broadcasting, software update distribution, or emergency alerts.
 Scalability: It allows networks to expand without requiring extra routing configuration. Adding devices to a network doesn't necessitate changes, making it practical for large-scale networks.
 Reliability: Broadcast routing provides communication redundancy. If one path fails, data can still be delivered through alternative routes, enhancing the reliability of the network.
 Security: Although broadcast routing is commonly associated with open broadcasts, it can also be employed in networks for secure communications. In such cases, encryption and access-control mechanisms are used to ensure data privacy.
Challenges and Solutions
While broadcast routing has its benefits, it also presents challenges that require attention:
 Broadcast storms: Flooding-based broadcast routing can result in broadcast storms, where an overwhelming amount of traffic floods the network, leading to congestion and decreased efficiency. Network administrators implement measures such as storm control and rate limiting to tackle this issue.
 Security concerns: Broadcasting information over networks can pose security risks. To address this, encryption and authentication mechanisms are commonly employed to safeguard data during transmission.
 Scalability: Managing broadcast traffic becomes more complex as networks grow larger. To maintain scalability, it is beneficial to implement broadcast domains and use efficient routing algorithms.
 Bandwidth consumption: Since broadcast traffic consumes network bandwidth, it is crucial to design and monitor the network so that it does not become overloaded.
Multicasting
Multicast is a method of group communication in which the sender sends data to multiple receivers or nodes in the network simultaneously. Multicasting is a type of one-to-many and many-to-many communication, as it allows one or more senders to send data packets to multiple receivers at once across LANs or WANs. This helps minimize duplicate traffic on the network, because a single transmission can be received by multiple nodes at once.
Multicasting can be considered a special case of broadcasting: it works similarly to broadcasting, but the information is sent only to the targeted or specific members of the network. The same task could be accomplished by transmitting an individual copy to each intended user or node, but sending individual copies is inefficient and can increase network latency. To overcome these shortcomings, multicasting allows a single transmission to be split up among multiple users, which reduces the bandwidth consumed.


Applications: Multicasting is used in many areas, such as:
1. Internet Protocol (IP) networks
2. Streaming media
3. Video conferencing applications and webcasts
Multicasting uses classful Class D IP addresses, which range from 224.0.0.0 to 239.255.255.255.
IP Multicast: Multicasting that takes place over the internet is known as IP multicasting. These multicasts follow the Internet Protocol (IP) to transmit data. IP multicasting uses a mechanism known as multicast trees to transmit information among the users of the network. A multicast tree allows a single transmission to branch out to the desired receivers. The branches are created at the internet routers, and they are built so that the length of the transmission is minimal.
IP multicast also relies on two other essential protocols: the Internet Group Management Protocol (IGMP) and Protocol Independent Multicast (PIM). IGMP allows the recipients to access the data: if any host wants to receive a message that is going to be multicast, it must join the group using this protocol. The network routers use PIM to create the multicast trees. To sum up, multicasting is an efficient way of communication; it reduces bandwidth usage and is used when a message is to be sent to a large number of selected recipients.
Intradomain Multicast Protocols
Intradomain routing protocols carry out the multicast function within domains. The implementation of multicast routing faces the following particular challenges:

o Dynamic change in the group membership


o Minimizing network load and avoiding routing loops
o Finding concentration points of traffic

In practice, several protocols play major roles in establishing multicast connections.


The Distance Vector Multicast Routing Protocol (DVMRP) and the Internet Group Management Protocol (IGMP) are the two original protocols forming the early version of the multicast backbone (MBone). Other protocols, such as Multicast Open Shortest Path First (MOSPF), core-based trees (CBT), and protocol-independent multicast (PIM), enhance MBone performance.

15.2.1. Distance Vector Multicast Routing Protocol (DVMRP)


The Distance Vector Multicast Routing Protocol (DVMRP) is one of the oldest multicast
protocols. It is based on a concept of exchanging routing table information among directly
connected neighboring routers. The MBone topology can enable multiple tunnels to run
over a common physical link. Each participating router maintains information about all the
destinations within the system. DVMRP creates multicast trees, using the dense-mode
algorithm. A multicast router typically implements several other independent routing
protocols besides DVMRP for multicast routing, such as RIP or OSPF for unicast routing.

This protocol is not designed for WANs with a large number of nodes, since it can support
only a limited number of hops for a packet. This is clearly considered a drawback of this
protocol, as it causes packet discarding if a packet does not reach its destination within the
maximum number of hops set. Another constraint in this protocol is the periodic expansion
of a multicast tree, leading to periodic broadcasting of multicast data. This constraint in turn
causes the issue of periodic broadcast of routing tables, which consumes a large available
bandwidth. DVMRP supports only the source-based multicast tree. Thus, this protocol is
appropriate for multicasting data among a limited number of distributed receivers located
close to the source.

15.2.2. Internet Group Management Protocol (IGMP)


The Internet Group Management Protocol (IGMP) is used for TCP/IP between a receiver
and its immediate multicast-enabled routers reporting multicast group information. This
protocol has several versions and is required on all machines that receive IP multicast. As
the name suggests, IGMP is a group-oriented management protocol that provides a
dynamic service to registered individual hosts in a multicast group on a particular network.

When a network host sends an IGMP message to its local multicast router, it in fact identifies its group membership. Routers listen for these messages and periodically send out queries to discover which subnet groups are active. When a host wants to join a group, it sends an IGMP message to the group's multicast address stating the group membership. The local multicast router receives this message and constructs all routes by propagating the group membership information to the other multicast routers throughout the network.

The IGMP packet format has several versions; Figure 15.4 shows version 3. The first 8 bits indicate the message type, which may be one of: membership query, membership report v1, membership report v2, leave group, and membership report v3. Hosts send IGMP membership reports corresponding to a particular multicast group, expressing an interest in joining that group.

Figure 15.4. IGMP packet format

IGMP is compatible with TCP/IP, so the TCP/IP stack running on a host forwards the IGMP membership report when an application opens a multicast socket. A router periodically transmits an IGMP membership query to verify that at least one host on the subnet is still interested in receiving traffic directed to that group. If no response to three consecutive IGMP membership queries is received, the router times out the group and stops forwarding traffic directed toward that group. (Note that vi refers to the version of the protocol the membership report belongs to.)

IGMP version 3 supports include and exclude modes. In include mode, a receiver announces its membership to a host group and provides a list of source addresses from which it wants to receive traffic. In exclude mode, a receiver expresses its membership to a multicast group and provides a list of source addresses from which it does not want to receive traffic. With the leave group message, hosts can report to the local multicast router that they intend to leave the group. If any remaining hosts are interested in receiving the traffic, the router transmits a group-specific query. If the router receives no response, it times out the group and stops forwarding the traffic.
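The query timeout rule above can be sketched in Python. The group addresses are hypothetical, and a real router tracks per-group timers rather than a simple counter; the sketch shows only the "three unanswered queries" decision.

```python
def groups_still_active(responses, max_missed=3):
    """responses maps a multicast group address to the number of
    consecutive membership queries that went unanswered. After three
    unanswered queries the router times the group out and stops
    forwarding traffic to it."""
    return {g for g, missed in responses.items() if missed < max_missed}

state = {"224.1.1.1": 0, "224.2.2.2": 3, "224.3.3.3": 1}
print(sorted(groups_still_active(state)))  # ['224.1.1.1', '224.3.3.3']
```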

The next 8 bits, max response time, indicate the time allowed before sending a response report (default, 10 seconds). The Resv field is set to 0 and is reserved for future development. The next field is the S flag, used to suppress router-side processing. QRV indicates the querier's robustness variable, and QQIC is the querier's query interval code. N gives the number of sources, and Source Address [i] provides a vector of N individual IP addresses.

15.2.3. Multicast OSPF (MOSPF) Protocol


The Multicast Open Shortest Path First (MOSPF) protocol, an extension to the unicast
model of OSPF discussed in Chapter 7, constructs a link-state database with an
advertisement mechanism. Let's explore what new features a link-state router requires to
become capable of multicast functions.

Link-State Multicast
As explained in Chapter 7, link-state routing occurs when a node in a network has to obtain
the state of its connected links and then send an update to all the other routers once the
state changes. On receipt of the routing information, each router reconfigures the entire
topology of the network. The link-state routing algorithm uses Dijkstra's algorithm to
compute the least-cost path.
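The least-cost computation used by link-state routing can be sketched with Dijkstra's algorithm in Python. The router names and link costs below are hypothetical.

```python
import heapq

def dijkstra(graph, source):
    """Least-cost paths from source. graph maps a node to a dict of
    {neighbor: link cost}; this is the computation run by link-state
    protocols such as OSPF and MOSPF over their link-state database."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbor, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

graph = {"R1": {"R2": 1, "R3": 4}, "R2": {"R3": 2, "R4": 7}, "R3": {"R4": 3}}
print(dijkstra(graph, "R1"))  # {'R1': 0, 'R2': 1, 'R3': 3, 'R4': 6}
```

Note that R1 reaches R3 at cost 3 via R2, not at cost 4 over the direct link; the shortest-path tree rooted at each router is built from exactly these distances.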

The multicast feature can be added to a router using the link-state routing algorithm by
placing the spanning tree root at the router. The router uses the tree to identify the best next
node. To include the capability of multicast support, the link-state router needs the set of
groups that have members on a particular link added to the state for that link. Therefore,
each LAN attached to the network must have its host periodically announce all groups it
belongs to. This way, a router simply detects such announcements from LANs and updates
its routing table.

Details of MOSPF
With OSPF, a router uses the flooding technique to send a packet to all routers within the
same hierarchical area and to all other network routers. This simply allows all MOSPF
routers in an area to have the same view of group membership. Once a link-state table is
created, the router computes the shortest path to each multicast member by using Dijkstra's
algorithm. This protocol for a domain with n LANs is summarized as follows.

Begin MOSPF Protocol

1. Define: Ni = LAN i in the domain, i ∈ {1, 2, ..., n}
   Gj(i) = multicast group j constructed with connections to Ni
   Ri = router attached to Ni
2. Multicast groups: Ri maintains all Nj group memberships.
3. Update: Use the link-state multicast whereby each router Ri floods all its multicast group numbers, Gj(i), to all its directly attached routers.
4. Using the shortest-path tree algorithm, each router constructs a least-cost tree for all destination groups.
5. When a multicast packet arrives at a router, it finds the right tree, makes the necessary number of copies of the packet, and routes the copies.

MOSPF adds a link-state field, mainly membership information about the group of LANs
needing to receive the multicast packet. MOSPF also uses Dijkstra's algorithm and
calculates a shortest-path multicast tree. MOSPF does not flood multicast traffic everywhere
to create state. Dijkstra's algorithm must be rerun when group membership changes.
MOSPF does not support sparse-mode tree (shared-tree) algorithm. Each OSPF router
builds the unicast routing topology, and each MOSPF router can build the shortest-path tree
for each source and group. Group-membership reports are broadcast throughout the OSPF
area. MOSPF is a dense-mode protocol, and the membership information is broadcast to all
MOSPF routers. Note that frequent broadcasting of membership information degrades
network performance.

Example. In Figure 15.5, each of the five LANs in a certain domain has an associated router. Show an example of MOSPF for multicasting from router R1 to seven servers located in LANs 1, 2, and 3.

Figure 15.5. Use of MOSPF to multicast from router R1 to seven servers located in two different multicast groups spread over three LANs

Solution. Multicast groups 1 and 2 are formed. For group 1, the best tree is implemented using copying root R4. For group 2, the best tree is implemented using copying root R7.
15.2.4. Protocol-Independent Multicast (PIM)
Protocol-independent multicast (PIM) is an excellent multicast protocol for networks,
regardless of size and membership density. PIM is "independent" because it implements
multicasting independently of any routing protocol entering into the multicast routing
information database. PIM can operate in both dense mode and sparse mode . Dense-
mode is a flood-and-prune protocol and is best suited for networks densely populated by
receivers and with enough bandwidth. This version of PIM is comparable to DVMRP.

Sparse-mode PIM typically offers better stability because of the way it establishes trees. This version assumes that systems can be located far away from one another and that group members are "sparsely" distributed across the system. This protocol for a domain with n LANs is summarized as follows.

Begin PIM Algorithm

1. Define: Ni = LAN i in the domain, i ∈ {1, 2, ..., n}
   Gj(i) = multicast group j constructed with connections to Ni
   Ri = the router attached to Ni
2. Multicast groups: Ri maintains all Nj group memberships.
3. Rendezvous router: Using the sparse-mode tree algorithm, each group Gj(i) chooses a rendezvous router.
4. Update: Each router Ri updates its rendezvous router whenever there is a change in the group.
5. When a multicast packet arrives at a rendezvous router, it finds the right tree and routes the packet.

Sparse-mode PIM first creates a shared tree, followed by one or more source-specific trees if there is enough traffic demand. Routers join and leave the multicast group by means of join and prune protocol messages. These messages are sent to a rendezvous router allocated to each multicast group. The rendezvous point is chosen by all the routers in a group and is treated as a meeting place for sources and receivers. Receivers join the shared tree, and sources register with that rendezvous router. Note that the shortest-path computation performed by the unicast routing is used in this protocol for the construction of trees.

Each source registers with its rendezvous router and sends a single copy of multicast data
through it to all registered receivers. Group members register to receive data through the
rendezvous router. This registration process triggers the shortest-path tree between the
source and the rendezvous router. Each source transmits multicast data packets
encapsulated in unicast packets to the rendezvous router. If receivers have joined the group
in the rendezvous router, the encapsulation is stripped off the packet, and it is sent on the
shared tree. This kind of multicast forwarding state between the source and the rendezvous
router enables the rendezvous router to receive the multicast traffic from the source and to
avoid the overhead of encapsulation.

Example. In Figure 15.6, the five LANs in a certain domain each have an associated router.
Show an example of sparse-mode PIM for multicasting from router R 1 to four
servers located in a multicast group spread over LANs 2 and 3.

Figure 15.6. Sparse-mode PIM for multicasting from router R 1 to four servers in
a multicast group spread over LANs 2 and 3

Solution. Multicasting is formed with R 3 as the associated rendezvous router for this group.
Thus, the group uses a reverse unicast path as shown to update the rendezvous
router of any joins and leaves. For this group, the multicast tree is implemented
using copying root R 7.
15.2.5. Core-Based Trees (CBT) Protocol
In sparse mode, forwarding traffic to the rendezvous router and then to receivers causes
delay and longer paths. This issue can be partially solved in the core-based tree (CBT)
protocol, which uses bidirectional trees. Sparse-mode PIM is comparable to CBT but with
two differences. First, CBT uses bidirectional shared trees, whereas sparse-mode PIM uses
unidirectional shared trees. Clearly, bidirectional shared trees are more efficient when
packets move from a source to the root of the multicast tree; as a result, packets can be
sent up and down in the tree. Second, CBT uses only a shared tree and does not use
shortest-path trees.

CBT is comparable to DVMRP in WANs and uses the basic sparse-mode paradigm to
create a single shared tree used by all sources. As with the shared-trees algorithm, the tree
must be rooted at a core point. All sources send their data to the core point, and all
receivers send explicit join messages to the core. In contrast, DVMRP is costly, since it
broadcasts packets, causing each participating router to become overwhelmed and thereby
requiring keeping track of every source-group pair. CBT's bidirectional routing takes into
account the current group membership when it is being established.
When broadcasting occurs, it results in traffic concentration on a single link, since CBT has
a single delivery tree for each group. Although it is designed for intradomain multicast
routing, this protocol is appropriate for interdomain applications as well. However, it can
suffer from latency issues, as in most cases, the traffic must flow through the core router.
This type of router is a bottleneck if its location is not carefully selected, especially when
transmitters and receivers are far apart.

15.2.6. Multicast Backbone (MBone)


The first milestone in the creation of a practical multicast platform was the development of
the multicast backbone (MBone), which carried its first worldwide event when several sites
received audio simultaneously. The multicast routing function was implemented using
unicast-encapsulated multicast packets. The connectivity among certain receivers was
provided using point-to-point IP-encapsulated tunnels. Figure 15.7 shows an example of
tunneling among routers in the early version of MBone. Each tunnel connects two end
points via one logical link and crosses several routers. In this scenario, once a packet is
received, it can be sent to other tunnel end points or broadcast to local members. The
routing in the earlier version of MBone was based on DVMRP and IGMP.

Figure 15.7. Tunneling in the multicast backbone

IPv6 Addressing, IPv6 Protocol, Transition from IPv4 to IPv6. Transport Layer
Services, connectionless versus connection-oriented protocols. Transport Layer
Protocols: Simple Protocol, Stop and Wait, Go-Back-N, Selective Repeat,
Piggybacking. UDP: User datagram, Services, Applications. TCP: TCP services, TCP
features, segment, A TCP connection, Flow control, error control, congestion control.

IPv6 Addressing
IPv6 (Internet Protocol version 6) introduces significant changes in addressing compared
to its predecessor, IPv4. Here are the key aspects of IPv6 addressing:

IPv6 Address Structure:

1. Length:
 IPv6 addresses are 128 bits long, compared to the 32-bit addresses in IPv4.
 Written in hexadecimal, each digit represents 4 bits.
2. Notation:
 IPv6 addresses are typically written as eight groups of four hexadecimal digits
separated by colons (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334).
3. Blocks and Groups:
 IPv6 addresses are divided into eight 16-bit blocks.
 These blocks are separated by colons.
4. Compression:
 Consecutive blocks of zeros can be compressed with :: (double colons) once in an
IPv6 address.
5. Address Types:
 Unicast Address: Identifies a single interface.
 Multicast Address: Used to send data to multiple interfaces.
 Anycast Address: Assigned to multiple interfaces, and the data is sent to the
nearest one.

IPv6 Address Types:

1. Global Unicast Address:


 Similar to public IPv4 addresses.
 Routable on the global Internet.
2. Link-Local Address:
 Used for communication within the same link or network segment.
 Starts with fe80::/10.
3. Unique Local Address:
 Similar to IPv4's private address space (e.g., 10.0.0.0/8).
 Intended for local use within an organization.
4. Loopback Address:
 Equivalent to IPv4's 127.0.0.1.
 Represented as ::1.
5. Multicast Address:
 Used for one-to-many communication.
 Starts with ff00::/8.
6. Anycast Address:
 Assigned to multiple devices, and the data is sent to the nearest one.
 Assigned in a way that the routing infrastructure determines the nearest instance.

IPv6 Address Configuration:

1. Stateless Address Autoconfiguration (SLAAC):


 Hosts can automatically configure their IPv6 addresses without the need for a
DHCP server.
 Uses the link-local prefix and the network's prefix information.
2. DHCPv6 (Dynamic Host Configuration Protocol for IPv6):
 Similar to DHCP in IPv4 but adapted for IPv6.
 Can provide additional configuration options beyond basic address assignment.

IPv6 Address Examples:

 Global Unicast Address:


2001:0db8:85a3:0000:0000:8a2e:0370:7334
 Link-Local Address:
fe80::1
 Unique Local Address:
fc00:1234:5678:9abc::1
 Multicast Address:
ff02::1

IPv6 addresses are designed to accommodate the growing number of devices on the
Internet and overcome the limitations of IPv4 addressing. The expanded address space
and improved features make IPv6 a crucial part of the future of networking.
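Python's standard `ipaddress` module applies the notation rules described above, so zero compression and the address-type properties can be checked directly. A minimal sketch using the example addresses from this section:

```python
import ipaddress

# Zero compression and leading-zero removal are applied automatically.
addr = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(addr.compressed)   # consecutive zero blocks collapsed with '::'
print(addr.exploded)     # full eight-group, four-digit form

# Address-type checks for loopback, link-local, and multicast addresses.
loopback = ipaddress.IPv6Address("::1")
link_local = ipaddress.IPv6Address("fe80::1")
multicast = ipaddress.IPv6Address("ff02::1")
print(loopback.is_loopback, link_local.is_link_local, multicast.is_multicast)
```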
ADDRESSING
 128-Bit Addresses:
 IPv6 addresses are 128 bits long, providing an exponentially larger address space
compared to IPv4's 32-bit addresses.
 Hexadecimal Notation:
 IPv6 addresses are represented in hexadecimal, making them more human-
readable.
 Blocks and Colons:
 IPv6 addresses are divided into eight 16-bit blocks, separated by colons.
 Example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
 Zero Compression:
 Consecutive blocks of zeros can be compressed with ::, but this can only be used
once in an address.

2. Types of Addresses:

 Unicast:
 Identifies a single network interface.
 Global unicast, link-local, and unique local are common types.
 Multicast:
 Used to send data to multiple recipients simultaneously.
 Starts with the prefix ff00::/8.
 Anycast:
 Assigned to multiple interfaces, and the data is delivered to the nearest
(topologically closest) one.

3. Address Configuration:

 Stateless Address Autoconfiguration (SLAAC):


 Hosts can configure their IPv6 addresses automatically using information from
router advertisements.
 DHCPv6 (Dynamic Host Configuration Protocol for IPv6):
 Can be used for more advanced address configuration, providing additional
options beyond basic address assignment.

4. Router Advertisements:

 Router Discovery:
 Routers periodically send Router Advertisement (RA) messages to announce their
presence on the network.
 Prefix Information:
 Routers include prefix information in RAs, which hosts use to form their IPv6
addresses.

5. Header Simplification:

 Simplified Header:
 The IPv6 header is simpler than IPv4, reducing processing overhead on routers
and switches.
 No Broadcast:
 IPv6 eliminates broadcast communication in favor of multicast and anycast.

6. IPsec Integration:

 IPsec Support:
 IPv6 includes IPsec (Internet Protocol Security) as a mandatory part of the
protocol suite, enhancing security features.

7. Transition Mechanisms:

 Dual-Stack:
 Both IPv4 and IPv6 coexist on the same network during the transition period.
 Tunneling:
 IPv6 packets are encapsulated within IPv4 packets for transmission over IPv4
networks.

IPv6 Services and Application


 Internet Services:
 IPv6 supports the same services as IPv4, including web browsing, email, and file
transfer.
 Applications:
 Many modern applications and services are designed to work seamlessly with both
IPv4 and IPv6.

IPv6 is gradually being adopted globally to accommodate the growing number of devices
connected to the internet and overcome the limitations posed by IPv4 address exhaustion.
The transition to IPv6 is an essential step for the continued expansion and sustainability
of the internet.
In the current scenario, IPv4 addresses are exhausted, and IPv6 has come to overcome
this limit.
Many organizations currently work with IPv4 technology, and we cannot switch directly
from IPv4 to IPv6 in one day. Instead of using only IPv6, we use a combination of both;
transition means not replacing IPv4 but letting both coexist.
A request cannot be sent directly from an IPv4 address to an IPv6 address, because the
two IP versions are not compatible with each other. To solve this problem, we use
several technologies: Dual-Stack Routers, Tunneling, and NAT Protocol Translation.
These are explained below.
1. Dual-Stack Routers:
In a dual-stack router, the router's interfaces are configured with both IPv4 and IPv6
addresses in order to ease the transition from IPv4 to IPv6.

In the diagram above, a server configured with both IPv4 and IPv6 addresses can
communicate with all IPv4 and IPv6 hosts via the dual-stack router (DSR). The dual-
stack router gives all hosts a path to communicate with the server
without changing their IP addresses.
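A related idea at the host level is dual-stack sockets: a single IPv6 socket can also serve IPv4 peers, which appear as IPv4-mapped IPv6 addresses (`::ffff:a.b.c.d`). The sketch below assumes the OS supports IPv6 sockets and the `IPV6_V6ONLY` option; it is an illustration, not a complete server.

```python
import ipaddress
import socket

# An IPv4 peer seen through a dual-stack IPv6 socket appears as an
# IPv4-mapped IPv6 address; ipaddress can extract the embedded address.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)   # the embedded IPv4 address, 192.0.2.1

# Sketch of a dual-stack listener: clearing IPV6_V6ONLY lets one AF_INET6
# socket accept IPv4 connections too (OS support assumed, not guaranteed).
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
```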
2. Tunneling:
Tunneling is used as a medium to communicate across a transit network running a
different IP version.

In the diagram above, both IP versions, IPv4 and IPv6, are present. The
IPv4 networks can communicate across the transit or intermediate IPv6 network with
the help of a tunnel. Likewise, an IPv6 network can communicate
across IPv4 networks with the help of a tunnel.
3. NAT Protocol Translation:
With the help of the NAT Protocol Translation (NAT-PT) technique, IPv4 and IPv6
networks that do not understand each other's addresses can still communicate.
Generally, one IP version does not understand the address format of the other. To solve
this problem, a NAT-PT device removes the header of the sender's IP version and adds
a header of the receiver's IP version, so that the receiver believes the request was sent
by its own IP version; the reverse direction works in the same way.
In the diagram above, an IPv4 host communicates with an IPv6 host via a NAT-PT
device. The IPv6 host understands the request as coming from its own IP version (IPv6)
and responds.

The transport Layer is the second layer in the TCP/IP model and the fourth layer in
the OSI model. It is an end-to-end layer used to deliver messages to a host. It is termed
an end-to-end layer because it provides a point-to-point connection rather than hop-to-
hop, between the source host and destination host to deliver the services reliably. The
unit of data encapsulation in the Transport Layer is a segment.
Working of Transport Layer
The transport layer takes services from the Application layer and provides services to
the Network layer.
At the sender’s side: The transport layer receives data (message) from the Application
layer and then performs Segmentation, divides the actual message into segments, adds
the source and destination’s port numbers into the header of the segment, and transfers
the message to the Network layer.
At the receiver’s side: The transport layer receives data from the Network layer,
reassembles the segmented data, reads its header, identifies the port number, and
forwards the message to the appropriate port in the Application layer.
Responsibilities of a Transport Layer
 The Process to Process Delivery
 End-to-End Connection between Hosts
 Multiplexing and Demultiplexing
 Congestion Control
 Data integrity and Error correction
 Flow control
1. The Process to Process Delivery
While Data Link Layer requires the MAC address (48 bits address contained inside the
Network Interface Card of every host machine) of source-destination hosts to correctly
deliver a frame and the Network layer requires the IP address for appropriate routing of
packets, in a similar way Transport Layer requires a Port number to correctly deliver
the segments of data to the correct process amongst the multiple processes running on a
particular host. A port number is a 16-bit address used to identify any client-server
program uniquely.
Process to Process Delivery

2. End-to-end Connection between Hosts


The transport layer is also responsible for creating the end-to-end connection between
hosts, for which it mainly uses TCP and UDP. TCP is a secure, connection-oriented
protocol that uses a handshake to establish a robust connection between two
end hosts. TCP ensures the reliable delivery of messages and is used in various
applications. UDP, on the other hand, is a stateless and unreliable protocol that ensures
best-effort delivery. It is suitable for applications that have little concern for flow or
error control and that require sending bulk data, like video conferencing. It is often
used in multicasting protocols.
End to End Connection.

3. Multiplexing and Demultiplexing


Multiplexing (many-to-one) is when data is acquired from several processes at the
sender and merged into one packet along with headers and sent as a single packet.
Multiplexing allows the simultaneous use of different processes over a network
running on a host. The processes are differentiated by their port numbers. Similarly,
demultiplexing (one-to-many) is required at the receiver side when the message is
distributed to different processes. The transport layer receives the segments of data
from the network layer and distributes and delivers them to the appropriate process
running on the receiver's machine.

Multiplexing and Demultiplexing
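The demultiplexing step can be sketched as a lookup from destination port to process. The `register`/`demultiplex` names and the tuple segment format below are invented for illustration; they model the idea, not any real OS interface.

```python
# Toy demultiplexer: the transport layer hands each arriving segment to
# the process registered on the segment's destination port number.

processes = {}                        # port number -> handler ("process")

def register(port, handler):
    processes[port] = handler

def demultiplex(segment):
    # A segment here is modeled as (src_port, dst_port, payload).
    src_port, dst_port, payload = segment
    return processes[dst_port](payload)

register(53, lambda data: f"DNS got {data}")     # DNS process on port 53
register(80, lambda data: f"HTTP got {data}")    # HTTP process on port 80

print(demultiplex((40001, 53, "query")))   # -> DNS got query
print(demultiplex((40002, 80, "GET /")))   # -> HTTP got GET /
```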

4. Congestion Control
Congestion is a situation in which too many sources over a network attempt to send
data and the router buffers start overflowing, due to which loss of packets occurs. As a
result, the retransmission of packets from the sources increases the congestion further.
In this situation, the transport layer provides congestion control in different ways: it
uses open-loop congestion control to prevent congestion and closed-loop congestion
control to remove congestion in a network once it has occurred. TCP provides AIMD
(additive increase, multiplicative decrease) and the leaky bucket technique for
congestion control.

Leaky Bucket Congestion Control Technique

5. Data integrity and Error Correction


The transport layer checks for errors in the messages coming from the application layer
by using error-detection codes and computing checksums: it checks whether the
received data is corrupted, and it uses ACK and NACK services to inform the
sender whether the data has arrived, thereby checking the integrity of the data.

Error Correction using Checksum
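As an example of the checksum computation mentioned above, here is a one's-complement Internet checksum in the style of RFC 1071. This is a simplified sketch of the technique, not the exact code of any particular transport implementation.

```python
# One's-complement Internet checksum (RFC 1071 style) over a byte string.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                    # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]          # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)       # fold carry back in
    return ~total & 0xFFFF                             # one's complement

segment = b"data"
csum = internet_checksum(segment)
# Receiver check: data plus its checksum must sum (folded) to all ones,
# so the checksum over both comes out as zero when nothing is corrupted.
ok = internet_checksum(segment + csum.to_bytes(2, "big")) == 0
print(hex(csum), ok)
```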


6. Flow Control
The transport layer provides a flow control mechanism between the adjacent layers of
the TCP/IP model. TCP also prevents data loss due to a fast sender and slow receiver by
imposing some flow control techniques. It uses the method of sliding window protocol
which is accomplished by the receiver by sending a window back to the sender
informing the size of data it can receive.

Difference between Connection-oriented and Connection-less Services




Both connection-oriented service and connection-less service are used for
connection establishment between two or more devices. These types of services are
offered by the network layer.
Connection-oriented service is related to the telephone system. It includes connection
establishment and connection termination. In a connection-oriented service, the
Handshake method is used to establish the connection between sender and receiver.

Connection-less service is related to the postal system. It does not include any
connection establishment and connection termination. Connection-less Service does not
give a guarantee of reliability. In this, Packets do not follow the same path to reach their
destination.

Difference between Connection-oriented and Connection-less Services:


S.NO | Connection-oriented Service | Connection-less Service

1. | Connection-oriented service is related to the telephone system. | Connection-less service is related to the postal system.

2. | Connection-oriented service is preferred for long and steady communication. | Connection-less service is preferred for bursty communication.

3. | Connection establishment is compulsory in connection-oriented service. | Connection establishment is not necessary in connection-less service.

4. | Connection-oriented service is feasible. | Connection-less service is not feasible.

5. | In connection-oriented service, congestion is not possible. | In connection-less service, congestion is possible.

6. | Connection-oriented service gives a guarantee of reliability. | Connection-less service does not give a guarantee of reliability.

7. | In connection-oriented service, packets follow the same route. | In connection-less service, packets do not follow the same route.

8. | Connection-oriented service requires high bandwidth. | Connection-less service requires low bandwidth.

9. | Ex: TCP (Transmission Control Protocol) | Ex: UDP (User Datagram Protocol)

10. | Connection-oriented service requires authentication. | Connection-less service does not require authentication.

The transport layer is the fourth layer in the OSI model and the second layer in the
TCP/IP model. It provides an end-to-end connection between the
source and the destination and reliable delivery of services; therefore the transport
layer is known as the end-to-end layer. The transport layer takes services from the
layer above it, the application layer, and provides services to the network layer.
The segment is the unit of data encapsulation at the transport layer.
Functions of Transport Layer
 Process to process delivery
 End-to-end connection between devices
 Multiplexing and Demultiplexing
 Data integrity and error Correction
 Congestion Control
 Flow Control
Transport Layer Protocols
The transport layer is represented mainly by the TCP and UDP protocols. Today almost all
operating systems support multiprocessing, multi-user environments. Transport
layer protocols provide connections to individual ports, known as
protocol ports. Transport layer protocols work above the IP protocol and deliver
data packets from IP services to the destination port, and from the originating port to
the destination IP services. Below are the protocols used at the transport layer.
1. UDP
UDP stands for User Datagram Protocol. User Datagram Protocol provides a
nonsequential transmission of data. It is a connectionless transport protocol. UDP
protocol is used in applications where the speed and size of data transmitted is
considered as more important than the security and reliability. User Datagram is defined
as a packet produced by User Datagram Protocol. UDP protocol adds checksum error
control, transport level addresses, and information of length to the data received from
the layer above it. Services provided by User Datagram Protocol(UDP) are
connectionless service, faster delivery of messages, checksum, and process-to-process
communication.
Advantages of UDP
 UDP also provides multicast and broadcast transmission of data.
 UDP protocol is preferred more for small transactions such as DNS lookup.
 It is a connectionless protocol, therefore there is no compulsion to have a
connection-oriented network.
 UDP provides fast delivery of messages.
Disadvantages of UDP
 In UDP protocol there is no guarantee that the packet is delivered.
 UDP protocol suffers from worse packet loss.
 UDP protocol has no congestion control mechanism.
 UDP protocol does not provide the sequential transmission of data.
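A minimal UDP exchange over the loopback interface illustrates the connectionless service described above: no handshake, just a datagram sent to a port. Binding to port 0 (letting the OS choose a free port) is a convenience for this sketch.

```python
import socket

# Connectionless UDP exchange over loopback: the datagram is simply sent
# to the server's port; there is no connection setup and no delivery
# guarantee (loopback happens to be reliable in practice).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # port 0: OS picks a free port
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", port))

data, addr = server.recvfrom(1024)
print(data)                              # b'ping'
client.close()
server.close()
```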
2. TCP
TCP stands for Transmission Control Protocol. TCP protocol provides transport layer
services to applications. TCP protocol is a connection-oriented protocol. A secured
connection is being established between the sender and the receiver. For a generation of
a secured connection, a virtual circuit is generated between the sender and the receiver.
The data transmitted by the TCP protocol is in the form of a continuous byte stream. A
unique sequence number is assigned to each byte. With the help of this unique number,
a positive acknowledgment is received from the receiver. If the acknowledgment is not
received within a specific period, the data is retransmitted to the specified destination.
Advantages of TCP
 TCP supports multiple routing protocols.
 TCP protocol operates independently of that of the operating system.
 TCP protocol provides the features of error control and flow control.
 TCP provides a connection-oriented protocol and provides the delivery of data.
Disadvantages of TCP
 TCP protocol cannot be used for broadcast or multicast transmission.
 TCP protocol has no block boundaries.
 No clear separation is being offered by TCP protocol between its interface, services,
and protocols.
 In TCP/IP replacement of protocol is difficult.
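A minimal TCP exchange sketch contrasts with the UDP case: a connection must be established first (the OS performs the three-way handshake when the client connects), and the byte stream is delivered reliably and in order. The echo-server logic below is illustrative.

```python
import socket
import threading

# Connection-oriented TCP exchange over loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: OS picks a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()            # connection established here
    conn.sendall(conn.recv(1024).upper())  # echo the data back, upper-cased
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(("127.0.0.1", port))  # handshake done by OS
client.sendall(b"hello tcp")
reply = client.recv(1024)
print(reply)                             # b'HELLO TCP'
client.close()
t.join()
server.close()
```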
3. SCTP
SCTP stands for Stream Control Transmission Protocol. SCTP is a connection-oriented
protocol. Stream Control Transmission Protocol transmits data from sender to
receiver in full-duplex mode. SCTP is a unicast protocol that provides a point-to-point
connection and can use different paths for reaching the destination. The SCTP
protocol provides a simpler way to build a connection over a wireless network and a
reliable transmission of data, enabling a reliable and easier telephone conversation
over the Internet. SCTP supports multihoming, i.e., it can establish more than one
connection path between the two points of communication, and it does not depend on
the IP layer.
Advantages of SCTP
 SCTP provides a full duplex connection. It can send and receive the data
simultaneously.
 SCTP protocol possesses the properties of both TCP and UDP protocol.
 SCTP protocol does not depend on the IP layer.
 SCTP is a secure protocol.
Disadvantages of SCTP
 To handle multiple streams simultaneously the applications need to be modified
accordingly.
 The transport stack on the node needs to be changed for the SCTP protocol.
 Modification is required in applications if SCTP is used instead of TCP or UDP
protocol.

Stop and Wait


Characteristics

 Used in Connection-oriented communication.


 It offers error and flow control
 It is used in the Data Link and Transport Layers
 Stop and Wait ARQ implements the Sliding Window Protocol concept
with window size 1

Useful Terms:

 Propagation Delay: Amount of time taken by a packet to make a physical journey


from one router to another router.
Propagation Delay = (Distance between routers) / (Velocity of propagation)
 RoundTripTime (RTT) = Amount of time taken by a packet to reach the receiver +
Time taken by the Acknowledgement to reach the sender
 TimeOut (TO) = 2* RTT
 Time To Live (TTL) = 2* TimeOut. (Maximum TTL is 255 seconds)
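Plugging illustrative numbers into the formulas above shows how these quantities relate. The 2,000 km link distance and 2×10^8 m/s propagation velocity are assumptions for this sketch, and the RTT here ignores transmission time for simplicity.

```python
# Evaluating the delay formulas above with illustrative values.
distance_m = 2_000_000    # assumed 2,000 km link
velocity = 2e8            # assumed propagation velocity in the medium, m/s

propagation_delay = distance_m / velocity   # seconds, one way
rtt = 2 * propagation_delay                 # transmission time ignored here
timeout = 2 * rtt                           # TO = 2 * RTT, per the rule above

print(propagation_delay, rtt, timeout)      # 0.01 0.02 0.04
```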
Simple Stop and Wait
Sender:
Rule 1) Send one data packet at a time.
Rule 2) Send the next packet only after receiving acknowledgement for the previous.

Receiver:
Rule 1) Send acknowledgement after receiving and consuming a data packet.
Rule 2) After consuming packet acknowledgement need to be sent (Flow Control)
Problems :
1. Lost Data

2. Lost Acknowledgement:

3. Delayed Acknowledgement/Data: After a timeout on the sender side, a long-delayed
acknowledgement might be wrongly considered as an acknowledgement of some
other recent packet.

Stop and Wait ARQ (Automatic Repeat Request)


The above 3 problems are resolved by Stop and Wait ARQ (Automatic Repeat
Request), which provides both error control and flow control.

1. Time Out:

2. Sequence Number (Data)

3. Delayed Acknowledgement:
This is resolved by introducing sequence numbers for acknowledgement also.

Working of Stop and Wait for ARQ:


1) Sender A sends a data frame or packet with sequence number 0.
2) Receiver B, after receiving the data frame, sends an acknowledgement with sequence
number 1 (the sequence number of the next expected data frame or packet)
There is only a one-bit sequence number that implies that both sender and receiver have
a buffer for one frame or packet only.
Characteristics of Stop and Wait ARQ:
 It uses a link between sender and receiver as a half-duplex link
 Throughput = 1 Data packet/frame per RTT
 If the bandwidth-delay product is very high, the stop-and-wait protocol is not very
useful: the sender has to keep waiting for acknowledgements before
sending the next packet.
 It is an example of “Closed Loop OR connection-oriented” protocols
 It is a special category of SWP where the window size is 1
 Irrespective of the number of packets the sender has, the stop-and-wait protocol
requires only 2 sequence numbers, 0 and 1
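The one-bit sequence numbering and retransmission-on-timeout can be modeled with a toy simulation. The loss rate, the `send_stop_and_wait` function, and the treatment of a lost frame and a lost ACK as one combined event are all simplifications for illustration, not a real link.

```python
import random

# Toy Stop-and-Wait ARQ: one-bit sequence numbers, retransmit on loss.
random.seed(7)  # fixed seed so the run is repeatable

def send_stop_and_wait(frames, loss_rate=0.3):
    delivered, transmissions, seq = [], 0, 0
    for frame in frames:
        while True:
            transmissions += 1
            if random.random() < loss_rate:
                continue                    # frame or ACK lost -> timeout,
                                            # retransmit with the SAME seq
            delivered.append((seq, frame))  # receiver accepts and ACKs
            seq = 1 - seq                   # sender flips its one-bit seq
            break
    return delivered, transmissions

got, tx = send_stop_and_wait(["a", "b", "c"])
print([f for _, f in got])   # frames delivered in order
print(tx)                    # total transmissions, >= number of frames
```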
Constraints:
Stop and Wait ARQ has very low efficiency; it can be improved by increasing the
window size. Also, for better efficiency, the Go-Back-N and Selective Repeat protocols
are used.
The Stop and Wait ARQ solves the main three problems but may cause big
performance issues as the sender always waits for acknowledgement even if it has the
next packet ready to send. Consider a situation where you have a high bandwidth
connection and propagation delay is also high (you are connected to some server in
some other country through a high-speed connection). To solve this problem, we can
send more than one packet at a time with a larger sequence number. We will be
discussing these protocols in the next articles.
So Stop and Wait ARQ may work fine where propagation delay is very less for example
LAN connections but performs badly for distant connections like satellite connections.
Advantages of Stop and Wait ARQ :
 Simple Implementation: Stop and Wait ARQ is a simple protocol that is easy to
implement in both hardware and software. It does not require complex algorithms or
hardware components, making it an inexpensive and efficient option.
 Error Detection: Stop and Wait ARQ detects errors in the transmitted data by using
checksums or cyclic redundancy checks (CRC). If an error is detected, the receiver
sends a negative acknowledgment (NAK) to the sender, indicating that the data
needs to be retransmitted.
 Reliable: Stop and Wait ARQ ensures that the data is transmitted reliably and in
order. The receiver cannot move on to the next data packet until it receives the
current one. This ensures that the data is received in the correct order and eliminates
the possibility of data corruption.
 Flow Control: Stop and Wait ARQ can be used for flow control, where the receiver
can control the rate at which the sender transmits data. This is useful in situations
where the receiver has limited buffer space or processing power.
 Backward Compatibility: Stop and Wait ARQ is compatible with many existing
systems and protocols, making it a popular choice for communication over
unreliable channels.
Disadvantages of Stop and Wait ARQ :
 Low Efficiency: Stop and Wait ARQ has low efficiency as it requires the sender to
wait for an acknowledgment from the receiver before sending the next data packet.
This results in a low data transmission rate, especially for large data sets.
 High Latency: Stop and Wait ARQ introduces additional latency in the
transmission of data, as the sender must wait for an acknowledgment before sending
the next packet. This can be a problem for real-time applications such as video
streaming or online gaming.
 Limited Bandwidth Utilization: Stop and Wait ARQ does not utilize the available
bandwidth efficiently, as the sender can transmit only one data packet at a time. This
results in underutilization of the channel, which can be a problem in situations
where the available bandwidth is limited.
 Limited Error Recovery: Stop and Wait ARQ has limited error recovery
capabilities. If a data packet is lost or corrupted, the sender must retransmit the
entire packet, which can be time-consuming and can result in further delays.
 Vulnerable to Channel Noise: Stop and Wait ARQ is vulnerable to channel noise,
which can cause errors in the transmitted data. This can result in frequent
retransmissions and can impact the overall efficiency of the protocol.

Selective Repeat
Why Selective Repeat Protocol? The go-back-n protocol works well if errors are less,
but if the line is poor it wastes a lot of bandwidth on retransmitted frames. An
alternative strategy, the selective repeat protocol, is to allow the receiver to accept and
buffer the frames following a damaged or lost one. Selective Repeat attempts to
retransmit only those packets that are actually lost (due to errors) :
 Receiver must be able to accept packets out of order.
 Since receiver must release packets to higher layer in order, the receiver must be
able to buffer some packets.
Retransmission requests :
 Implicit – The receiver acknowledges every good packet; packets that are not
ACKed before a time-out are assumed lost or in error. Notice that this approach must
be used to be sure that every packet is eventually received.
 Explicit – An explicit NAK (selective reject) can request retransmission of just one
packet. This approach can expedite the retransmission but is not strictly needed.
 One or both approaches are used in practice.
Selective Repeat Protocol (SRP): This protocol (SRP) is mostly identical to the GBN
protocol, except that buffers are used and the sender and the receiver each maintain a
window of the same size. SRP works better when the link is very unreliable: in this case,
retransmission tends to happen more frequently, so selectively retransmitting frames is
more efficient than retransmitting all of them. SRP also requires a full-duplex link, as
acknowledgements flow backward while data flows forward.
 Sender’s Windows ( Ws) = Receiver’s Windows ( Wr).
 Window size should be less than or equal to half the sequence number in SR
protocol. This is to avoid packets being recognized incorrectly. If the size of the
window is greater than half the sequence number space, then if an ACK is lost, the
sender may send new packets that the receiver believes are retransmissions.
 Sender can transmit new packets as long as their number is within W of all unACKed
packets.
 Sender retransmit un-ACKed packets after a timeout – Or upon a NAK if NAK is
employed.
 Receiver ACKs all correct packets.
 Receiver stores correct packets until they can be delivered in order to the higher
layer.
 In Selective Repeat ARQ, the size of the sender and receiver window must be at
most one-half of 2^m.
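The window-size rule above can be expressed as a one-line computation; the `max_sr_window` helper name is an illustrative invention.

```python
# Selective Repeat window-size rule: with m sequence-number bits, the
# window must be at most 2**m / 2, otherwise a lost ACK can make the
# sender's retransmissions look like new packets to the receiver.

def max_sr_window(m_bits: int) -> int:
    return 2 ** m_bits // 2

print(max_sr_window(3))   # 4: sequence numbers 0..7 allow window size <= 4
print(max_sr_window(1))   # 1: degenerates to Stop-and-Wait
```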

Figure – the sender only retransmits frames for which a NAK is received.
The efficiency of the Selective Repeat Protocol (SRP) is the same as Go-Back-N's efficiency:
Efficiency = N/(1+2a)
Where a = Propagation delay / Transmission delay
Buffers = N + N
Sequence number = N(sender side) + N ( Receiver Side)

If Tt(ack) (the transmission delay for the acknowledgment), Tq (the queuing delay), and
Tpro (the processing delay) are given, the efficiency (η) is
η = Useful time / Total cycle time
  = Tt(data) / (Tt(data) + 2*Tp + Tq + Tpro + Tt(ack))
Tt(data) : Transmission delay for Data packet
Tp : propagation delay for Data packet
Tq: Queuing delay
Tpro: Processing delay
Tt(ack): Transmission delay for acknowledgment
Above formula is applicable for any condition, if any of the things are not given we
assume it to be 0.
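As a quick numerical sketch of the formula above (the function name and the choice of time units are our own; all delays must be in the same unit):

```python
def srp_efficiency(tt_data, tp, tq=0.0, tpro=0.0, tt_ack=0.0):
    """Efficiency = useful time / total cycle time, per the formula above.
    Unspecified delays default to 0, as the notes suggest."""
    return tt_data / (tt_data + 2 * tp + tq + tpro + tt_ack)

# With Tq = Tpro = Tt(ack) = 0 and a = Tp/Tt = 2, this reduces to the
# classic 1/(1+2a) form: 1/5 = 0.2
print(srp_efficiency(tt_data=1.0, tp=2.0))  # 0.2
```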
Piggybacking
Piggybacking is the technique of delaying outgoing acknowledgment and attaching it to
the next data packet.
When a data frame arrives, the receiver waits and does not send the control frame
(acknowledgment) back immediately. The receiver waits until its network layer moves
to the next data packet. Acknowledgment is associated with this outgoing data frame.
Thus the acknowledgment travels along with the next data frame. This technique in
which the outgoing acknowledgment is delayed temporarily is called Piggybacking.
Networking Communication
Sliding window algorithms are methods of flow control for network data transfer.
At the data link layer, they allow a sender to have more than one unacknowledged packet
in flight at a time, which improves network throughput. Both the sender and receiver maintain a
finite-size buffer to hold outgoing and incoming packets from the other side. Every
packet sent by the sender must be acknowledged by the receiver. The sender maintains
a timer for every packet sent, and any packet unacknowledged at a certain time is
resent. The sender may send a whole window of packets before receiving an
acknowledgment for the first packet in the window. This results in higher transfer rates,
as the sender may send multiple packets without waiting for each packet’s
acknowledgment. The receiver advertises a window size that tells the sender not to fill
up the receiver buffers.
How To Increase Network Efficiency?
Efficiency can also be improved by making use of the Full-duplex transmission
mode. Full Duplex transmission is a two-way directional communication
simultaneously which means that it can communicate in both directions, just like we are
using two half-duplex transmission nodes. It provides better performance than simple
transmission modes and half-duplex transmission modes.
Figure – Full Duplex Transmission
Why Piggybacking?
Efficiency can also be improved by making use of full-duplex transmission. Full
Duplex transmission is a transmission that happens with the help of two half-duplex
transmissions which helps in communication in both directions. Full Duplex
Transmission is better than both simplex and half-duplex transmission modes.
There are two ways through which we can achieve full-duplex transmission:
1. Two Separate Channels: One way to achieve full-duplex transmission is to have
two separate channels with one for forwarding data transmission and the other for
reverse data transfer (to accept). But this will almost completely waste the bandwidth of
the reverse channel.
2. Piggybacking: A preferable solution would be to use each channel to transmit the
frame (front and back) both ways, with both channels having the same capacity.
Assume that A and B are users. Then the data frames from A to B are intermixed with the
acknowledgments from B to A. A frame can be identified as a data frame or an
acknowledgment by checking the kind field in the header of the received frame.
One more improvement can be made. When a data frame arrives, the receiver waits and
does not send the control frame (acknowledgment) back immediately. The receiver
waits until its network layer moves to the next data packet.
Acknowledgment is associated with this outgoing data frame. Thus the
acknowledgment travels along with the next data frame.
Working of Piggybacking
As the figure shows, with piggybacking a single frame (ACK + DATA) travels over the
wire in place of two separate frames. Piggybacking improves the efficiency of
bidirectional protocols.
 If Host A has both acknowledgment and data, which it wants to send, then the data
frame will be sent with the ack field which contains the sequence number of the
frame.
 If Host A has only an acknowledgment to send, it waits for some time; if a data
frame becomes available in that interval, it piggybacks the acknowledgment onto it,
otherwise it sends a separate ACK frame.
 If Host A has only a data frame, it attaches the last acknowledgment to it. Host A
can also send a data frame with an ack field containing no acknowledgment bit.
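The three cases above can be sketched as a toy model (the `Frame` and `Host` classes here are hypothetical, for illustration only, not part of any real protocol stack):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    data: Optional[str]   # payload, or None for a pure ACK frame
    ack: Optional[int]    # piggybacked acknowledgment number, if any

class Host:
    """Toy model of the three piggybacking cases described above."""
    def __init__(self):
        self.pending_ack = None   # an ACK waiting to be piggybacked
        self.outgoing = []        # payloads queued by the network layer

    def next_frame(self):
        if self.outgoing:                    # data (and maybe an ACK) to send
            frame = Frame(self.outgoing.pop(0), self.pending_ack)
            self.pending_ack = None
            return frame
        if self.pending_ack is not None:     # only an ACK: send it alone
            frame = Frame(None, self.pending_ack)
            self.pending_ack = None
            return frame
        return None                          # nothing to send

h = Host()
h.pending_ack = 5
h.outgoing.append("hello")
f = h.next_frame()
print(f.data, f.ack)  # hello 5 -> the ACK rides along with the data frame
```

Here a single frame carries both the payload and the acknowledgment, which is exactly the bandwidth saving piggybacking provides.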
Advantages of Piggybacking
1. The major advantage of piggybacking is the better use of available channel
bandwidth. This happens because an acknowledgment frame needs not to be sent
separately.
2. Usage cost reduction.
3. Improves latency of data transfer.
4. To avoid delaying the acknowledgment so long that the sender retransmits the frame,
piggybacking uses a very short-duration timer.
Disadvantages of Piggybacking
1. The disadvantage of piggybacking is the additional complexity.
2. If the data link layer waits too long before transmitting the acknowledgment
(blocking the ACK for some time), the sender will time out and rebroadcast the frame.
Conclusion
Note that the term piggybacking is also used in an unrelated sense: connecting to an
unsecured Wi-Fi network without authorization. There is dispute as to whether this is a
legal or illegal activity, but that kind of piggybacking is still a dark side of Wi-Fi,
and it is the responsibility of the owner and administrator to secure their wireless
connection. In this unit, piggybacking refers only to the acknowledgment technique
described above.
UDP: USER DATAGRAM PROTOCOL
User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the
Internet Protocol suite, referred to as the UDP/IP suite. Unlike TCP, it is an
unreliable and connectionless protocol, so there is no need to establish a connection
prior to data transfer. UDP helps establish low-latency and loss-tolerating connections
over the network, and it enables process-to-process communication.
Though Transmission Control Protocol (TCP) is the dominant transport layer protocol
used with most Internet services and provides assured delivery, reliability, and much
more, all these services cost additional overhead and latency. Here, UDP comes into the
picture. For real-time services like computer gaming, voice or video communication, and
live conferences, we need UDP. Since high performance is needed, UDP permits packets to
be dropped instead of processing delayed packets. There is no error checking in UDP, so
it also saves bandwidth.
User Datagram Protocol (UDP) is more efficient in terms of both latency and
bandwidth.
UDP Header –
The UDP header is a simple, fixed 8-byte header, while the TCP header may vary from 20
bytes to 60 bytes. The first 8 bytes contain all the necessary header information and
the remaining part consists of data. UDP port number fields are each 16 bits long, so
the range of port numbers is 0 to 65535; port number 0 is reserved. Port numbers help
to distinguish different user requests or processes.
1. Source Port: Source Port is a 2 Byte long field used to identify the port number of
the source.
2. Destination Port: It is a 2 Byte long field, used to identify the port of the destined
packet.
3. Length: This is a 16-bit field that gives the length of the UDP datagram, including
the header and the data.
4. Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header of information from
the IP header, and the data, padded with zero octets at the end (if necessary) to make
a multiple of two octets.
Notes – Unlike TCP, checksum calculation is not mandatory in UDP. No error control or
flow control is provided by UDP; hence UDP depends on IP and ICMP for error reporting.
UDP does, however, provide port numbers so that it can differentiate between users'
requests.
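The four 16-bit header fields above can be illustrated with Python's `struct` module (a sketch; the `build_udp_header` helper and its parameter names are our own):

```python
import struct

def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    """Pack the four 16-bit UDP header fields (source port, destination
    port, length, checksum) in network byte order."""
    length = 8 + payload_len            # length = 8-byte header + data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = build_udp_header(5000, 53, payload_len=12)
print(len(hdr))                         # 8
print(struct.unpack("!HHHH", hdr))      # (5000, 53, 20, 0)
```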
Applications of UDP:
 Used for simple request-response communication when the size of data is less and
hence there is lesser concern about flow and error control.
 It is a suitable protocol for multicasting as UDP supports packet switching.
 UDP is used for some routing update protocols like RIP(Routing Information
Protocol).
 Normally used for real-time applications which can not tolerate uneven delays
between sections of a received message.
 UDP is widely used in online gaming, where low latency and high-speed
communication is essential for a good gaming experience. Game servers often send
small, frequent packets of data to clients, and UDP is well suited for this type of
communication as it is fast and lightweight.
 Streaming media applications, such as IPTV, online radio, and video conferencing,
use UDP to transmit real-time audio and video data. The loss of some packets can be
tolerated in these applications, as the data is continuously flowing and does not
require retransmission.
 VoIP (Voice over Internet Protocol) services, such as Skype and WhatsApp, use
UDP for real-time voice communication. The delay in voice communication can be
noticeable if packets are delayed due to congestion control, so UDP is used to ensure
fast and efficient data transmission.
 DNS (Domain Name System) also uses UDP for its query/response messages. DNS
queries are typically small and require a quick response time, making UDP a suitable
protocol for this application.
 DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically assign IP
addresses to devices on a network. DHCP messages are typically small, and the
delay caused by packet loss or retransmission is generally not critical for this
application.
 Following implementations uses UDP as a transport layer protocol:
 NTP (Network Time Protocol)
 DNS (Domain Name Service)
 BOOTP, DHCP.
 NNP (Network News Protocol)
 Quote of the day protocol
 TFTP, RTSP, RIP.
 The application layer can do some of the tasks through UDP-
 Trace Route
 Record Route
 Timestamp
 UDP takes a datagram from Network Layer, attaches its header, and sends it to the
user. So, it works fast.
 Actually, UDP is a null protocol if you remove the checksum field. UDP is useful when
we want to:
1. Reduce the requirement of computer resources.
2. Use multicast or broadcast to transfer data.
3. Transmit real-time packets, mainly in multimedia applications.
Advantages of UDP:
1. Speed: UDP is faster than TCP because it does not have the overhead of establishing
a connection and ensuring reliable data delivery.
2. Lower latency: Since there is no connection establishment, there is lower latency and
faster response time.
3. Simplicity: UDP has a simpler protocol design than TCP, making it easier to
implement and manage.
4. Broadcast support: UDP supports broadcasting to multiple recipients, making it
useful for applications such as video streaming and online gaming.
5. Smaller packet size: UDP uses smaller packet sizes than TCP, which can reduce
network congestion and improve overall network performance.
Disadvantages of UDP:
1. No reliability: UDP does not guarantee delivery of packets or order of delivery,
which can lead to missing or duplicate data.
2. No congestion control: UDP does not have congestion control, which means that it
can send packets at a rate that can cause network congestion.
3. No flow control: UDP does not have flow control, which means that it can
overwhelm the receiver with packets that it cannot handle.
4. Vulnerable to attacks: UDP is vulnerable to denial-of-service attacks, where an
attacker can flood a network with UDP packets, overwhelming the network and causing
it to crash.
5. Limited use cases: UDP is not suitable for applications that require reliable data
delivery, such as email or file transfers, and is better suited for applications that can
tolerate some data loss, such as video streaming or online gaming.
UDP PSEUDO HEADER:
 The purpose of using a pseudo-header is to verify that the UDP packet has reached its
correct destination.
 The correct destination consists of a specific machine and a specific protocol port
number within that machine.
Figure – UDP pseudo header
Figure – UDP pseudo header details
 The UDP header itself specifies only protocol port numbers. Thus, to verify the
destination, UDP on the sending machine computes a checksum that covers the
destination IP address as well as the UDP packet.
 At the ultimate destination, UDP software verifies the checksum using the destination
IP address obtained from the header of the IP packet that carried the UDP message.
 If the checksum agrees, then it must be true that the packet has reached the intended
destination host as well as the correct protocol port within that host.
User Interface:
A user interface should allow the creation of new receive ports; receive operations on
the receive ports that return the data octets and an indication of source port and
source address; and an operation that allows a datagram to be sent, specifying the
data and the source and destination ports and addresses.
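The Berkeley sockets API exposes essentially this interface; a minimal loopback sketch in Python (the addresses and payload are illustrative):

```python
import socket

# Receiver: create a "receive port" and read one datagram, obtaining the
# data octets and the source address/port, as the interface describes.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))             # port 0 -> OS picks a free port
port = recv.getsockname()[1]

# Sender: send a datagram, specifying data and destination address/port.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", port))

data, (src_addr, src_port) = recv.recvfrom(1024)
print(data, src_addr)                   # b'hello' 127.0.0.1
recv.close(); send.close()
```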
IP Interface:
 The UDP module must be able to determine the source and destination internet
addresses and the protocol field from the internet header.
 One possible UDP/IP interface would return the whole internet datagram, including
the entire internet header, in response to a receive operation.
 Such an interface would also allow UDP to pass a full internet datagram, complete
with header, to IP to send. IP would verify certain fields for consistency and
compute the internet header checksum.
 The IP interface allows the UDP module to interact with the network layer of the
protocol stack, which is responsible for routing and delivering data across the
network.
 The IP interface provides a mechanism for the UDP module to communicate with
other hosts on the network by providing access to the underlying IP protocol.
 The IP interface can be used by the UDP module to send and receive data packets
over the network, with the help of IP routing and addressing mechanisms.
 The IP interface provides a level of abstraction that allows the UDP module to
interact with the network layer without having to deal with the complexities of IP
routing and addressing directly.
 The IP interface also handles fragmentation and reassembly of IP packets, which is
important for large data transmissions that may exceed the maximum packet size
allowed by the network.
 The IP interface may also provide additional services, such as support for Quality of
Service (QoS) parameters and security mechanisms such as IPsec.
 The IP interface is a critical component of the Internet Protocol Suite, as it enables
communication between hosts on the internet and allows for the seamless
transmission of data packets across the network.

SERVICES
A service is a common technique that allows each layer to communicate with the layers
adjacent to it. Standard terminology is required in layered networks to request and
provide these services. A service is defined as a set of primitive operations. Services
are provided by a layer to each of the layers above it. Below is a diagram showing the
relation between layers at an interface. In the diagram, layers N+1, N, and N-1 are
engaged in the process of communication among each other.
Components Involved and their Functions :
 Service Data Unit (SDU) – An SDU is a piece of information or data that is passed
down by the layer just above the current layer for transmission. It is the unit of
data passed down to a lower layer from an OSI (Open System Interconnection) layer or
sublayer, together with a request to transmit the data. The SDU identifies the
information transferred between entities of peer layers that is not interpreted by
the supporting lower-layer entities.


 Protocol Data Unit (PDU) – A PDU is a single unit of information or data that is
transmitted among entities of peer layers of a computer network. When application
data is passed down the protocol stack on its way to being transmitted over the
network media, some of the protocols add information to it at each level. The PDU
represents and describes the data as it gets transferred from one layer of the OSI
model to another.


 Interface Data Unit (IDU) – IDU is used to have an agreed way of communication
among two layers in a network layered architecture. It is passed from (N+1 to N).
 Service Access Point (SAP) – SAP is generally used as an identifier label for
endpoints of network in OSI networking or model. It is a data structure and
identifier also for a buffer area in memory of system. It is a point in a layer of a
layered architecture where a network is usually provided and where layer just above
layer that provides service can probably have access to it.
 Interface Control Information (ICI) – ICI is a temporary parameter that is passed
between N and N-1 layers to include service functions among two layers.
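The SDU/PDU relationship above can be sketched as a toy encapsulation pipeline (the layer names and header strings here are illustrative, not from any standard):

```python
def encapsulate(sdu: bytes, header: bytes) -> bytes:
    """PDU of layer N = layer-N header + the SDU received from layer N+1."""
    return header + sdu

app_data = b"hello"                       # SDU handed to the transport layer
segment = encapsulate(app_data, b"TCPH")  # transport-layer PDU
packet = encapsulate(segment, b"IPH.")    # network-layer PDU
frame = encapsulate(packet, b"ETH.")      # data-link-layer PDU
print(frame)  # b'ETH.IPH.TCPHhello'
```

Each layer treats the unit from above as opaque data (its SDU) and forms its own PDU by prepending a header, which is exactly the "not interpreted by lower-layer entities" property described for the SDU.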
Benefits:
 Increase in Compatibility – A layered approach to networking and communication
protocols provides greater compatibility among the devices, systems, and networks
they connect.
 Less expensive – Ease of development and implementation translates into greater
efficiency and effectiveness, which in turn translates into larger economic
rationalization and cheaper products without compromising quality.
 Increase in Mobility – Whenever we use layered and segmented strategies in an
architecture design, there will always be an increase in mobility.
 Better Scalability – A layered or hierarchical approach to networking protocol
design and implementation scales much better than a horizontal approach.
TCP FEATURES
TCP (Transmission Control Protocol) is one of the main protocols of the Internet
protocol suite. It lies between the Application and Network Layers and is used to
provide reliable delivery services. It is a connection-oriented protocol for
communication that helps in the exchange of messages between different devices over
a network. It works with the Internet Protocol (IP), which establishes the technique
for sending data packets between computers.
Working of TCP
To make sure that each message reaches its target location intact, the TCP/IP model
breaks down the data into small bundles and afterward reassembles the bundles into the
original message on the opposite end. Sending the information in little bundles of
information makes it simpler to maintain efficiency as opposed to sending everything in
one go.
After a particular message is broken down into bundles, these bundles may travel along
multiple routes if one route is jammed but the destination remains the same.
We can see that the message is broken down, and then reassembled from a different
arrival order at the destination.
For example, When a user requests a web page on the internet, somewhere in the world,
the server processes that request and sends back an HTML Page to that user. The server
makes use of a protocol called the HTTP Protocol. The HTTP then requests the TCP
layer to set the required connection and send the HTML file.
Now, the TCP breaks the data into small packets and forwards it toward the Internet
Protocol (IP) layer. The packets are then sent to the destination through different routes.
The TCP layer in the user’s system waits for the transmission to get finished and
acknowledges once all packets have been received.
Features of TCP/IP
Some of the most prominent features of the Transmission Control Protocol are:
1. Segment Numbering System
 TCP keeps track of the segments being transmitted or received by assigning
numbers to each and every single one of them.
 A specific Byte Number is assigned to data bytes that are to be transferred while
segments are assigned sequence numbers.
 Acknowledgment Numbers are assigned to received segments.
2. Connection Oriented
 It means sender and receiver are connected to each other till the completion of the
process.
 The order of the data is maintained i.e. order remains same before and after
transmission.
3. Full Duplex
 In TCP, data can be transmitted from receiver to sender and vice versa at the same
time.
 This increases the efficiency of data flow between sender and receiver.
4. Flow Control
 Flow control limits the rate at which a sender transfers data. This is done to ensure
reliable delivery.
 The receiver continually hints to the sender on how much data can be received
(using a sliding window)
5. Error Control
 TCP implements an error control mechanism for reliable data transfer
 Error control is byte-oriented
 Segments are checked for error detection
 Error Control includes – Corrupted Segment & Lost Segment Management, Out-of-
order segments, Duplicate segments, etc.
6. Congestion Control
 TCP takes into account the level of congestion in the network
 Congestion level is determined by the amount of data sent by a sender
Advantages
 It is a reliable protocol.
 It provides an error-checking mechanism as well as one for recovery.
 It gives flow control.
 It makes sure that the data reaches the proper destination in the exact order that it
was sent.
 Open Protocol, not owned by any organization or individual.
 It assigns an IP address to each computer on the network and a domain name to each
site thus making each device site to be distinguishable over the network.
Disadvantages
 TCP is made for Wide Area Networks, thus its size can become an issue for small
networks with low resources.
 TCP runs several layers so it can slow down the speed of the network.
 It is not generic in nature, meaning it cannot represent any protocol stack other
than the TCP/IP suite; e.g., it cannot describe a Bluetooth connection.
 There have been no fundamental modifications since its development around 30 years
ago.
SEGMENTS
1. Process-to-Process Communication –
TCP provides a process to process communication, i.e, the transfer of data that takes
place between individual processes executing on end systems. This is done using
port numbers or port addresses. Port numbers are 16 bits long that help identify
which process is sending or receiving data on a host.

2. Stream oriented –
This means that the data is sent and received as a stream of bytes(unlike UDP or IP
that divides the bits into datagrams or packets). However, the network layer, that
provides service for the TCP, sends packets of information not streams of bytes.
Hence, TCP groups a number of bytes together into a segment and adds a header to
each of these segments and then delivers these segments to the network layer. At the
network layer, each of these segments is encapsulated in an IP packet for
transmission. The TCP header has information that is required for control purposes
which will be discussed along with the segment structure.

3. Full-duplex service –
This means that the communication can take place in both directions at the same
time.

4. Connection-oriented service –
Unlike UDP, TCP provides a connection-oriented service. It defines 3 different
phases:
 Connection establishment
 Data transfer
 Connection termination
5. Reliability –
TCP is reliable as it uses checksum for error detection, attempts to recover lost or
corrupted packets by re-transmission, acknowledgement policy and timers. It uses
features like byte number and sequence number and acknowledgement number so as
to ensure reliability. Also, it uses congestion control mechanisms.

6. Multiplexing –
TCP does multiplexing and de-multiplexing at the sender and receiver ends
respectively as a number of logical connections can be established between port
numbers over a physical connection.
Byte number, Sequence number and Acknowledgement number:
All the data bytes that are to be transmitted are numbered and the beginning of this
numbering is arbitrary. Sequence numbers are given to the segments so as to reassemble
the bytes at the receiver end even if they arrive in a different order. The sequence
number of a segment is the byte number of the first byte that is being sent. The
acknowledgement number is required since TCP provides full-duplex service. The
acknowledgement number is the next byte number that the receiver expects to receive
which also provides acknowledgement for receiving the previous bytes.
Example:
In this example we see that A sends acknowledgement number1001, which means that
it has received data bytes till byte number 1000 and expects to receive 1001 next, hence
B next sends data bytes starting from 1001. Similarly, since B has received data bytes
till byte number 13001 after the first data transfer from A to B, therefore B sends
acknowledgement number 13002, the byte number that it expects to receive from A
next.
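The rule in the example above — the acknowledgment number is the byte number of the last byte received, plus one — can be written out directly (the function name is our own):

```python
def ack_number(last_byte_received: int) -> int:
    """Next byte expected = last byte received + 1, per the example above."""
    return last_byte_received + 1

print(ack_number(1000))    # 1001  -> A expects byte 1001 next
print(ack_number(13001))   # 13002 -> B expects byte 13002 next
```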
TCP Segment structure –
A TCP segment consists of data bytes to be sent and a header that is added to the data
by TCP as shown:
The header of a TCP segment can range from 20 to 60 bytes; up to 40 bytes are for
options. If there are no options, the header is 20 bytes; otherwise it can be at most
60 bytes.
Header fields:
 Source Port Address –
A 16-bit field that holds the port address of the application that is sending the data
segment.

 Destination Port Address –
A 16-bit field that holds the port address of the application in the host that is
receiving the data segment.

 Sequence Number –
A 32-bit field that holds the sequence number, i.e, the byte number of the first byte
that is sent in that particular segment. It is used to reassemble the message at the
receiving end of the segments that are received out of order.

 Acknowledgement Number –
A 32-bit field that holds the acknowledgement number, i.e, the byte number that the
receiver expects to receive next. It is an acknowledgement for the previous bytes
being received successfully.

 Header Length (HLEN) –
This is a 4-bit field that indicates the length of the TCP header as a number of
4-byte words. If the header is 20 bytes (the minimum TCP header length), this field
holds 5 (because 5 × 4 = 20); at the maximum length of 60 bytes it holds 15 (because
15 × 4 = 60). Hence, the value of this field is always between 5 and 15.

 Control flags –
These are 6 1-bit control bits that control connection establishment, connection
termination, connection abortion, flow control, mode of transfer etc. Their function
is:
 URG: Urgent pointer is valid
 ACK: Acknowledgement number is valid( used in case of cumulative
acknowledgement)
 PSH: Request for push
 RST: Reset the connection
 SYN: Synchronize sequence numbers
 FIN: Terminate the connection
 Window size –
This field tells the window size of the sending TCP in bytes.

 Checksum –
This field holds the checksum for error control. It is mandatory in TCP as opposed
to UDP.

 Urgent pointer –
This field (valid only if the URG control flag is set) is used to point to data that is
urgently required that needs to reach the receiving process at the earliest. The value
of this field is added to the sequence number to get the byte number of the last
urgent byte.
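The fixed 20-byte part of this header can be packed and unpacked with Python's `struct` module (a sketch; the `parse_tcp_header` helper is our own, not a standard API):

```python
import struct

def parse_tcp_header(hdr: bytes):
    """Unpack the fixed 20-byte TCP header using the layout above."""
    src, dst, seq, ack, off_flags, window, checksum, urg = \
        struct.unpack("!HHIIHHHH", hdr[:20])
    hlen = (off_flags >> 12) * 4        # HLEN is in 4-byte words
    flags = off_flags & 0x3F            # URG/ACK/PSH/RST/SYN/FIN bits
    return src, dst, seq, ack, hlen, flags, window

# A SYN segment: data offset 5 (20-byte header), SYN flag (0x02) set.
hdr = struct.pack("!HHIIHHHH", 5000, 80, 521, 0, (5 << 12) | 0x02, 14600, 0, 0)
print(parse_tcp_header(hdr))  # (5000, 80, 521, 0, 20, 2, 14600)
```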

A TCP CONNECTION
Connection Establishment –
1. Sender starts the process with the following:
 Sequence number (Seq=521): contains the random initial sequence number
generated at the sender side.
 Syn flag (Syn=1): request the receiver to synchronize its sequence number with the
above-provided sequence number.
 Maximum segment size (MSS=1460 B): sender tells its maximum segment size, so
that receiver sends datagram which won’t require any fragmentation. MSS field is
present inside Option field in TCP header.
 Window size (window=14600 B): sender tells about his buffer capacity in which he
has to store messages from the receiver.
2. TCP is a full-duplex protocol so both sender and receiver require a window for
receiving messages from one another.
 Sequence number (Seq=2000): contains the random initial sequence number
generated at the receiver side.
 Syn flag (Syn=1): request the sender to synchronize its sequence number with the
above-provided sequence number.
 Maximum segment size (MSS=500 B): receiver tells its maximum segment size, so
that sender sends datagram which won’t require any fragmentation. MSS field is
present inside Option field in TCP header.
Since MSS(receiver) < MSS(sender), both parties agree on the minimum MSS, i.e., 500 B,
to avoid fragmentation of packets at both ends.
Therefore, receiver can send maximum of 14600/500 = 29 packets.
This is the receiver's sending window size.
 Window size (window=10000 B): receiver tells about his buffer capacity in which
he has to store messages from the sender.
Therefore, sender can send a maximum of 10000/500 = 20 packets.
This is the sender's sending window size.
 Acknowledgement Number (Ack no.=522): Since sequence number 521 is
received by the receiver so, it makes a request for the next sequence number with
Ack no.=522 which is the next packet expected by the receiver since Syn flag
consumes 1 sequence no.
 ACK flag (ACk=1): tells that the acknowledgement number field contains the next
sequence expected by the receiver.
3. Sender makes the final reply for connection establishment in the following way:
 Sequence number (Seq=522): since sequence number = 521 in 1st step and SYN
flag consumes one sequence number hence, the next sequence number will be 522.
 Acknowledgement Number (Ack no.=2001): since the sender is acknowledging
SYN=1 packet from the receiver with sequence number 2000 so, the next sequence
number expected is 2001.
 ACK flag (ACK=1): tells that the acknowledgement number field contains the next
sequence expected by the sender.
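The window arithmetic in the handshake above can be reproduced directly: each side may have floor(advertised window / agreed MSS) packets outstanding.

```python
# Values from the handshake described above.
mss = min(1460, 500)                 # both sides agree on the smaller MSS
receiver_window_pkts = 14600 // mss  # sender advertised a 14600 B buffer
sender_window_pkts = 10000 // mss    # receiver advertised a 10000 B buffer
print(mss, receiver_window_pkts, sender_window_pkts)  # 500 29 20
```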

FLOW CONTROL
Flow control is a set of procedures that tells the sender how much data, or how many
frames, it can transmit before the data overwhelms the receiver. The receiving device
has only limited speed and limited memory to store data. This is why the receiving
device should be able to inform the sender to stop the transmission temporarily before
the limit is reached. It also needs a buffer, a large block of memory, for storing data
or frames until they are processed.
Flow control can also be understood as a speed-matching mechanism for two stations.
Approaches to Flow Control: Flow control is classified into two categories:
 Feedback-based Flow Control: In this technique, the sender transmits data or frames
to the receiver, and the receiver sends information back permitting the sender to
transmit more data, or telling the sender how the receiver is doing. In short, the
sender transmits data or frames only after it has received an acknowledgement from
the receiver.
 Rate-based Flow Control: In this technique, when a sender transfers data faster than
the receiver is able to receive it, a built-in mechanism in the protocol limits the
overall rate at which the sender transmits data, without any feedback or
acknowledgement from the receiver.
Techniques of Flow Control in Data Link Layer : There are basically two types of
techniques being developed to control the flow of data

1. Stop-and-Wait Flow Control: This method is the simplest form of flow control. The
message is broken down into multiple frames, and the receiver indicates its readiness
to receive a frame of data. Only when an acknowledgement is received does the sender
send the next frame. This process continues until the sender transmits an EOT (End of
Transmission) frame. Only one frame can be in transmission at a time, which leads to
inefficiency, i.e. low throughput, if the propagation delay is much longer than the
transmission delay. Ultimately, in this method the sender sends a single frame, and
the receiver takes one frame at a time and sends an acknowledgement (which is the
next expected frame number) for a new frame.
Advantages –
 This method is the easiest and simplest, and each of the frames is checked and
acknowledged properly.
 This method is also very accurate.
Disadvantages –
 This method is fairly slow.
 Only one packet or frame can be sent at a time.
 It is inefficient and makes the transmission process very slow.
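The inefficiency noted above can be made concrete. A minimal sketch (the function name and units are illustrative, not from the text): in Stop-and-Wait the sender is busy for one transmission time out of every transmission-plus-round-trip interval.

```python
def stop_and_wait_utilization(t_transmit, t_propagate):
    """Fraction of time the link carries data under Stop-and-Wait:
    one frame transmission per transmission + round-trip interval."""
    return t_transmit / (t_transmit + 2 * t_propagate)

# Example: 1 ms to transmit a frame, 10 ms one-way propagation delay
u = stop_and_wait_utilization(1.0, 10.0)   # 1/21, i.e. under 5% utilization
```

When the propagation delay dominates the transmission delay, as assumed here, the link sits idle most of the time, which is exactly the inefficiency the text describes.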
2. Sliding Window Flow Control : This method is needed where reliable, in-order
delivery of packets or frames is required, as in the data link layer. It is a point-to-
point protocol that assumes no other entity tries to communicate until the current
data or frame transfer is complete. In this method, the sender transmits several
frames or packets before receiving any acknowledgement. Both the sender and the
receiver agree on the total number of data frames after which an acknowledgement
must be transmitted. The data link layer uses this method because it allows the sender
to have more than one unacknowledged packet "in flight" at a time, which increases
and improves network throughput. In short, the sender sends multiple frames, and the
receiver takes them one by one, acknowledging each completed frame (with the next
expected frame number) before accepting a new one.
Advantages –
 It performs much better than stop-and-wait flow control.
 This method increases efficiency.
 Multiples frames can be sent one after another.
Disadvantages –
 The main issue is complexity at the sender and receiver due to the handling of
multiple outstanding frames.
 The receiver might receive data frames or packets out of sequence.
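The throughput gain over Stop-and-Wait can be quantified with the same delay model (a sketch with illustrative names): a window of N frames keeps the link busy for up to N transmission times per round trip.

```python
def sliding_window_utilization(window_size, t_transmit, t_propagate):
    """Utilization with a window of `window_size` frames: up to N frame
    transmissions fit into each transmission + round-trip interval,
    capped at 100%."""
    cycle = t_transmit + 2 * t_propagate
    return min(1.0, window_size * t_transmit / cycle)

# Same link as before (1 ms transmit, 10 ms propagation): a window of 7
# raises utilization from 1/21 to 7/21; a window of 21 or more saturates it.
u = sliding_window_utilization(7, 1.0, 10.0)
```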

ERROR CONTROL
Ways of doing Error Control : There are basically two ways of doing Error control as
given below :

Ways of Error Control

1. Error Detection : Error detection, as the name suggests, simply means detection or
identification of errors. These errors may occur due to noise or any other
impairments during transmission from transmitter to the receiver, in communication
system. It is a class of techniques for detecting garbled i.e. unclear and distorted data
or messages.
2. Error Correction : Error correction, as the name suggests, simply means the
correction or fixing of errors, i.e. the reconstruction of the original, error-free
data. But error correction methods are very costly and hard.
Various Techniques for Error Control : There are various techniques of error control
as given below :
1. Stop-and-Wait ARQ : Stop-and-Wait ARQ is also known as the alternating bit
protocol. It is one of the simplest flow and error control techniques or mechanisms,
generally used in telecommunications to transmit data or information between two
connected devices. The receiver indicates its readiness to receive each frame. The
sender sends a data packet to the receiver, then stops and waits for an ACK
(acknowledgement) from the receiver. If the ACK does not arrive within a given time
period, i.e., the time-out, the sender resends the frame and waits for the ACK again.
If the sender receives the ACK, it transmits the next data packet and again waits for
an ACK from the receiver. This stop-and-wait process continues until the sender has
no data frame or packet left to send.
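The stop-wait-resend cycle above can be sketched as a small simulation (not a real network stack; `ack_arrived` is a hypothetical callback that reports whether the ACK came back before the time-out):

```python
def stop_and_wait_arq(frames, ack_arrived, max_retries=5):
    """Simulate Stop-and-Wait ARQ: send a frame, wait for the ACK,
    resend on time-out, and move on only once the ACK is received."""
    log = []                               # every (seq_bit, frame) put on the wire
    for seq, frame in enumerate(frames):
        for _ in range(max_retries):
            log.append((seq % 2, frame))   # alternating 1-bit sequence number
            if ack_arrived(seq):           # ACK before time-out: next frame
                break
            # otherwise: time-out expired, loop resends the same frame
        else:
            raise TimeoutError(f"frame {seq} never acknowledged")
    return log

# A channel that loses the ACK for frame 1 exactly once:
lost_once = {1}
def ack_arrived(seq):
    if seq in lost_once:
        lost_once.discard(seq)
        return False
    return True

print(stop_and_wait_arq(["A", "B"], ack_arrived))
# [(0, 'A'), (1, 'B'), (1, 'B')]  -- frame B was retransmitted once
```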
2. Sliding Window ARQ : This technique is generally used for continuous
transmission error control. It is further categorized into two categories as given below :
 Go-Back-N ARQ : Go-Back-N ARQ is a form of ARQ protocol in which the
sender continues to transmit up to the total number of frames specified by the
window size even without receiving an ACK (acknowledgement) packet from the
receiver. It uses the sliding window flow control protocol. If no errors occur, its
operation is identical to sliding window.
 Selective Repeat ARQ : Selective Repeat ARQ is also a form of ARQ protocol,
in which only suspected, damaged, or lost data frames are retransmitted. It is
similar to Go-Back-N ARQ but much more efficient, because it reduces the number
of retransmissions: the sender retransmits only those frames for which a NAK is
received. This technique is used less often because of the greater complexity at the
sender and receiver, and because each frame must be acknowledged individually.
The main difference between Go-Back-N ARQ and Selective Repeat ARQ is that in
Go-Back-N ARQ the sender has to retransmit the whole window of frames if any
frame is lost, whereas in Selective Repeat ARQ only the lost data frame is
retransmitted.
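That difference can be shown in a few lines (the helper names are illustrative; each function returns the frames the sender must put back on the wire when the frame at `lost_index` in the window is lost):

```python
def go_back_n_resend(window, lost_index):
    """Go-Back-N: the lost frame and every frame sent after it
    in the window are retransmitted."""
    return window[lost_index:]

def selective_repeat_resend(window, lost_index):
    """Selective Repeat: only the lost frame is retransmitted."""
    return [window[lost_index]]

window = [3, 4, 5, 6, 7]                  # outstanding frame numbers
print(go_back_n_resend(window, 1))        # [4, 5, 6, 7]
print(selective_repeat_resend(window, 1)) # [4]
```

The saving grows with the window size, which is why Selective Repeat wins on lossy links despite the extra bookkeeping.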

Congestion Control
What is congestion?
A state occurring in the network layer when the message traffic is so heavy that it
slows down network response time.

Effects of Congestion
 As delay increases, performance decreases.
 If delay increases, retransmissions occur, making the situation worse.

Congestion control algorithms


 Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
 Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
 There are two congestion control algorithms, which are as follows:

 Leaky Bucket Algorithm


 The leaky bucket algorithm finds its use in the context of network traffic
shaping or rate-limiting.
 Leaky bucket and token bucket implementations are the predominant traffic
shaping algorithms.
 This algorithm is used to control the rate at which traffic is sent to the network and
to shape bursty traffic into a steady traffic stream.
 Its disadvantage is the inefficient use of available network resources.
 Large amounts of network resources, such as bandwidth, may not be used
effectively.

Let us consider an example to understand

Imagine a bucket with a small hole in the bottom. No matter at what rate water enters
the bucket, the outflow is at a constant rate. When the bucket is full, any additional
water entering spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket and the following steps are
involved in leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits packets
at a constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
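The four steps above can be sketched as a tick-based simulation (the function name and tick model are assumptions for illustration, not part of the original text):

```python
from collections import deque

def leaky_bucket(arrivals, bucket_size, out_rate):
    """Tick-based leaky bucket: `arrivals[t]` packets arrive at tick t,
    the queue (bucket) holds at most `bucket_size` packets, and at most
    `out_rate` packets leak out per tick."""
    queue, sent, dropped = deque(), [], 0
    for tick, n in enumerate(arrivals):
        for _ in range(n):                  # step 1: packets thrown into bucket
            if len(queue) < bucket_size:
                queue.append(tick)
            else:
                dropped += 1                # bucket full: packet spills and is lost
        out = 0
        while queue and out < out_rate:     # step 2: constant-rate leak
            queue.popleft()
            out += 1
        sent.append(out)
    return sent, dropped

# A burst of 5 packets is smoothed to at most 2 per tick;
# with a 3-packet bucket, 2 packets of the burst are lost.
print(leaky_bucket([5, 0, 0, 0], bucket_size=3, out_rate=2))
# ([2, 1, 0, 0], 2)
```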
 Token Bucket Algorithm
 The leaky bucket algorithm has a rigid output design: it enforces an average rate
independent of how bursty the traffic is.
 In some applications, when large bursts arrive, the output should be allowed to
speed up. This calls for a more flexible algorithm, preferably one that never loses
information. Therefore, the token bucket algorithm finds its use in network traffic
shaping or rate-limiting.
 It is a control algorithm that indicates when traffic may be sent, based on the
presence of tokens in the bucket.
 The bucket contains tokens. Each token permits the sending of a packet of a
predetermined size. A token is removed from the bucket whenever a packet is sent.
 When tokens are present, a flow is allowed to transmit traffic.
 No tokens means no flow sends its packets. Hence, a flow can transfer traffic up
to its peak burst rate as long as there are enough tokens in the bucket.

Need of token bucket Algorithm:-


The leaky bucket algorithm enforces the output pattern at the average rate, no matter
how bursty the traffic is. So, in order to deal with bursty traffic, we need a flexible
algorithm so that data is not lost. One such algorithm is the token bucket algorithm.

Steps of this algorithm can be described as follows:

1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.

Let’s understand with an example,

In figure (A) we see a bucket holding three tokens, with five packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token. In
figure (B) We see that three of the five packets have gotten through, but the other two
are stuck waiting for more tokens to be generated.
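The figure's scenario can be reproduced with a short simulation (a sketch; the tick model and names are illustrative assumptions):

```python
def token_bucket(arrivals, capacity, token_rate, tokens=0):
    """Tick-based token bucket: `token_rate` tokens are added per tick
    (up to `capacity`); each queued packet consumes one token to be sent,
    and packets wait (rather than being dropped) when tokens run out."""
    waiting, sent = 0, []
    for n in arrivals:
        tokens = min(capacity, tokens + token_rate)  # refill, capped at capacity
        waiting += n                                 # new packets join the queue
        out = min(waiting, tokens)                   # one token per packet sent
        tokens -= out
        waiting -= out
        sent.append(out)
    return sent, waiting

# Figures (A)/(B): 3 tokens in the bucket, 5 packets waiting, no new
# tokens generated yet -> 3 packets get through, 2 are stuck waiting.
print(token_bucket([5], capacity=3, token_rate=0, tokens=3))
# ([3], 2)
```

Unlike the leaky bucket, accumulated tokens let a burst go out at full speed, which is the flexibility the next paragraph describes.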

Ways in which token bucket is superior to leaky bucket: The leaky bucket algorithm
controls the rate at which packets are introduced into the network, but it is very
conservative in nature. Some flexibility is introduced in the token bucket algorithm. In
the token bucket algorithm, tokens are generated at each tick (up to a certain limit). For
an incoming packet to be transmitted, it must capture a token, and the transmission
takes place at the same rate. Hence some of the bursty packets are transmitted at the
same rate if tokens are available, which introduces some flexibility into the system.

Formula: M × S = C + ρ × S, which solves to S = C / (M − ρ)
where
S – burst length in seconds
M – maximum output rate
ρ – token arrival rate
C – capacity of the token bucket in bytes
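The burst-length relation M × S = C + ρ × S (i.e. S = C / (M − ρ)) can be evaluated numerically; the values below are illustrative assumptions, not from the text:

```python
def max_burst_length(capacity_bytes, max_rate, token_rate):
    """S = C / (M - rho): how long the output can run at the peak rate M
    before the bucket (capacity C, refilled at rate rho) runs dry."""
    return capacity_bytes / (max_rate - token_rate)

# Illustrative values: C = 250 KB bucket, M = 25 MB/s peak output rate,
# rho = 2 MB/s token arrival rate
s = max_burst_length(250e3, 25e6, 2e6)
# s is about 0.011 s: full-speed bursts last only around 11 ms
```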