Network 2
NETWORK LAYER
What is a packet?
All data sent over the Internet is broken down into smaller chunks called
"packets." When Bob sends Alice a message, for instance, his message is broken
down into smaller pieces and then reassembled on Alice's computer. A packet has
two parts: the header, which contains information about the packet itself, and the
body, which is the actual data being sent.
At the network layer, networking software attaches a header to each packet when
the packet is sent out over the Internet, and on the other end, networking software
can use the header to understand how to handle the packet.
A header contains information about the content, source, and destination of each
packet (somewhat like stamping an envelope with a destination and return
address). For example, an IP header contains the destination IP address of each
packet, the total size of the packet, an indication of whether or not the packet has
been fragmented (broken up into still smaller pieces) in transit, and a count of how
many networks the packet has traveled through.
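To make this concrete, here is a short sketch of how networking software might decode those header fields. It is a minimal illustration, not production code: the field layout follows the fixed 20-byte IPv4 header, and the function and dictionary key names are my own.

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Parse the fixed 20-byte IPv4 header into its main fields."""
    # Unpack: version/IHL, DSCP/ECN, total length, identification,
    # flags/fragment offset, TTL, protocol, checksum, src addr, dst addr.
    (ver_ihl, _tos, total_length, _ident, flags_frag,
     ttl, _proto, _checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,           # in bytes
        "total_length": total_length,                 # total packet size
        "more_fragments": bool(flags_frag & 0x2000),  # MF flag: fragmented?
        "fragment_offset": (flags_frag & 0x1FFF) * 8,
        "ttl": ttl,                                   # hops remaining
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: version 4, IHL 5, total length 40,
# TTL 64, destination 192.0.2.126.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([192, 0, 2, 126]))
fields = parse_ipv4_header(sample)
```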
1. The network layer breaks large messages from the upper layer into smaller
packets.
2. It provides connection services, including network-layer flow control,
network-layer error control, and packet sequence control.
1. Logical Addressing
The physical addressing implemented by the data link layer handles the
problem of addressing locally. The network layer adds a header to the packet
coming from the upper layer that includes the logical addresses of the
sender and the receiver.
2. Routing
When independent networks or links are connected together to create an
internetwork (a large network), routing devices (routers or switches) route
the packets to their final destination. This is one of the main functions of
the network layer.
3. Guaranteed Delivery of Packets
The network layer guarantees that the packet will reach its destination.
4. Security
Security is provided by the network layer by using a session key between the
source host and the destination host.
Given below are some benefits of services provided by the network layer:
Through the forwarding service of the network layer, data packets are
transferred from one place to another in the network.
To reduce traffic, the routers in the network layer segment the network into
separate collision and broadcast domains.
Packetization reduces the impact of failures in the data communication system.
A key design issue is determining how packets are routed from source to
destination. Routes can be based on static tables that are wired into the
network and rarely changed. They can also be highly dynamic, being
determined anew for each packet, to reflect the current network load.
If too many packets are present in the subnet at the same time, they will get
into one another's way, forming bottlenecks. The control of such
congestion also belongs to the network layer.
Moreover, the quality of service provided (delay, transit time, jitter, etc.)
is also a network layer issue.
When a packet has to travel from one network to another to get to its
destination, many problems can arise such as:
o The addressing used by the second network may be different from the
first one.
o The second one may not accept the packet at all because it is too
large.
o The protocols may differ, and so on.
It is up to the network layer to overcome all these problems to allow
heterogeneous networks to be interconnected.
1. Packetizing
2. Routing
Routing is the process of moving data from one device to another; routing and
forwarding are two further services offered by the network layer. In a network, there are a
number of routes available from the source to the destination. The network layer
specifies some strategies which find out the best possible route. This process is
referred to as routing. There are a number of routing protocols that are used in
this process and they should be run to help the routers coordinate with each other
and help in establishing communication throughout the network.
3. Forwarding
Forwarding is simply defined as the action applied by each router when a packet
arrives at one of its interfaces. When a router receives a packet from one of its
attached networks, it needs to forward the packet to another attached network
(unicast routing) or to some attached networks (in the case of multicast routing).
Routers are used on the network for forwarding a packet from the local network
to the remote network. So, the process of routing involves packet forwarding
from an entry interface out to an exit interface.
Routing vs. forwarding: forwarding works from the forwarding table; the
router checks the forwarding table and acts according to it.
Performance of a Network
The performance of a network pertains to the measure of service quality of a
network as perceived by the user. There are different ways to measure the
performance of a network, depending upon the nature and design of the network.
Measuring the performance of a network depends on both the quality of the
service the network delivers and the quantity of resources it provides.
Parameters for Measuring Network Performance
Bandwidth
Latency (Delay)
Bandwidth – Delay Product
Throughput
Jitter
BANDWIDTH
LATENCY
Note: Since the message is short and the bandwidth is high, the dominant
factor is the propagation time, not the transmission time (which can be
ignored).
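As a sketch of why the note holds, the two delay components can be computed directly. The distances, rates, and the assumed signal speed of 2 x 10^8 m/s are illustrative values, not taken from the text.

```python
def propagation_time(distance_m: float, speed_mps: float = 2.0e8) -> float:
    """Time for a signal to travel the length of the link (seconds)."""
    return distance_m / speed_mps

def transmission_time(message_bits: int, bandwidth_bps: float) -> float:
    """Time to push all of the message's bits onto the wire (seconds)."""
    return message_bits / bandwidth_bps

# A short 2,000-bit message over a 12,000 km link at 1 Gbps:
prop = propagation_time(12_000_000)    # the dominant factor
trans = transmission_time(2_000, 1e9)  # negligible by comparison
```

With a short message and a fast link, transmission time is several orders of magnitude smaller than propagation time, which is exactly the situation the note describes.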
Here are some of the types of delays that can occur in packet switching:
Queuing Time
Queuing time is the time a packet spends waiting inside a router. Quite
frequently the wire is busy, so we are not able to transmit a packet
immediately. The queuing time is not a fixed factor; it changes with the load
on the network. In such cases the packet sits in a queue, ready to go. These
delays are predominantly characterized by the amount of traffic on the
system: the more the traffic, the more likely a packet is stuck in the queue,
sitting in memory, waiting.
Processing Delay
Processing delay is the delay based on how long it takes the router to figure out
where to send the packet. As soon as the router finds it out, it will queue the
packet for transmission. These costs are predominantly based on the complexity
of the protocol. The router must decipher enough of the packet to make sense of
which queue to put the packet in. Typically the lower-level layers of the stack
have simpler protocols. If a router does not know which physical port to send the
packet to, it will send it to all the ports, queuing the packet in many queues
immediately. In contrast, at a higher level, such as with IP, the processing
may include making an ARP request to find out the physical address of the
destination before queuing the packet for transmission. This, too, may be
considered processing delay.
Case 2: Assume a link with a bandwidth of 3 bps and a delay of 5 s. There
can then be a maximum of 3 x 5 = 15 bits on the line: 3 bits enter the line
each second, and each bit takes 5 s to cross it. (Each bit itself occupies
the line for 1/3 s, about 0.33 s, while being transmitted.)
Bandwidth-Delay Product
For both examples, the product of bandwidth and delay is the number of bits that
can fill the link. This estimation is significant in the event that we have to send
data in bursts and wait for the acknowledgment of each burst before sending the
following one. To utilize the maximum capability of the link, we have to make
the size of our burst twice the product of bandwidth and delay, so as to fill
up the full-duplex channel. The sender ought to send a burst of data of
(2*bandwidth*delay) bits. The sender at that point waits for the receiver’s
acknowledgement for part of the burst before sending another burst. The amount:
2*bandwidth*delay is the number of bits that can be in transition at any time.
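The arithmetic above can be sketched in a few lines; the function names are illustrative.

```python
def bandwidth_delay_product(bandwidth_bps: float, delay_s: float) -> float:
    """Number of bits that can fill the link in one direction."""
    return bandwidth_bps * delay_s

# Case 2 from the text: a 3 bps link with a 5 s delay holds 15 bits.
bdp = bandwidth_delay_product(3, 5)

# To keep a full-duplex link busy while waiting for acknowledgements,
# the sender should send bursts of twice the product:
burst = 2 * bdp
```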
THROUGHPUT
JITTER
Jitter is another performance issue related to delay. In technical terms,
jitter is "packet delay variation". Jitter is considered a problem when
different packets of data face different delays in a network and the data at
the receiver application is time-sensitive, i.e. audio or video data. Jitter
is measured in milliseconds (ms). It is defined as an interference in the
normal order of sending data packets. For example: if the delay for the first packet is 10 ms,
for the second is 35 ms, and for the third is 50 ms, then the real-time destination
application that uses the packets experiences jitter.
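One simple way to quantify that example is to average the variation between consecutive packet delays; note this is only one of several definitions of jitter in use (RTP, for instance, uses a smoothed running estimate).

```python
def jitter_ms(delays_ms: list) -> float:
    """Mean absolute variation between consecutive packet delays (ms)."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Delays from the text: 10 ms, 35 ms, 50 ms.
j = jitter_ms([10, 35, 50])  # (25 + 15) / 2 = 20 ms of jitter
```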
Simply put, jitter is any deviation in, or displacement of, the signal pulses
in a high-frequency digital signal. The deviation can be in the amplitude,
the width of the signal pulse, or the phase timing. The major causes of
jitter are electromagnetic interference (EMI) and crosstalk between signals.
Jitter can lead to flickering of a display screen, affect the ability of a
processor in a desktop or server to perform as expected, introduce clicks or
other undesired effects in audio signals, and cause loss of transmitted data
between network devices. Jitter is harmful: it causes network congestion and
packet loss.
Congestion is like a traffic jam on the highway: cars cannot move forward at
a reasonable speed. Likewise, in congestion all the packets arrive at a
junction at the same time, and nothing gets through.
The second negative effect is packet loss. When packets arrive at unexpected
intervals, the receiving system is not able to process the information, which
leads to missing information also called “packet loss”. This has negative
effects on video viewing. If a video becomes pixelated and skips, the network
is experiencing jitter, and the result is packet loss. When
you are playing a game online, the effect of packet loss can be that a player
begins moving around on the screen randomly. Even worse, the game goes
from one scene to the next, skipping over part of the gameplay.
Jitter
In the above image, it can be noticed that the time it takes for packets to be sent
is not the same as the time in which they will arrive at the receiver side. One of
the packets faces an unexpected delay on its way and is received after the
expected time. This is jitter.
A jitter buffer can reduce the effects of jitter, either in a network, on a router or
switch, or on a computer. The system at the destination receiving the network
packets usually receives them from the buffer and not from the source system
directly. Each packet is fed out of the buffer at a regular rate. Another approach
to diminish jitter in case of multiple paths for traffic is to selectively route traffic
along the most stable paths or to always pick the path that can come closest to
the targeted packet delivery rate.
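A toy model of such a jitter buffer might look like the following. The 20 ms packetization interval and the arrival times are assumed values, and real jitter buffers are adaptive rather than this simple.

```python
def play_out(arrivals_ms: list, buffer_ms: int):
    """Schedule each packet's play-out time given a fixed jitter buffer.

    Packet i is played at first_arrival + buffer + i * interval; a packet
    arriving after its slot is counted as lost (too late to play).
    """
    interval = 20  # assumed packetization interval, e.g. 20 ms audio frames
    base = arrivals_ms[0] + buffer_ms
    played, lost = [], 0
    for i, arrival in enumerate(arrivals_ms):
        slot = base + i * interval
        if arrival <= slot:
            played.append(slot)  # released from the buffer at a regular rate
        else:
            lost += 1            # missed its slot: treated as packet loss
    return played, lost

# Packets sent every 20 ms; the third one is delayed on the network.
played, lost = play_out([0, 20, 75, 60], buffer_ms=30)
```

The buffer trades extra fixed delay (30 ms here) for a steady play-out rate; a deeper buffer absorbs more jitter but makes the stream less responsive.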
Factors Affecting Network Performance
The following factors affect network performance:
Network Infrastructure
Applications used in the Network
Network Issues
Network Security
Applications used in the Network
Applications used in the network can also have an impact on the performance
of the network: applications that perform poorly can take up large amounts of
bandwidth, and more complicated applications need maintenance, which also
affects network performance.
Network Issues
In short, network performance is measured primarily in terms of bandwidth and
latency, and five parameters are used to measure it:
Bandwidth
Throughput
Latency
Bandwidth-Delay Product
Jitter
SERVICES PROVIDED BY THE TRANSPORT LAYER
The services provided by the transport layer are similar to those of the data link
layer. The data link layer provides the services within a single network while the
transport layer provides the services across an internetwork made up of many
networks. The data link layer controls the physical layer while the transport layer
controls all the lower layers.
The services provided by the transport layer protocols can be divided into five
categories:
o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing
End-to-end delivery:
The transport layer transmits the entire message to the destination. Therefore, it
ensures the end-to-end delivery of an entire message from a source to the
destination.
Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and
damaged packets.
The reliable delivery has four aspects:
o Error control
o Sequence control
o Loss control
o Duplication control
Error Control
Loss Control
Loss Control is a third aspect of reliability. The transport layer ensures that all the
fragments of a transmission arrive at the destination, not some of them. On the
sending end, all the fragments of transmission are given sequence numbers by a
transport layer. These sequence numbers allow the receiver's transport layer to
identify the missing segment.
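The receiver-side check described above amounts to set subtraction over sequence numbers; a minimal sketch:

```python
def missing_segments(received_seq: list, total: int) -> list:
    """Sequence numbers the receiver's transport layer never saw."""
    return sorted(set(range(1, total + 1)) - set(received_seq))

# Six fragments were sent, but only four arrived:
gaps = missing_segments([1, 2, 4, 6], total=6)  # segments 3 and 5 are missing
```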
Duplication Control
Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If
the receiver is overloaded with too much data, it discards packets and asks
for their retransmission, which increases network congestion and thus reduces
system performance. The transport layer is responsible for flow control. It
uses the sliding window protocol, which makes data transmission more
efficient and controls the flow of data so that the receiver does not become
overwhelmed. The sliding window protocol is byte-oriented rather than
frame-oriented.
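A highly simplified, toy sliding-window sender could look like this. Real protocols such as TCP add timers, retransmission, and variable window advertisements, none of which are modeled here; the parameter names are my own.

```python
def sliding_window_send(data: bytes, window: int, ack_every: int) -> list:
    """Toy byte-oriented sliding window: keep at most `window` unacked bytes.

    The receiver acknowledges after every `ack_every` bytes, sliding the
    window forward so the sender never overwhelms it.
    """
    sent = []       # order in which byte offsets were transmitted
    base = 0        # first unacknowledged byte
    next_byte = 0   # next byte to transmit
    while base < len(data):
        # Send while the window has room.
        while next_byte < len(data) and next_byte - base < window:
            sent.append(next_byte)
            next_byte += 1
        # Receiver acknowledges a chunk; the window slides forward.
        base = min(base + ack_every, next_byte)
    return sent

order = sliding_window_send(b"abcdefgh", window=4, ack_every=2)
```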
Multiplexing
o According to the layered model, the transport layer interacts with the
functions of the session layer. Many protocols combine session,
presentation, and application layer protocols into a single layer known as the
application layer. In these cases, delivery to the session layer means the
delivery to the application layer. Data generated by an application on one
machine must be transmitted to the correct application on another machine.
In this case, addressing is provided by the transport layer.
o The transport layer provides the user address which is specified as a station
or port. The port variable represents a particular TS user of a specified
station known as a Transport Service access point (TSAP). Each station has
only one transport entity.
o The transport layer protocols need to know which upper-layer protocols are
communicating.
IMPLEMENTATION OF CONNECTIONLESS SERVICES
Type of connectionless service: unreliable datagram (example: electronic
junk mail).
Advantages:
It is very fast and also allows multicast and broadcast operations, in which
the same data is transferred to multiple recipients in a single transmission.
The effect of any error that occurs can be reduced by implementing error
correction within an application protocol.
This service is simple and easy to use, with low overhead.
At the network layer, the host software is much simpler.
No authentication is required in this service.
Some applications do not even require sequential delivery of packets or
data; examples include packet voice.
Disadvantages:
This service is less reliable than connection-oriented service.
It does not guarantee that there will be no loss, error, misdelivery,
duplication, or out-of-sequence delivery of packets.
It is more prone to network congestion.
Instead, when a connection is established, a route from the source machine to
the destination machine is chosen as part of the connection setup and stored
in tables inside the routers. That route is used for all traffic flowing over
the connection, exactly as the telephone system works.
Step 1 − Host H1 has established connection 1 with host H2, which is remembered
as the first entry in every routing table.
Step 2 − The first entry in A's table says that a packet bearing connection
identifier 1 arriving from host H1 is to be sent to router W and given
connection identifier 1.
Step 3 − Similarly, the first entry at W routes the packet to Y, also with connection
identifier 1.
Step 5 − Note that we have a conflict here: although we can easily
distinguish connection 1 packets from H1 from connection 1 packets from H3, W
cannot do this.
Step 6 − For this reason, we assign a different connection identifier to the outgoing
traffic for the second connection. Avoiding conflicts of this kind is why routers
need the ability to replace connection identifiers in outgoing packets. In some
contexts, this is called label switching.
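The per-router tables described in these steps can be sketched as a mapping from (incoming port, incoming connection identifier) to (outgoing port, outgoing connection identifier); the port names here are hypothetical.

```python
class VCRouter:
    """Toy virtual-circuit router that rewrites connection identifiers."""

    def __init__(self):
        self.table = {}  # (in_port, in_id) -> (out_port, out_id)

    def add_entry(self, in_port, in_id, out_port, out_id):
        self.table[(in_port, in_id)] = (out_port, out_id)

    def forward(self, in_port, conn_id):
        """Return (next port, rewritten connection id) for a packet."""
        return self.table[(in_port, conn_id)]

w = VCRouter()
w.add_entry("from_A", 1, "to_Y", 1)  # first connection keeps identifier 1
w.add_entry("from_B", 1, "to_Y", 2)  # second one is relabelled to 2
port, label = w.forward("from_B", 1)
```

Because each entry is keyed by the incoming port as well as the identifier, two connections that both arrive labelled 1 can be told apart and relabelled on the way out, which is exactly the conflict avoidance the text describes.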
Operations:
There is a sequence of operations that needs to be followed by users. These
operations are given below:
1. Establishing the Connection –
It generally requires a session connection to be established before any
data is transported, with a direct physical connection among sessions.
2. Transferring Data or Messages –
Once the session connection is established, the message or data is
transferred.
3. Releasing the Connection –
After sending or transferring the data, the connection is released.
Different Ways:
There are two ways in which connection-oriented service can be provided.
These ways are given below:
1. Circuit-Switched Connection –
Circuit-switched networks are generally known as connection-oriented
networks. In this connection, a dedicated route is established between
sender and receiver, and the whole message is sent through it: a dedicated
physical route, path, or circuit is established among the communicating
nodes, and after that the data stream or message is transferred.
2. Virtual Circuit-Switched Connection –
Virtual circuit switching is also known as connection-oriented switching.
In this connection, a preplanned route or path is established before data
or messages are transferred. The message is transferred over the network
in such a way that it seems to the user that there is a dedicated route or
path from the sender to the receiver.
Types of Connection-Oriented Service:
Service: reliable message stream. Example: a sequence of pages.
Advantages:
It readily supports quality of service.
This connection is more reliable than connectionless service.
Long and large messages can be divided into smaller messages so that they
fit inside packets.
Problems related to duplicate data packets are made less severe.
Disadvantages:
In this connection, the cost is fixed no matter how much traffic there is.
It is necessary to have resource allocation before communication.
If any route or path failure or network congestion arises, there is no
alternative way available to continue communication.
COMPARISON OF VIRTUAL CIRCUIT AND DATAGRAM SUBNETS
Congestion Control: a virtual-circuit subnet uses network-assisted congestion
control, where routers monitor network conditions and may drop packets or
send congestion signals to the sender; a datagram subnet uses end-to-end
congestion control, where the sender adjusts its rate of transmission based
on feedback from the network.
Error Control: a virtual-circuit subnet provides reliable delivery of packets
by detecting and retransmitting lost or corrupted packets; a datagram subnet
provides unreliable delivery of packets and does not guarantee delivery or
correctness.
Example Protocol: ATM and Frame Relay for virtual circuits; IP (Internet
Protocol) for datagrams.
Virtual Circuits:
1. It is connection-oriented, meaning that there is a reservation of resources like
buffers, CPU, bandwidth, etc. for the time in which the newly setup VC is
going to be used by a data transfer session.
2. The first sent packet reserves resources at each server along the path.
Subsequent packets will follow the same path as the first sent packet for the
connection time.
3. Since all the packets are going to follow the same path, a global header
is required only for the first packet of the connection; the remaining
packets generally don't require global headers.
4. Since all packets follow a specific path, packets are received in order at the
destination.
5. Virtual Circuit Switching ensures that all packets successfully reach the
Destination. No packet will be discarded due to the unavailability of
resources.
6. From the above points, it can be concluded that Virtual Circuits are a highly
reliable method of data transfer.
7. The issue with virtual circuits is that each time a new connection is set up,
resources and extra information have to be reserved at every router along the
path, which becomes problematic if many clients are trying to reserve a
router’s resources simultaneously.
8. It is used by the ATM (Asynchronous Transfer Mode) Network, specifically
for Telephone calls.
Datagram Networks :
1. It is a connection-less service. There is no need for reservation of resources
as there is no dedicated path for a connection session.
2. All packets are free to use any available path. As a result, intermediate
routers calculate routes on the go due to dynamically changing routing tables
on routers.
3. Since every packet is free to choose any path, all packets must be associated
with a header with proper information about the source and the upper layer
data.
4. The connection-less property makes data packets reach the destination in any
order, which means that they can potentially be received out of order at the
receiver’s end.
5. Datagram networks are not as reliable as Virtual Circuits.
6. The major drawback of Datagram Packet switching is that a packet can only
be forwarded if resources such as the buffer, CPU, and bandwidth are
available. Otherwise, the packet will be discarded.
7. But it is always easy and cost-efficient to implement datagram networks,
as there is no extra headache of reserving resources and making a dedicated
path each time an application has to communicate.
8. It is generally used by the IP network, which is used for Data services like
the Internet.
IPV4 ADDRESS
IP stands for Internet Protocol and v4 stands for Version Four (IPv4). IPv4
was the first version brought into production use, in the ARPANET in 1983.
IPv4 addresses are 32-bit integers, usually expressed in dotted-decimal
notation.
Example: 192.0.2.126 is an IPv4 address.
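The dotted-decimal form is just a rendering of the underlying 32-bit integer, as this small sketch shows; the function names are my own.

```python
def to_int(dotted: str) -> int:
    """Dotted-decimal IPv4 address -> 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n: int) -> str:
    """32-bit integer -> dotted-decimal IPv4 address."""
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

# The example address from the text:
n = to_int("192.0.2.126")
```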
Parts of IPv4
Network part:
The network part is the unique number assigned to the network. It also
identifies the class of the network.
Host Part:
The host part uniquely identifies the machine on the network. This part of
the IPv4 address is assigned to every host.
For each host on the network, the network part is the same, but the host
part must differ.
Subnet number:
This is an optional part of IPv4. Local networks that have large numbers of
hosts are divided into subnets, and subnet numbers are assigned to them.
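Splitting an address into its network and host parts is a pair of mask operations; the 255.255.255.0 mask below is an assumed example.

```python
def split_address(addr: str, mask: str):
    """Split a dotted-decimal IPv4 address into network and host parts."""
    to_int = lambda s: int.from_bytes(bytes(int(x) for x in s.split(".")), "big")
    to_dot = lambda n: ".".join(str(b) for b in n.to_bytes(4, "big"))
    a, m = to_int(addr), to_int(mask)
    network = a & m                   # bits covered by the mask
    host = a & ~m & 0xFFFFFFFF        # the remaining (host) bits
    return to_dot(network), to_dot(host)

net, host = split_address("192.0.2.126", "255.255.255.0")
```

Every host on this network shares the network part 192.0.2.0, while the host part (here .126) must be unique, matching the rule stated above.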
Characteristics of IPv4
IPv4 addresses are 32 bits long.
An IPv4 address is numeric, and its parts are separated by dots.
The header has twelve fields, and its minimum length is twenty bytes.
It supports unicast, broadcast, and multicast addresses.
IPv4 supports VLSM (Variable Length Subnet Mask).
IPv4 uses the Address Resolution Protocol (ARP) to map IP addresses to MAC
addresses.
RIP is a routing protocol supported by the routed daemon.
Networks must be configured either manually or with DHCP.
Packet fragmentation is permitted at routers and at the sending host.
Advantages of IPv4
IPv4 security permits encryption to maintain privacy and security.
IPv4 network allocation is significant and currently has more than 85,000
functional routers.
It is easy to connect multiple devices across a large network without NAT.
It is a communication model that provides quality of service as well as
economical data transfer.
IPv4 addresses are redefined and permit flawless encoding.
Routing is more scalable and economical because addresses are aggregated
more effectively.
Data communication across the network becomes more specific in multicast
organizations.
Disadvantages of IPv4
Its limited address space limits Internet growth for existing users and
hinders use of the Internet for new users.
Internet routing is inefficient in IPv4.
IPv4 has high system-management costs, and it is labor-intensive, complex,
slow, and error-prone.
Security features are optional.
It is difficult to add support for future needs, because doing so adds very
high overhead and hinders the ability to connect everything over IP.
Limitations of IPv4
IP relies on network-layer addresses to identify end-points on a network,
and each network has a unique IP address.
The world's supply of unique IP addresses is dwindling and could,
theoretically, eventually run out.
If a network has more hosts than its class allows, it needs addresses from
the next larger class.
Complex host and routing configuration, non-hierarchical addressing, the
difficulty of renumbering addresses, large routing tables, and non-trivial
implementations of security, QoS (Quality of Service), mobility,
multi-homing, and multicasting are the big limitations of IPv4; that is why
IPv6 came into the picture.
FORWARDING OF IP PACKETS
Routers are used on the network for forwarding a packet from the local network
to the remote network. So, the process of routing involves the packet forwarding
from an entry interface out to an exit interface.
Working:
The following steps are included in the packet forwarding in the router-
The router takes the arriving packet from an entry interface and then
forwards that packet to another interface.
The router needs to select the best possible interface for the packet to reach
the intended destination as there exist multiple interfaces in the router.
The forwarding decision is made by the router based on routing table entries.
The entries in the routing table comprise destination networks and exit
interfaces to which the packet is to be forwarded.
The selection of the exit interface relies on two things: first, the
interface must lead to the target network to which the packet is destined;
second, it must be the best possible path leading to that network.
Packet Forwarding Techniques:
Following are the packet forwarding techniques based on the destination host:
Next-Hop Method: By only maintaining the details of the next hop or next
router in the packet’s path, the next-hop approach reduces the size of the
routing table. The routing table maintained using this method does not have
the information regarding the whole route that the packet must take.
Network-Specific Method: In this method, the entries are not made for all
of the destination hosts in the router’s network. Rather, the entry is made of
the destination networks that are connected to the router.
Host-Specific Method: In this method, the routing table has the entries for
all of the destination hosts in the destination network. With the increase in
the size of the routing table, the efficiency of the routing table decreases. It
finds its application in the process of verification of route and security
purposes.
Default Method: Let’s assume- A host in network N1 is connected to two
routers, one of which (router R1) is connected to network N2 and the other
router R2 to the rest of the internet. As a result, the routing table only has
one default entry for the router R2.
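The selection rule described above, where the most specific matching network wins and a default entry serves as fallback, is longest-prefix matching. Here is a compact sketch using Python's ipaddress module; the interface names and prefixes are hypothetical.

```python
import ipaddress

def forward(routing_table, dst):
    """Pick the exit interface by longest-prefix match, else the default."""
    dst = ipaddress.ip_address(dst)
    best = None
    for prefix, interface in routing_table:
        net = ipaddress.ip_network(prefix)
        # A match counts only if it is more specific than the best so far.
        if dst in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, interface)
    return best[1] if best else None

table = [
    ("192.0.2.0/24", "eth0"),   # network-specific entry
    ("192.0.2.64/26", "eth1"),  # more specific subnet wins over /24
    ("0.0.0.0/0", "eth2"),      # default route (the "default method")
]
iface = forward(table, "192.0.2.100")
```

Because 192.0.2.100 falls inside both 192.0.2.0/24 and the more specific 192.0.2.64/26, the /26 entry wins; any address matching no specific entry falls through to the default route.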
INTERNET PROTOCOL
Internet Protocols are a set of rules that governs the communication and
exchange of data over the internet. Both the sender and receiver should follow
the same protocols in order to communicate the data. In order to understand it
better, let’s take an example of a language. Any language has its own set of
vocabulary and grammar which we need to know if we want to communicate in
that language. Similarly, over the internet whenever we access a website or
exchange some data with another device then these processes are governed by a
set of rules called the internet protocols.
Working of Internet Protocol
The internet and many other data networks work by organizing data into small
pieces called packets. Any large piece of data sent between two network
devices is divided into smaller packets by the underlying hardware and
software. Each network protocol defines the rules for how its data packets
must be organized, in ways specific to the protocols the network supports.
Need of Protocols
It may be that the sender and receiver of data are parts of different networks,
located in different parts of the world having different data transfer rates. So, we
need protocols to manage the flow control of data, and access control of the link
being shared in the communication channel. Suppose there is a sender X who has
a data transmission rate of 10 Mbps. And, there is a receiver Y who has a data
receiving rate of 5 Mbps. Since the rate of receiving the data is slower,
some data will be lost during transmission. To avoid this, receiver Y needs
to inform sender X about the speed mismatch so that sender X can adjust its
transmission rate. Similarly, access control decides which node will access
the link shared in the communication channel at a particular instant in
time; otherwise, data transmitted simultaneously by many computers through
the same link will collide, resulting in corruption or loss of data.
What is IP Addressing?
An IP address is an Internet Protocol address: a unique address that
identifies a device on the network. It follows a set of rules governing the
structure of data sent over the Internet or through a local network. An IP
address helps the Internet to distinguish between different routers,
computers, and websites. It serves as a specific machine identifier in a
specific network and helps to enable virtual communication between source
and destination.
ICMPV4
Introduction
As we are aware, the IPv4 protocol does not have any mechanism to report or
correct errors, so IP functions with the assistance of ICMP to report errors.
ICMP never gets involved in correcting the errors; higher-level protocols
take care of that. ICMPv4 always delivers an error report to the original
source of the datagram. Typical error situations include:
A router with a datagram for a host in another network may not find the next
hop (router) to the final destination host.
The datagram's time-to-live field has become zero.
There may be ambiguity in the header of the IP datagram.
All the fragments of the datagram may fail to arrive at the destination host
within a time limit.
Though ICMP is a network-layer protocol, its messages are not passed directly
to the lower layer (the data link layer). ICMP messages are first
encapsulated in IP datagrams, which are then passed to the lower layer.
Below is the message format for an ICMPv4 message. It has an 8-byte header
and, apart from this, a variable-size data section. Though the header format
changes for each type of message, the first 4 bytes of every message remain
the same.
Among these first 4 bytes, the first byte gives the 'type' of the message,
the second byte (the code) gives the reason behind that type, and the next
two bytes hold the checksum of the message.
The remaining 4 bytes define the rest of the header, which is specific to
each message type. The data section varies according to the type of message:
an error-reporting message's data section holds information to identify the
original datagram that caused the error, while the data section of a query
message holds more information about the type of query.
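A sketch of building an ICMPv4 message with this layout, including the standard Internet checksum computed over the whole message; the payload bytes are illustrative and the function names are my own.

```python
import struct

def checksum(data: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:   # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_icmp(msg_type: int, code: int, rest: bytes, payload: bytes) -> bytes:
    """First 4 bytes: type, code, checksum; next 4: message-specific."""
    header = struct.pack("!BBH", msg_type, code, 0) + rest
    csum = checksum(header + payload)
    return struct.pack("!BBH", msg_type, code, csum) + rest + payload

# A destination-unreachable message: type 3, code 3 (port unreachable).
msg = build_icmp(3, 3, b"\x00\x00\x00\x00", b"original datagram bytes")
msg_type, code, csum = struct.unpack("!BBH", msg[:4])
```

A receiver verifies the message by recomputing the checksum over the entire message; with the checksum field filled in correctly, the result is zero.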
The most important function of ICMPv4 is to report errors, although it is not
responsible for correcting them; the higher-level protocols take the
responsibility of correcting the errors.
ICMPv4 always sends the error report to the original source of the datagram,
as the datagram has only two addresses in its header:
1. the source address, and
2. the destination address.
So, ICMPv4 uses the source address for reporting the error.
ICMPv4 error messages are not generated in response to ICMP error messages,
as this could create infinite repetition.
No error message is generated for a fragment of a datagram unless it is the
first fragment.
No ICMPv4 error message is generated for a datagram carrying a special
address such as 127.0.0.0 or 0.0.0.0.
No message is generated for datagrams with a broadcast address or a
multicast address in the destination field.
Destination Unreachable
If a host or a router is unable to deliver or route a datagram, it discards the datagram
and sends a destination unreachable error message to the original source host.
In the figure above, you can see that the Type field of the destination unreachable
error message is 3. The Code field gives the reason for discarding the message; for
destination unreachable messages, the code ranges from 0 to 15.
Unreachable Messages Codes
The destination host generates the destination unreachable error message with code 2
or 3, while the router generates messages with the remaining codes.
Source Quench
The source quench error message informs the source that the datagram has been
discarded due to congestion at the router or the destination host.
Time Exceeded
1. Code 0 – When the time-to-live field decrements to zero, the router discards the
datagram and sends a time exceeded error message to the originating source of the
datagram.
2. Code 1 – If the destination host does not receive all the fragments of a datagram
within a set time, it discards all the fragments and sends a time exceeded error
message to the source host.
Parameter Problem
If the destination host or the router finds any ambiguity in the header of the IP
datagram, it discards the datagram and sends a parameter problem error message to
the originating source host of the datagram.
1. Code 0 indicates ambiguity in a header field of the datagram; the pointer field's
value points to the byte of the datagram header that has the problem.
2. Code 1 indicates that a required part of the header is missing; here, the pointer
field is not used.
Redirection
A router sends a redirection message to the localhost in the same network to update
its routing table. The router here does not discard the received datagram. Instead, it
forwards it to the appropriate router.
1. The message with code 0 redirects for the network-specific route.
2. The message with code 1 redirects for the host-specific route.
3. The message with code 2 redirects for the network-specific route for a specific
type of service.
4. The message with code 3 redirects for the host-specific route for a specific type of
service.
Query Messages
Query messages are for identifying network problems. Earlier there were five
query messages among which three are deprecated. The two query messages that
are being used today are:
Echo request and reply
When echo request and reply messages are exchanged between two hosts or routers,
they confirm that the two machines can communicate with each other.
Timestamp request and reply
Timestamp request and reply messages calculate the round-trip time, i.e. the time
required by an IP datagram to travel between two hosts or routers. This pair of
messages is also used for synchronizing the clocks of two machines (hosts or
routers).
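As an illustration of how a ping-style tool builds an echo request, here is a sketch in Python (the identifier and payload are arbitrary) that assembles a type-8 ICMPv4 message and fills in the Internet checksum defined by RFC 1071. A correctly checksummed message sums back to zero.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """Echo request: type 8, code 0; checksum covers the whole message."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum = 0 first
    chk = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, chk, ident, seq) + payload

msg = build_echo_request(0x1234, 1, b"ping")
# A receiver validates by re-summing the whole message: result must be 0.
assert internet_checksum(msg) == 0
```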
Key Takeaways
In version 6 of the TCP/IP protocol suite, ICMPv4 was revised into ICMPv6. The
two Internet debugging tools that utilize ICMPv4 are ping and traceroute.
MOBILE INTERNET PROTOCOL (OR MOBILE IP)
Mobile IP is a communication protocol (created by extending Internet
Protocol, IP) that allows the users to move from one network to another with the
same IP address. It ensures that the communication will continue without the
user’s sessions or connections being dropped.
Terminologies:
1. Mobile Node (MN) is the hand-held communication device that the user
carries e.g. Cell phone.
2. Home Network is a network to which the mobile node originally belongs as
per its assigned IP address (home address).
3. Home Agent (HA) is a router in the home network to which the mobile node
was originally connected.
4. Home Address is the permanent IP address assigned to the mobile node
(within its home network).
5. Foreign Network is the current network to which the mobile node is visiting
(away from its home network).
6. Foreign Agent (FA) is a router in a foreign network to which the mobile
node is currently connected. The packets from the home agent are sent to the
foreign agent which delivers them to the mobile node.
7. Correspondent Node (CN) is a device on the internet communicating to the
mobile node.
8. Care-of Address (COA) is the temporary address used by a mobile node
while it is moving away from its home network.
9. Foreign agent COA: the COA may be located at the FA, i.e., the COA is an IP
address of the FA. The FA is the tunnel end-point and forwards packets to the
MN. Many MNs using the FA can share this COA as a common COA.
10. Co-located COA, the COA is co-located if the MN temporarily acquired an
additional IP address which acts as COA. This address is now topologically
correct, and the tunnel endpoint is at the MN. Co-located addresses can be
acquired using services such as DHCP.
Mobile IP
Working:
The correspondent node sends data to the mobile node. Data packets contain the
correspondent node's address (source) and the home address (destination). The
packets reach the home agent, but the mobile node is no longer in the home
network; it has moved into a foreign network. The foreign agent sends the care-of
address to the home agent, to which all the packets should be sent. A tunnel is
then established between the home agent and the foreign agent by the process of
tunneling.
Tunneling establishes a virtual pipe for the packets available between a tunnel
entry and an endpoint. It is the process of sending a packet via a tunnel and it is
achieved by a mechanism called encapsulation.
Now, the home agent encapsulates the data packets into new packets in which the
source address is the home agent's address and the destination is the care-of
address, and sends them through the tunnel to the foreign agent. The foreign
agent, at the other end of the tunnel, receives the data packets, decapsulates them,
and sends them to the mobile node. The mobile node replies to the foreign agent,
and the foreign agent sends the reply directly to the correspondent node.
Key Mechanisms in Mobile IP:
1. Agent Discovery: Agents advertise their presence by periodically
broadcasting their agent advertisement messages. The mobile node receiving
the agent advertisement messages observes whether the message is from its
own home agent and determines whether it is in the home network or foreign
network.
2. Agent Registration: Mobile node after discovering the foreign agent sends a
registration request (RREQ) to the foreign agent. The foreign agent, in turn,
sends the registration request to the home agent with the care-of-address. The
home agent sends a registration reply (RREP) to the foreign agent. Then it
forwards the registration reply to the mobile node and completes the process
of registration.
3. Tunneling: It establishes a virtual pipe for the packets available between a
tunnel entry and an endpoint. It is the process of sending a packet via a
tunnel and it is achieved by a mechanism called encapsulation. It takes place
to forward an IP datagram from the home agent to the care-of-address.
Whenever the home agent receives a packet from the correspondent node, it
encapsulates the packet with the home agent's address as source and the care-of
address as destination.
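The tunneling step can be sketched as nested packets. The addresses below are invented for illustration, and the outer source is taken here to be the home agent's address, as in IP-in-IP encapsulation.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: object   # inner packet, or application data

HOME_ADDR = "10.0.0.5"     # mobile node's permanent home address (assumed)
HA_ADDR   = "10.0.0.1"     # home agent's address (assumed)
COA       = "192.168.7.9"  # care-of address registered via the FA (assumed)

def home_agent_encapsulate(pkt: Packet) -> Packet:
    """IP-in-IP: the original datagram becomes the payload of a new
    datagram addressed to the care-of address."""
    return Packet(src=HA_ADDR, dst=COA, payload=pkt)

def foreign_agent_decapsulate(tunneled: Packet) -> Packet:
    """The FA strips the outer header and delivers the inner datagram."""
    return tunneled.payload

# Correspondent node sends to the mobile node's home address:
original = Packet(src="203.0.113.8", dst=HOME_ADDR, payload="hello")
tunneled = home_agent_encapsulate(original)
delivered = foreign_agent_decapsulate(tunneled)
```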
Route Optimization in Mobile IP:
The route optimization adds a conceptual data structure, the binding cache, to the
correspondent node. The binding cache contains bindings for the mobile node’s
home address and its current care-of-address. Every time the home agent
receives an IP datagram that is destined to a mobile node currently away from
the home network, it sends a binding update to the correspondent node to update
the information in the correspondent node’s binding cache. After this, the
correspondent node can directly tunnel packets to the mobile node. Mobile IP is
provided by the network providers.
Routing Algorithms – Distance Vector Routing, Link State Routing, Path Vector Routing, Unicast
Routing Protocols – Internet Structure, Routing Information Protocol, Open Shortest Path First,
Border Gateway Protocol v4, Broadcast Routing, Multicast Routing, Multicasting Basics,
Intradomain Multicast Protocols, IGMP.
Routing Algorithms
Routing is the process of establishing the routes that data packets must follow
to reach the destination. In this process, a routing table is created which
contains information regarding routes that data packets follow. Various routing
algorithms are used for the purpose of deciding which route an incoming data
packet needs to be transmitted on to reach the destination efficiently.
1. Adaptive Algorithms
These are the algorithms that change their routing decisions whenever the network
topology or traffic load changes. Also known as dynamic routing, they make use of
dynamic information such as current topology, load, and delay to select routes.
Optimization parameters are distance, number of hops, and estimated transit time.
Further, these are classified as follows:
Isolated: In this method, each node makes its routing decisions using only the
information it has, without seeking information from other nodes. The sending
nodes have no information about the status of a particular link. The disadvantage
is that packets may be sent through a congested network, which may result in
delay. Examples: hot-potato routing and backward learning.
Random walk: In this method, a packet is forwarded to a randomly chosen
neighbour.
3. Hybrid Algorithms
As the name suggests, these algorithms are a combination of both adaptive and
non-adaptive algorithms. In this approach, the network is divided into several
regions, and each region uses a different algorithm.
Further, these are classified as follows:
Link-state: In this method, each router creates a detailed and complete map
of the network which is then shared with all other routers. This allows for
more accurate and efficient routing decisions to be made.
Distance vector: In this method, each router maintains a table that contains
information about the distance and direction to every other node in the
network. This table is then shared with other routers in the network. The
disadvantage of this method is that it may lead to routing loops.
Difference between Adaptive and Non-Adaptive
Routing Algorithms
The main difference between Adaptive and Non-Adaptive Algorithms is:
Adaptive Algorithms are the algorithms that change their routing decisions
whenever the network topology or traffic load changes. This is called dynamic
routing. Adaptive algorithms are used for large amounts of data, highly complex
networks, and rerouting of data.
Non-Adaptive Algorithms are algorithms that do not change their routing
decisions once they have been selected. It is also called static Routing. Non-
Adaptive Algorithm is used in case of a small amount of data and a less
complex network.
Difference between Routing and Flooding
The difference between Routing and Flooding is listed below:
Routing: may give the shortest path.
Flooding: always gives the shortest path.
distance-vector routing
A distance-vector routing (DVR) protocol requires that a router inform its
neighbors of topology changes periodically. It is historically known as the old
ARPANET routing algorithm (also known as the Bellman-Ford algorithm).
Bellman Ford Basics – Each router maintains a Distance Vector table
containing the distance between itself and ALL possible destination nodes.
Distances,based on a chosen metric, are computed using information from the
neighbors’ distance vectors.
Information kept by DV router -
Each router has an ID
Associated with each link connected to a router,
there is a link cost (static or dynamic).
Intermediate hops
Consider router X. X will share its routing table with its neighbors, and the
neighbors will share their routing tables with X; the distance from node X to each
destination is then calculated using the Bellman-Ford equation:
Dx(y) = min over all neighbors v of { c(x,v) + Dv(y) } for each node y ∈ N
As we can see, the distance from X to Z is smaller when Y is the intermediate
node (hop), so the routing table of X is updated.
Similarly for Z also –
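The Bellman-Ford update above can be sketched in a few lines of Python. The three-node topology below (X-Y cost 2, X-Z cost 7, Y-Z cost 1) is assumed for illustration and mirrors the X, Y, Z discussion.

```python
INF = float("inf")

def dv_update(x, cost, dv_tables):
    """One distance-vector step: Dx(y) = min over neighbors v of c(x,v) + Dv(y).

    cost[x] maps each neighbor v of x to the link cost c(x,v);
    dv_tables[v][y] is neighbor v's advertised distance to y."""
    new_dv = {}
    for y in dv_tables[x]:
        if y == x:
            new_dv[y] = 0
        else:
            new_dv[y] = min(cost[x][v] + dv_tables[v].get(y, INF)
                            for v in cost[x])
    return new_dv

# Assumed topology: X-Y cost 2, X-Z cost 7, Y-Z cost 1.
cost = {"X": {"Y": 2, "Z": 7}}
dv_tables = {
    "X": {"X": 0, "Y": 2, "Z": 7},
    "Y": {"X": 2, "Y": 0, "Z": 1},
    "Z": {"X": 7, "Y": 1, "Z": 0},
}
updated = dv_update("X", cost, dv_tables)
# Z is now reached via intermediate hop Y: cost 2 + 1 = 3 instead of 7.
```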
Unicast means the transmission from a single sender to a single receiver. It is a point-to-point
communication between the sender and receiver. There are various unicast protocols such as
TCP, HTTP, etc.
TCP is the most commonly used unicast protocol. It is a connection-oriented protocol that relies
on acknowledgment from the receiver side.
HTTP stands for HyperText Transfer Protocol. It is an object-oriented protocol for
communication.
STEP 1: The set sptSet is initially empty and distances assigned to vertices are {0, INF, INF, INF,
INF, INF, INF, INF} where INF indicates infinite. Now pick the vertex with a minimum distance
value. The vertex 0 is picked and included in sptSet. So sptSet becomes {0}. After including 0 to
sptSet, update the distance values of its adjacent vertices. Adjacent vertices of 0 are 1 and 7. The
distance values of 1 and 7 are updated as 4 and 8.
The following subgraph shows the vertices and their distance values. Vertices included in
the SPT are shown in green.
STEP 2: Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). The vertex 1 is picked and added to sptSet. So sptSet now becomes {0, 1}. Update the
distance values of adjacent vertices of 1. The distance value of vertex 2 becomes 12.
STEP 3: Pick the vertex with the minimum distance value that is not already included in
the SPT (not in sptSet). Vertex 7 is picked, so sptSet now becomes {0, 1, 7}. Update the
distance values of the adjacent vertices of 7. The distance values of vertices 6 and 8
become finite (9 and 15, respectively).
STEP 4: Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}. Update the distance values of
adjacent vertices of 6. The distance value of vertex 5 and 8 are updated.
We repeat the above steps until sptSet includes all vertices of the given graph. Finally, we get the
following Shortest Path Tree (SPT).
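The steps above can be sketched with a heap-based Dijkstra in Python. The adjacency list below is an assumed 9-vertex graph chosen to match the distances in the walkthrough (edge 0-1 weight 4, edge 0-7 weight 8, and so on).

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances by Dijkstra's algorithm.
    graph: {node: {neighbor: edge_weight}}; returns {node: distance}."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]          # (tentative distance, node) min-heap
    spt_set = set()             # vertices whose distance is final
    while pq:
        d, u = heapq.heappop(pq)
        if u in spt_set:
            continue
        spt_set.add(u)
        for v, w in graph[u].items():
            if d + w < dist[v]:          # relax edge u -> v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Assumed undirected graph matching the walkthrough above.
graph = {
    0: {1: 4, 7: 8},
    1: {0: 4, 2: 8, 7: 11},
    2: {1: 8, 3: 7, 5: 4, 8: 2},
    3: {2: 7, 4: 9, 5: 14},
    4: {3: 9, 5: 10},
    5: {2: 4, 3: 14, 4: 10, 6: 2},
    6: {5: 2, 7: 1, 8: 6},
    7: {0: 8, 1: 11, 6: 1, 8: 7},
    8: {2: 2, 6: 6, 7: 7},
}
dist = dijkstra(graph, 0)
# As in STEP 1, vertices 1 and 7 start at 4 and 8; here those tentative
# values are also their final shortest distances.
```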
Fig. 2 briefly describes the path framework related to that message from “Your computer” to another
computer.
Fig. 3 is the brief description of Fig. 1, where physical connection via telephone to ISP is easy to
guess, but it also requires some explanation. The ISP manages a pool of modems; this is handled
by some device that contains data flow from the modem pool to a spine or specific line router
(usually a specialized one). This configuration can be linked to a port server, as it offers network
connectivity. Information on billing and use is typically obtained here as well.
When the packets cross the phone framework and the nearby ISP equipment, they are
redirected to the ISP's mainline or infrastructure, from which the ISP purchases bandwidth.
From here the packets typically pass through many routers, backbones, unique lines, and
other networks to reach the target: the device with the other computer's address.
If you run Microsoft Windows or a flavour of Unix and have an Internet connection, you
can trace a packet's path using an Internet program known as traceroute. Just like the
PING command, it is run from the command prompt. Traceroute prints all the routers,
computer systems, and various other Internet entities through which the packet will travel
to reach its destination. The Internet routers decide how the packets travel onward.
Multiple routers are shown in Fig. 3; tracing through them helps in understanding the
networking structures clearly.
The infrastructure of the internet:
The framework of the Internet consists of multiple interconnected large networks, called
Network Service Providers (NSPs). UUNet, IBM, CerfNet, SprintNet, and PSINet are well-
known examples of NSPs. Packet traffic is exchanged among these peer networks. Each
NSP is required to connect to three Network Access Points (NAPs). At NAPs, packet traffic
may jump from one NSP's backbone to another's. NSPs also interconnect at Metropolitan
Area Exchanges (MAEs). MAEs serve the same purpose as NAPs but are privately owned.
RIPv1: Doesn't support authentication of update messages.
RIPv2: Supports authentication of update messages.
Consider the above-given topology which has 3-routers R1, R2, R3. R1 has IP
address 172.16.10.6/30 on s0/0/1, 192.168.20.1/24 on fa0/0. R2 has IP address
172.16.10.2/30 on s0/0/0, 192.168.10.1/24 on fa0/0. R3 has IP address
172.16.10.5/30 on s0/1, 172.16.10.1/30 on s0/0, 10.10.10.1/24 on fa0/0.
Configure RIP for R1 :
R1(config)# router rip
R1(config-router)# network 192.168.20.0
R1(config-router)# network 172.16.10.4
R1(config-router)# version 2
R1(config-router)# no auto-summary
Note: the no auto-summary command disables auto-summarisation. If auto-
summarisation is not disabled, subnet masks are summarised to their classful
boundaries, as in version 1.
Configuring RIP for R2:
R2(config)# router rip
R2(config-router)# network 192.168.10.0
R2(config-router)# network 172.16.10.0
R2(config-router)# version 2
R2(config-router)# no auto-summary
Similarly, Configure RIP for R3 :
R3(config)# router rip
R3(config-router)# network 10.10.10.0
R3(config-router)# network 172.16.10.4
R3(config-router)# network 172.16.10.0
R3(config-router)# version 2
R3(config-router)# no auto-summary
RIP timers:
Update timer: The default timing for routing information being exchanged by
the routers operating RIP is 30 seconds. Using an Update timer, the routers
exchange their routing table periodically.
Invalid timer: If no update arrives within 180 seconds, the destination
router considers the route invalid. In this scenario, the destination router marks
the hop count as 16 for that route.
Hold down timer: This is the time for which the router waits for a neighbour
router to respond. If the router isn't able to respond within the given time, it
is declared dead. It is 180 seconds by default.
Flush timer: This is the time after which the entry for a route is flushed if no
update arrives. This timer starts after the route has been declared invalid, and
after a further 60 seconds the route is removed, i.e. at 180 + 60 = 240 seconds.
Note that all these timers are adjustable. Use this command to change the
timers:
R1(config-router)# timers basic 20 80 80 90
Typical uses of RIP:
1. Small to medium-sized networks: RIP is normally used in small to
medium-sized networks that have relatively simple routing requirements. It
is easy to configure and requires little maintenance, which makes it a
popular choice for small organizations.
2. Legacy networks: RIP is still used in some legacy networks that
were set up before more advanced routing protocols were developed.
These networks may not justify the cost and effort of upgrading,
so they continue to use RIP as their routing protocol.
3. Lab environments: RIP is often used in lab environments for testing
and learning purposes. As a simple protocol that is easy to set up, it is a
good choice for educational purposes.
4. Backup or redundant routing: In some networks, RIP may be
used as a backup or redundant routing protocol, in case the primary
routing protocol fails or runs into problems. RIP is not as efficient as other
routing protocols, but it can be useful as a backup in an emergency.
Advantages of RIP :
Simplicity: RIP is a relatively simple protocol to configure and manage,
making it an ideal choice for small to medium-sized networks with limited
resources.
Easy implementation: RIP is easy to implement, as it does not require
much technical expertise to set up and maintain.
Convergence: RIP automatically converges on new routes after changes in
network topology, without manual intervention.
Automatic updates: RIP automatically updates routing tables at regular
intervals, ensuring that the most up-to-date information is being used to
route packets.
Low bandwidth overhead: RIP uses a relatively low amount of bandwidth
to exchange routing information, making it an ideal choice for networks with
limited bandwidth.
Compatibility: RIP is compatible with many different types of routers and
network devices, making it easy to integrate into existing networks.
Disadvantages of RIP :
Limited scalability: RIP has limited scalability, and it may not be the best
choice for larger networks with complex topologies. RIP can only support up
to 15 hops, which may not be sufficient for larger networks.
Slow convergence: While RIP is known for its fast convergence time, it can
be slower to converge than other routing protocols. This can lead to delays
and inefficiencies in network performance.
Routing loops: RIP can sometimes create routing loops, which can cause
network congestion and reduce overall network performance.
Limited support for load balancing: RIP does not support sophisticated
load balancing, which can result in suboptimal routing paths and uneven
network traffic distribution.
Security vulnerabilities: RIP does not provide any native security features,
making it vulnerable to attacks such as spoofing and tampering.
Inefficient use of bandwidth: RIP uses a lot of bandwidth for periodic
updates, which can be inefficient in networks with limited bandwidth.
Open Shortest Path First (OSPF) protocol
Open Shortest Path First (OSPF) is a link-state routing protocol that is used to
find the best path between the source and the destination router using its own
Shortest Path First (SPF) algorithm. OSPF was developed by the Internet
Engineering Task Force (IETF) as one of the Interior Gateway Protocols (IGPs),
i.e., the protocols that aim at moving packets within a large autonomous system
or routing domain. It is a network layer protocol that works on protocol number
89 and uses AD value 110. OSPF uses multicast address 224.0.0.5 for normal
communication and 224.0.0.6 for updates to the designated router (DR)/backup
designated router (BDR).
OSPF States
The device operating OSPF goes through certain states. These states are:
Down – In this state, no hello packets have been received on the interface.
Note – The Down state doesn't mean that the interface is physically
down. Here, it means that the OSPF adjacency process has not
started yet.
INIT – In this state, the hello packets have been received from the other
router.
2WAY – In the 2WAY state, both the routers have received the hello packets
from other routers. Bidirectional connectivity has been established.
Note – In between the 2WAY state and Exstart state, the DR and
BDR election takes place.
Exstart – In this state, NULL DBDs are exchanged and the master and slave
election takes place. The router with the higher router ID becomes the master
while the other becomes the slave. This election decides which router will send
its DBD first (only routers that have formed a neighbourship take part in this
election).
Exchange – In this state, the actual DBDs are exchanged.
Loading – In this state, LSR, LSU, and LSA (Link State Acknowledgement)
are exchanged.
Important – When a router receives a DBD from another router, it compares its
own DBD with the received one. If the received DBD is more up to date than its
own, the router sends an LSR to the other router stating which links are needed.
The other router replies with an LSU containing the updates that are needed. In
return, the router replies with a Link State Acknowledgement.
Full – In this state, synchronization of all the information takes place. OSPF
routing can begin only after the Full state.
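The normal progression through these states can be summarized with a small enum sketch in Python. This is a simplification: real OSPF adjacencies can also fall back to earlier states on events such as a sequence-number mismatch.

```python
from enum import Enum

class OspfState(Enum):
    """Neighbor states in the order an adjacency normally progresses
    (the DR/BDR election happens between TWO_WAY and EXSTART)."""
    DOWN = 1       # no hello packets received yet
    INIT = 2       # hello received from the other router
    TWO_WAY = 3    # both routers have seen each other's hellos
    EXSTART = 4    # master/slave elected via NULL DBDs
    EXCHANGE = 5   # actual DBDs exchanged
    LOADING = 6    # LSRs, LSUs, and LSAcks exchanged
    FULL = 7       # databases synchronized; routing can begin

def next_state(s: OspfState) -> OspfState:
    """Advance one step toward FULL; FULL is a fixed point here."""
    return s if s is OspfState.FULL else OspfState(s.value + 1)

s = OspfState.DOWN
while s is not OspfState.FULL:
    s = next_state(s)
```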
Broadcast Routing
Broadcast routing plays a key role in computer networking and
telecommunications. It involves transmitting data, messages, or signals from
one source to all destinations within a network. Unlike unicast routing (one-to-one
communication) or multicast routing (one-to-many communication), broadcast
routing ensures that information reaches all devices or nodes within the
network.
In this article, we will explore the world of broadcast routing in today’s era of
communication.
Broadcast
This protocol is not designed for WANs with a large number of nodes, since it can support
only a limited number of hops for a packet. This is clearly considered a drawback of this
protocol, as it causes packet discarding if a packet does not reach its destination within the
maximum number of hops set. Another constraint in this protocol is the periodic expansion
of a multicast tree, leading to periodic broadcasting of multicast data. This constraint in turn
causes periodic broadcasts of routing tables, which consume a large amount of the
available bandwidth. DVMRP supports only the source-based multicast tree. Thus, this protocol is
appropriate for multicasting data among a limited number of distributed receivers located
close to the source.
When a network host sends an IGMP message to its local multicast router, it in fact
identifies its group membership. Routers are usually sensitive to these types of messages
and periodically send out queries to discover which subnet groups are active. When a host
wants to join a group, it sends an IGMP message stating the group membership to the
group's multicast address. The local multicast router receives this message and constructs
all routes by propagating the group membership information to other multicast routers
throughout the network.
The IGMP packet format has several versions; Figure 15.4 shows version 3. The first 8 bits
indicate the message type, which may be one of membership query, membership report
v1, membership report v2, leave group, and membership report v3. Hosts send IGMP
membership reports corresponding to a particular multicast group, expressing an interest in
joining that group.
IGMP is compatible with TCP/IP, so the TCP/IP stack running on a host forwards the IGMP
membership report when an application opens a multicast socket. A router periodically
transmits an IGMP membership query to verify that at least one host on the subnet is still
interested in receiving traffic directed to that group. In this case, if no response to three
consecutive IGMP membership queries is received, the router times out the group and
stops forwarding traffic directed toward that group. (Note that v i refers to the version of the
protocol the membership report belongs to.)
IGMP version 3 supports include and exclude modes. In include mode, a receiver
announces the membership to a host group and provides a list of source addresses from
which it wants to receive traffic. With exclude mode, a receiver expresses the membership
to a multicast group and provides a list of source addresses from which it does not want to
receive traffic. With the leave group message, hosts can report to the local multicast router
that they intend to leave the group. If any remaining hosts are interested in receiving the
traffic, the router transmits a group-specific query. In this case, if the router receives no
response, the router times out the group and stops forwarding the traffic.
The next 8 bits, max response time, are used to indicate the time before sending a
response report (default, 10 seconds). The Resv field is set to 0 and is reserved for future
development. The next field is the S flag, used to suppress router-side processing. QRV
indicates the querier's robustness variable. QQIC is the querier's query interval code. N
shows the number of sources, and Source Address[i] provides a vector of N individual IP addresses.
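These fields can be illustrated by unpacking a version 3 membership query in Python. The group and source addresses below are example values (192.0.2.0/24 is a documentation prefix).

```python
import socket
import struct

def parse_igmpv3_query(data: bytes):
    """Parse an IGMPv3 membership query (RFC 3376, type 0x11)."""
    msg_type, max_resp, checksum = struct.unpack("!BBH", data[:4])
    group = socket.inet_ntoa(data[4:8])
    flags, qqic, n = struct.unpack("!BBH", data[8:12])
    s_flag = (flags >> 3) & 0x1   # S: suppress router-side processing
    qrv = flags & 0x7             # querier's robustness variable
    sources = [socket.inet_ntoa(data[12 + 4 * i: 16 + 4 * i])
               for i in range(n)]
    return msg_type, max_resp, group, s_flag, qrv, qqic, sources

# Group-and-source-specific query: group 224.1.1.1, QRV 2, QQIC 125,
# one source address listed.
raw = (struct.pack("!BBH", 0x11, 100, 0)
       + socket.inet_aton("224.1.1.1")
       + struct.pack("!BBH", 0x02, 125, 1)
       + socket.inet_aton("192.0.2.1"))
t, mrt, grp, s, qrv, qqic, srcs = parse_igmpv3_query(raw)
```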
Link-State Multicast
As explained in Chapter 7, link-state routing occurs when a node in a network has to obtain
the state of its connected links and then send an update to all the other routers once the
state changes. On receipt of the routing information, each router reconfigures the entire
topology of the network. The link-state routing algorithm uses Dijkstra's algorithm to
compute the least-cost path.
The multicast feature can be added to a router using the link-state routing algorithm by
placing the spanning tree root at the router. The router uses the tree to identify the best next
node. To include the capability of multicast support, the link-state router needs the set of
groups that have members on a particular link added to the state for that link. Therefore,
each LAN attached to the network must have its hosts periodically announce all the groups
they belong to. This way, a router simply detects such announcements from LANs and
updates its routing table.
Details of MOSPF
With OSPF, a router uses the flooding technique to send a packet to all routers within the
same hierarchical area and to all other network routers. This simply allows all MOSPF
routers in an area to have the same view of group membership. Once a link-state table is
created, the router computes the shortest path to each multicast member by using Dijkstra's
algorithm. For a domain with n LANs, the protocol uses the notation Ri = router attached to
LAN Ni.
MOSPF adds a link-state field, mainly membership information about the group of LANs
needing to receive the multicast packet. MOSPF also uses Dijkstra's algorithm and
calculates a shortest-path multicast tree. MOSPF does not flood multicast traffic everywhere
to create state. Dijkstra's algorithm must be rerun when group membership changes.
MOSPF does not support sparse-mode tree (shared-tree) algorithm. Each OSPF router
builds the unicast routing topology, and each MOSPF router can build the shortest-path tree
for each source and group. Group-membership reports are broadcast throughout the OSPF
area. MOSPF is a dense-mode protocol, and the membership information is broadcast to all
MOSPF routers. Note that frequent broadcasting of membership information degrades
network performance.
Example. In Figure 15.5, each of the five LANs in a certain domain has an associated
router. Show an example of MOSPF for multicasting from router R1 to seven
servers located in LANs 1, 2, and 3.
Figure 15.5. Use of MOSPF to multicast from router R1 to seven servers located
in two different multicast groups spread over three LANs
Solution. Multicast groups 1 and 2 are formed. For group 1, the best tree is implemented
using copying root R4. For group 2, the best tree is implemented using copying
root R7.
15.2.4. Protocol-Independent Multicast (PIM)
Protocol-independent multicast (PIM) is an excellent multicast protocol for networks,
regardless of size and membership density. PIM is "independent" because it implements
multicasting independently of any routing protocol entering into the multicast routing
information database. PIM can operate in both dense mode and sparse mode. Dense-
mode PIM is a flood-and-prune protocol and is best suited for networks densely populated
by receivers and with enough bandwidth; this version of PIM is comparable to DVMRP.
Sparse-mode PIM typically offers better stability because of the way it establishes trees.
This version assumes that systems can be located far away from one another and that
group members are "sparsely" distributed across the system.
Sparse-mode PIM first creates a shared tree, followed by one or more source-specific trees
if there is enough traffic demand. Routers join and leave the multicast group by means of
join and prune protocol messages. These messages are sent to a rendezvous
router allocated to each multicast group. The rendezvous point is chosen by all the routers
in a group and is treated as a meeting place for sources and receivers. Receivers join the
shared tree, and sources register with that rendezvous router. Note that the shortest-path
algorithm made by the unicast routing is used in this protocol for the construction of trees.
Each source registers with its rendezvous router and sends a single copy of multicast data
through it to all registered receivers. Group members register to receive data through the
rendezvous router. This registration process triggers the shortest-path tree between the
source and the rendezvous router. Each source transmits multicast data packets
encapsulated in unicast packets to the rendezvous router. If receivers have joined the group
in the rendezvous router, the encapsulation is stripped off the packet, and it is sent on the
shared tree. This kind of multicast forwarding state between the source and the rendezvous
router enables the rendezvous router to receive the multicast traffic from the source and to
avoid the overhead of encapsulation.
Example. In Figure 15.6, the five LANs in a certain domain each have an associated router. Show an example of sparse-mode PIM for multicasting from router R1 to four servers located in a multicast group spread over LANs 2 and 3.
Figure 15.6. Sparse-mode PIM for multicasting from router R1 to four servers in a multicast group spread over LANs 2 and 3
Solution. Multicasting is formed with R3 as the associated rendezvous router for this group. Thus, the group uses a reverse unicast path as shown to update the rendezvous router of any joins and leaves. For this group, the multicast tree is implemented using copying root R7.
15.2.5. Core-Based Trees (CBT) Protocol
In sparse mode, forwarding traffic to the rendezvous router and then to receivers causes
delay and longer paths. This issue can be partially solved in the core-based tree (CBT)
protocol, which uses bidirectional trees. Sparse-mode PIM is comparable to CBT but with
two differences. First, CBT uses bidirectional shared trees, whereas sparse-mode PIM uses
unidirectional shared trees. Clearly, bidirectional shared trees are more efficient when
packets move from a source to the root of the multicast tree; as a result, packets can be
sent up and down in the tree. Second, CBT uses only a shared tree and does not use
shortest-path trees.
CBT is comparable to DVMRP in WANs and uses the basic sparse-mode paradigm to
create a single shared tree used by all sources. As with the shared-trees algorithm, the tree
must be rooted at a core point. All sources send their data to the core point, and all
receivers send explicit join messages to the core. In contrast, DVMRP is costly, since it
broadcasts packets, causing each participating router to become overwhelmed and thereby
requiring keeping track of every source-group pair. CBT's bidirectional routing takes into
account the current group membership when it is being established.
When broadcasting occurs, it results in traffic concentration on a single link, since CBT has
a single delivery tree for each group. Although it is designed for intradomain multicast
routing, this protocol is appropriate for interdomain applications as well. However, it can
suffer from latency issues, as in most cases, the traffic must flow through the core router.
This type of router is a bottleneck if its location is not carefully selected, especially when
transmitters and receivers are far apart.
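The tree-building idea shared by these multicast protocols can be sketched in a few lines: compute a shortest-path tree from the unicast routing topology, then prune branches that lead to no group member (as in dense-mode flood-and-prune). The toy topology and router names below are made up for illustration, and unit link costs are assumed, so BFS suffices for shortest paths:

```python
from collections import deque

def shortest_path_tree(graph, source):
    """BFS parent pointers give a shortest-path tree (unit link costs assumed)."""
    parent, queue = {source: None}, deque([source])
    while queue:
        node = queue.popleft()
        for neigh in graph[node]:
            if neigh not in parent:
                parent[neigh] = node
                queue.append(neigh)
    return parent

def prune(parent, members):
    """Keep only branches that lead to a group member (dense-mode pruning)."""
    keep = set()
    for m in members:
        while m is not None and m not in keep:
            keep.add(m)
            m = parent[m]
    return keep

# Toy router topology; R1 is the source, R3 and R5 attach group members,
# R6 has no members and is pruned from the delivery tree.
graph = {"R1": ["R2", "R4"], "R2": ["R1", "R3", "R6"], "R3": ["R2"],
         "R4": ["R1", "R5"], "R5": ["R4"], "R6": ["R2"]}
tree = shortest_path_tree(graph, "R1")
on_tree = prune(tree, {"R3", "R5"})
print(on_tree)   # routers left on the multicast delivery tree
```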
IPv6 Addressing, IPv6 Protocol, Transition from IPv4 to IPv6. Transport Layer
Services, connectionless versus connection-oriented protocols. Transport Layer
Protocols: Simple Protocol, Stop and Wait, Go-Back-N, Selective Repeat,
Piggybacking. UDP: User datagram, Services, Applications. TCP: TCP services, TCP
features, segment, A TCP connection, Flow control, error control, congestion control.
IPv6 Addressing
IPv6 (Internet Protocol version 6) introduces significant changes in addressing compared
to its predecessor, IPv4. Here are the key aspects of IPv6 addressing:
1. Length:
IPv6 addresses are 128 bits long, compared to the 32-bit addresses in IPv4.
Written in hexadecimal, each digit represents 4 bits.
2. Notation:
IPv6 addresses are typically written as eight groups of four hexadecimal digits
separated by colons (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334).
3. Blocks and Groups:
IPv6 addresses are divided into eight 16-bit blocks.
These blocks are separated by colons.
4. Compression:
Consecutive blocks of zeros can be compressed with :: (double colons) once in an
IPv6 address.
5. Address Types:
Unicast Address: Identifies a single interface.
Multicast Address: Used to send data to multiple interfaces.
Anycast Address: Assigned to multiple interfaces, and the data is sent to the
nearest one.
IPv6 addresses are designed to accommodate the growing number of devices on the
Internet and overcome the limitations of IPv4 addressing. The expanded address space
and improved features make IPv6 a crucial part of the future of networking.
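Python's standard `ipaddress` module applies these notation rules automatically, which makes it handy for checking the compression and address-type rules above (the sample addresses are illustrative):

```python
import ipaddress

# Parse the full (uncompressed) form of an IPv6 address.
addr = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

# str() applies the canonical compressed notation: leading zeros in each
# block are dropped and the longest run of zero blocks becomes "::".
print(str(addr))        # 2001:db8:85a3::8a2e:370:7334

# .exploded restores all eight 4-digit blocks.
print(addr.exploded)    # 2001:0db8:85a3:0000:0000:8a2e:0370:7334

# Address-type checks follow the prefixes described above.
print(ipaddress.IPv6Address("ff02::1").is_multicast)   # True (ff00::/8)
print(ipaddress.IPv6Address("fe80::1").is_link_local)  # True
```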
ADDRESSING
1. Address Format:
128-Bit Addresses:
IPv6 addresses are 128 bits long, providing an exponentially larger address space
compared to IPv4's 32-bit addresses.
Hexadecimal Notation:
IPv6 addresses are represented in hexadecimal, making them more human-
readable.
Blocks and Colons:
IPv6 addresses are divided into eight 16-bit blocks, separated by colons.
Example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
Zero Compression:
Consecutive blocks of zeros can be compressed with ::, but this can only be used
once in an address.
2. Types of Addresses:
Unicast:
Identifies a single network interface.
Global unicast, link-local, and unique local are common types.
Multicast:
Used to send data to multiple recipients simultaneously.
Starts with the prefix ff00::/8.
Anycast:
Assigned to multiple interfaces, and the data is delivered to the nearest
(topologically closest) one.
3. Address Configuration:
Hosts can configure addresses automatically via Stateless Address
Autoconfiguration (SLAAC) or obtain them from a DHCPv6 server.
4. Router Advertisements:
Router Discovery:
Routers periodically send Router Advertisement (RA) messages to announce their
presence on the network.
Prefix Information:
Routers include prefix information in RAs, which hosts use to form their IPv6
addresses.
5. Header Simplification:
Simplified Header:
The IPv6 header is simpler than IPv4, reducing processing overhead on routers
and switches.
No Broadcast:
IPv6 eliminates broadcast communication in favor of multicast and anycast.
6. IPsec Integration:
IPsec Support:
IPv6 includes IPsec (Internet Protocol Security) as a mandatory part of the
protocol suite, enhancing security features.
7. Transition Mechanisms:
Dual-Stack:
Both IPv4 and IPv6 coexist on the same network during the transition period.
Tunneling:
IPv6 packets are encapsulated within IPv4 packets for transmission over IPv4
networks.
IPv6 is gradually being adopted globally to accommodate the growing number of devices
connected to the internet and overcome the limitations posed by IPv4 address exhaustion.
The transition to IPv6 is an essential step for the continued expansion and sustainability
of the internet.
In the current scenario, the IPv4 address space is exhausted, and IPv6 has been
introduced to overcome this limit.
Many organizations still run on IPv4 technology, and we cannot switch directly
from IPv4 to IPv6 in one day. Instead of using only IPv6, we use a combination
of both; transition means not replacing IPv4 but letting both coexist.
A request cannot be sent directly from an IPv4 address to an IPv6 address,
because the two versions are not compatible with each other. To solve this
problem, we use some transition technologies: Dual-Stack Routers, Tunneling,
and NAT Protocol Translation. These are explained as following below.
1. Dual-Stack Routers:
In a dual-stack router, a router's interface is configured with both IPv4 and
IPv6 addresses in order to transition from IPv4 to IPv6.
In the above diagram, a server with both IPv4 and IPv6 addresses configured can
communicate with all IPv4 and IPv6 hosts via the dual-stack router (DSR). The
dual-stack router gives all the hosts a path to communicate with the server
without changing their IP addresses.
2. Tunneling:
Tunneling is used as a medium to let networks of different IP versions
communicate across a transit network.
In the above diagram, both IP versions, IPv4 and IPv6, are present. The IPv4
networks can communicate across a transit or intermediate IPv6 network with the
help of a tunnel. Likewise, an IPv6 network can communicate across IPv4
networks with the help of a tunnel.
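Tunneling can be sketched concretely: a 6in4 tunnel endpoint wraps the whole IPv6 packet inside an IPv4 header whose protocol field is 41 (the IANA number for encapsulated IPv6). The sketch below is illustrative rather than a production implementation; the addresses and the dummy payload are made up:

```python
import struct, socket

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words, per RFC 791."""
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                  # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate_6in4(ipv6_packet: bytes, src_v4: str, dst_v4: str) -> bytes:
    """Wrap an IPv6 packet in an IPv4 header with protocol 41 (IPv6-in-IPv4)."""
    total_len = 20 + len(ipv6_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45,                 # version 4, IHL 5 (20-byte header)
        0,                    # DSCP/ECN
        total_len,            # total length
        0, 0,                 # identification, flags/fragment offset
        64,                   # TTL
        41,                   # protocol 41 = encapsulated IPv6
        0,                    # checksum placeholder
        socket.inet_aton(src_v4),
        socket.inet_aton(dst_v4),
    )
    csum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:] + ipv6_packet

# A 40-byte dummy IPv6 header stands in for a real packet.
packet = encapsulate_6in4(b"\x60" + b"\x00" * 39, "192.0.2.1", "198.51.100.1")
print(packet[9])  # 41 -> the tunnel endpoint knows an IPv6 packet is inside
```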
3. NAT Protocol Translation:
With the NAT Protocol Translation (NAT-PT) technique, IPv4 and IPv6 networks
that do not understand each other's addresses can also communicate with each
other.
Generally, one IP version does not understand the addresses of the other IP
version. To solve this problem, a NAT-PT device removes the header of the
sender's IP version and adds the header of the receiver's IP version, so that
the receiver treats the request as if it were sent by its own IP version; the
reverse direction is also possible.
In the above diagram, an IPv4 host communicates with an IPv6 host via a NAT-PT
device. In this situation, the IPv6 host understands the request as coming from
the same IP version (IPv6) and responds.
The Transport Layer is the second layer in the TCP/IP model and the fourth
layer in the OSI model. It is an end-to-end layer used to deliver messages to a
host: it provides a point-to-point connection between the source host and
destination host, rather than hop-to-hop delivery, to deliver its services
reliably. The unit of data encapsulation in the Transport Layer is a segment.
Working of Transport Layer
The transport layer takes services from the Application layer and provides services to
the Network layer.
At the sender’s side: The transport layer receives data (message) from the Application
layer and then performs Segmentation, divides the actual message into segments, adds
the source and destination’s port numbers into the header of the segment, and transfers
the message to the Network layer.
At the receiver’s side: The transport layer receives data from the Network layer,
reassembles the segmented data, reads its header, identifies the port number, and
forwards the message to the appropriate port in the Application layer.
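The sender-side segmentation and receiver-side reassembly described above can be sketched as follows; the dictionary header fields and the tiny 4-byte segment size are simplified stand-ins for a real transport header:

```python
import random

def segment(message: bytes, src_port: int, dst_port: int, mss: int = 4):
    """Sender side: split a message into segments, each tagged with the
    source/destination port numbers and a sequence number."""
    return [
        {"src": src_port, "dst": dst_port, "seq": i, "data": message[i:i + mss]}
        for i in range(0, len(message), mss)
    ]

def reassemble(segments):
    """Receiver side: order by sequence number and concatenate the payloads."""
    return b"".join(s["data"] for s in sorted(segments, key=lambda s: s["seq"]))

segments = segment(b"hello transport layer", 52000, 80)
random.shuffle(segments)            # segments may arrive out of order
print(reassemble(segments))         # b'hello transport layer'
```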
Responsibilities of a Transport Layer
The Process to Process Delivery
End-to-End Connection between Hosts
Multiplexing and Demultiplexing
Congestion Control
Data integrity and Error correction
Flow control
1. The Process to Process Delivery
While Data Link Layer requires the MAC address (48 bits address contained inside the
Network Interface Card of every host machine) of source-destination hosts to correctly
deliver a frame and the Network layer requires the IP address for appropriate routing of
packets, in a similar way Transport Layer requires a Port number to correctly deliver
the segments of data to the correct process amongst the multiple processes running on a
particular host. A port number is a 16-bit address used to identify any client-server
program uniquely.
Process to Process Delivery
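The role of port numbers can be demonstrated with two UDP sockets on one machine, standing in for two processes; the OS tells them apart purely by the 16-bit port number (a loopback-only sketch):

```python
import socket

# Two sockets on the same host stand in for two processes; delivery to the
# right one is decided purely by port number, the transport layer's job.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
recv_port = receiver.getsockname()[1]    # the 16-bit port number assigned

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", recv_port))

data, (addr, port) = receiver.recvfrom(1024)
print(data, "from port", port)           # the sender's ephemeral port

sender.close()
receiver.close()
```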
4. Congestion Control
Congestion is a situation in which too many sources over a network attempt to
send data and the router buffers start overflowing, causing loss of packets. As
a result, the retransmission of packets from the sources increases the
congestion further. In this situation, the Transport layer provides congestion
control in different ways: it uses open-loop congestion control to prevent
congestion and closed-loop congestion control to remove congestion from a
network once it has occurred. TCP provides AIMD (additive increase,
multiplicative decrease) and the leaky bucket technique for congestion control.
Both connection-oriented and connectionless services are used for connection
establishment between two or more devices. These types of services are offered
by the network layer.
Connection-oriented service is related to the telephone system. It includes connection
establishment and connection termination. In a connection-oriented service, the
Handshake method is used to establish the connection between sender and receiver.
Connection-less service is related to the postal system. It does not include any
connection establishment and connection termination. Connection-less Service does not
give a guarantee of reliability. In this, Packets do not follow the same path to reach their
destination.
Examples: TCP (Transmission Control Protocol) for connection-oriented service; UDP (User Datagram Protocol) for connectionless service.
The transport layer is the fourth layer in the OSI model and the second layer
in the TCP/IP model. It provides an end-to-end connection between the source
and the destination and reliable delivery of services; therefore, the transport
layer is known as the end-to-end layer. The transport layer takes services from
the layer above it, the application layer, and provides services to the network
layer. The segment is the unit of data encapsulation at the transport layer.
Functions of Transport Layer
Process to process delivery
End-to-end connection between devices
Multiplexing and Demultiplexing
Data integrity and error Correction
Congestion Control
Flow Control
Transport Layer Protocols
The transport layer is represented majorly by TCP and UDP protocols. Today almost all
operating systems support multiprocessing multi-user environments. This transport
layer protocol provides connections to the individual ports. These ports are known as
protocol ports. Transport layer protocols work above the IP protocols and deliver the
data packets from IP serves to destination port and from the originating port to
destination IP services. Below are the protocols used at the transport layer.
1. UDP
UDP stands for User Datagram Protocol. User Datagram Protocol provides a
nonsequential transmission of data. It is a connectionless transport protocol. UDP
protocol is used in applications where the speed and size of data transmitted is
considered as more important than the security and reliability. User Datagram is defined
as a packet produced by User Datagram Protocol. UDP protocol adds checksum error
control, transport level addresses, and information of length to the data received from
the layer above it. Services provided by User Datagram Protocol(UDP) are
connectionless service, faster delivery of messages, checksum, and process-to-process
communication.
Advantages of UDP
UDP also provides multicast and broadcast transmission of data.
UDP protocol is preferred more for small transactions such as DNS lookup.
It is a connectionless protocol, therefore there is no compulsion to have a
connection-oriented network.
UDP provides fast delivery of messages.
Disadvantages of UDP
In UDP there is no guarantee that a packet is delivered.
UDP has no recovery mechanism for lost packets.
UDP has no congestion control mechanism.
UDP does not provide sequential transmission of data.
2. TCP
TCP stands for Transmission Control Protocol. TCP provides transport layer
services to applications and is a connection-oriented protocol: a reliable
connection is established between the sender and the receiver by setting up a
virtual circuit between them. The data transmitted by TCP is a continuous
stream of bytes, and a unique sequence number is assigned to each byte. With
the help of this sequence number, a positive acknowledgment is received from
the receiver. If the acknowledgment is not received within a specific period,
the data is retransmitted to the specified destination.
Advantages of TCP
TCP supports multiple routing protocols.
TCP protocol operates independently of that of the operating system.
TCP protocol provides the features of error control and flow control.
TCP is a connection-oriented protocol and provides guaranteed delivery of data.
Disadvantages of TCP
TCP protocol cannot be used for broadcast or multicast transmission.
TCP protocol has no block boundaries.
No clear separation is offered by TCP between its interfaces, services, and
protocols.
Replacing a protocol within the TCP/IP stack is difficult.
3. SCTP
SCTP stands for Stream Control Transmission Protocol. SCTP is a
connection-oriented protocol that transmits data between sender and receiver in
full-duplex mode. SCTP is a unicast protocol that provides a point-to-point
connection and can use different paths for reaching the destination. SCTP
provides a simpler way to build a connection over a wireless network and a
reliable transmission of data, enabling reliable and easier telephone
conversations over the Internet. SCTP supports multihoming, i.e., it can
establish more than one connection path between the two points of
communication, and does not depend on the IP layer.
Advantages of SCTP
SCTP provides a full duplex connection. It can send and receive the data
simultaneously.
SCTP protocol possesses the properties of both TCP and UDP protocol.
SCTP protocol does not depend on the IP layer.
SCTP is a secure protocol.
Disadvantages of SCTP
To handle multiple streams simultaneously the applications need to be modified
accordingly.
The transport stack on the node needs to be changed for the SCTP protocol.
Modification is required in applications if SCTP is used instead of TCP or UDP
protocol.
Useful Terms (Stop-and-Wait):
Receiver rules:
Rule 1) Send an acknowledgement after receiving and consuming a data packet.
Rule 2) After consuming a packet, an acknowledgement needs to be sent (flow control).
Problems:
1. Lost Data: resolved by a timeout at the sender, which triggers retransmission.
2. Lost Acknowledgement: resolved by a timeout plus sequence numbers on data
packets, so the receiver can discard the duplicate caused by the retransmission.
3. Delayed Acknowledgement: resolved by introducing sequence numbers for
acknowledgements as well.
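The lost-acknowledgement problem and its fix can be sketched in a small simulation; the alternating 1-bit sequence number lets the receiver discard the duplicate created when a lost ACK forces a retransmission (a simplified model, with the loss injected at a fixed, made-up point):

```python
def stop_and_wait(frames, lose_ack_at=1):
    """Simulate Stop-and-Wait with 1-bit sequence numbers and one lost ACK."""
    delivered, expected_seq = [], 0
    for i, data in enumerate(frames):
        seq = i % 2                       # alternating 0/1 sequence number
        attempts = 0
        while True:
            attempts += 1
            # Receiver: accept only the frame it expects; a duplicate caused
            # by a retransmission is re-ACKed but not delivered twice.
            if seq == expected_seq:
                delivered.append(data)
                expected_seq = 1 - expected_seq
            # Sender: a lost ACK triggers a timeout and a retransmission.
            ack_lost = (i == lose_ack_at and attempts == 1)
            if not ack_lost:
                break
    return delivered

print(stop_and_wait(["f0", "f1", "f2"]))  # no duplicates despite the lost ACK
```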
Selective Repeat
Why Selective Repeat Protocol? The go-back-n protocol works well if errors are less,
but if the line is poor it wastes a lot of bandwidth on retransmitted frames. An
alternative strategy, the selective repeat protocol, is to allow the receiver to accept and
buffer the frames following a damaged or lost one. Selective Repeat attempts to
retransmit only those packets that are actually lost (due to errors) :
Receiver must be able to accept packets out of order.
Since receiver must release packets to higher layer in order, the receiver must be
able to buffer some packets.
Retransmission requests :
Implicit – The receiver acknowledges every good packet; packets that are not
ACKed before a time-out are assumed lost or in error. Notice that this approach
must be used to be sure that every packet is eventually received.
Explicit – An explicit NAK (selective reject) can request retransmission of just one
packet. This approach can expedite the retransmission but is not strictly needed.
One or both approaches are used in practice.
Selective Repeat Protocol (SRP): This protocol is mostly identical to the GBN
protocol, except that buffers are used and the receiver, like the sender,
maintains a window of the same size. SRP works better when the link is very
unreliable: because retransmission then tends to happen more frequently,
selectively retransmitting frames is more efficient than retransmitting all of
them. SRP also requires a full-duplex link, since backward acknowledgements are
also in progress.
Sender’s window size (Ws) = receiver’s window size (Wr).
Window size should be less than or equal to half the sequence number in SR
protocol. This is to avoid packets being recognized incorrectly. If the size of the
window is greater than half the sequence number space, then if an ACK is lost, the
sender may send new packets that the receiver believes are retransmissions.
Sender can transmit new packets as long as their number is within W of all
unACKed packets.
Sender retransmit un-ACKed packets after a timeout – Or upon a NAK if NAK is
employed.
Receiver ACKs all correct packets.
Receiver stores correct packets until they can be delivered in order to the higher
layer.
In Selective Repeat ARQ, the size of the sender and receiver window must be at
most one-half of 2^m.
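The receiver-side buffering described in the points above can be sketched as follows (sequence-number wraparound is ignored for simplicity, and the arrival order is made up):

```python
def sr_receiver(arrivals):
    """Selective Repeat receiver: buffer out-of-order frames, then release
    the longest in-order run to the higher layer."""
    buffer, delivered, next_seq = {}, [], 0
    for seq, data in arrivals:
        if seq >= next_seq:               # ignore already-delivered duplicates
            buffer[seq] = data
        while next_seq in buffer:         # deliver in order
            delivered.append(buffer.pop(next_seq))
            next_seq += 1
    return delivered

# Frame 1 is lost at first, then retransmitted after a NAK/timeout;
# frames 2 and 3 are buffered in the meantime.
arrivals = [(0, "a"), (2, "c"), (3, "d"), (1, "b")]
print(sr_receiver(arrivals))   # ['a', 'b', 'c', 'd']
```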
Figure – the sender only retransmits frames for which a NAK is received.
The efficiency of the Selective Repeat Protocol (SRP) is the same as Go-Back-N's efficiency:
Efficiency = N/(1+2a)
Where a = Propagation delay / Transmission delay
Buffers = N + N
Sequence numbers = N (sender side) + N (receiver side)
If Tt(ack) (transmission delay for acknowledgment), Tq (queuing delay), and
Tpro (processing delay) are mentioned, then the efficiency (η) is
Efficiency (η) = Useful time / Total cycle time
= Tt(data) / (Tt(data) + 2*Tp + Tq + Tpro + Tt(ack))
Tt(data) : Transmission delay for Data packet
Tp : propagation delay for Data packet
Tq: Queuing delay
Tpro: Processing delay
Tt(ack): Transmission delay for acknowledgment
Above formula is applicable for any condition, if any of the things are not given we
assume it to be 0.
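A worked example of the formula above, with made-up delay values and the unspecified delays assumed to be 0 as the text allows:

```python
def efficiency(tt_data, tp, tq=0.0, tpro=0.0, tt_ack=0.0):
    """Efficiency = useful time / total cycle time (all times in ms)."""
    return tt_data / (tt_data + 2 * tp + tq + tpro + tt_ack)

# Made-up values: 1 ms to transmit a frame, 2 ms propagation each way,
# everything else assumed 0.
print(efficiency(tt_data=1.0, tp=2.0))   # 0.2
# Equivalently, with a = Tp/Tt = 2, the N/(1+2a) form for N = 1 gives 1/5.
print(1 / (1 + 2 * 2.0))                 # 0.2
```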
Piggybacking
Piggybacking is the technique of delaying outgoing acknowledgment and attaching it to
the next data packet.
When a data frame arrives, the receiver waits and does not send the control frame
(acknowledgment) back immediately. The receiver waits until its network layer moves
to the next data packet. Acknowledgment is associated with this outgoing data frame.
Thus the acknowledgment travels along with the next data frame. This technique in
which the outgoing acknowledgment is delayed temporarily is called Piggybacking.
Networking Communication
Sliding window algorithms are methods of flow control for network data transfer.
The data link layer allows the sender to have more than one unacknowledged
packet in flight at a time, which improves network throughput. Both the sender and receiver maintain a
finite-size buffer to hold outgoing and incoming packets from the other side. Every
packet sent by the sender must be acknowledged by the receiver. The sender maintains
a timer for every packet sent, and any packet unacknowledged at a certain time is
resent. The sender may send a whole window of packets before receiving an
acknowledgment for the first packet in the window. This results in higher transfer rates,
as the sender may send multiple packets without waiting for each packet’s
acknowledgment. The receiver advertises a window size that tells the sender not to fill
up the receiver buffers.
How To Increase Network Efficiency?
Efficiency can be improved by making use of the full-duplex transmission mode.
Full-duplex transmission is simultaneous two-way communication, meaning both
directions can be used at once, just like using two half-duplex transmission
nodes. It provides better performance than the simplex and half-duplex
transmission modes.
Why Piggybacking?
Efficiency can also be improved by making use of full-duplex transmission. Full
Duplex transmission is a transmission that happens with the help of two half-duplex
transmissions which helps in communication in both directions. Full Duplex
Transmission is better than both simplex and half-duplex transmission modes.
There are two ways through which we can achieve full-duplex transmission:
1. Two Separate Channels: One way to achieve full-duplex transmission is to have
two separate channels with one for forwarding data transmission and the other for
reverse data transfer (to accept). But this will almost completely waste the bandwidth of
the reverse channel.
2. Piggybacking: A preferable solution is to use the same channel to carry
frames in both directions, with both directions sharing the channel's capacity.
Assume that A and B are users. Then the data frames from A to B are intermixed
with the acknowledgments from B to A. Each frame can be identified as a data
frame or an acknowledgment by checking the kind field in the header of the
received frame.
One more improvement can be made. When a data frame arrives, the receiver waits and
does not send the control frame (acknowledgment) back immediately. The receiver
waits until its network layer moves to the next data packet.
Acknowledgment is associated with this outgoing data frame. Thus the
acknowledgment travels along with the next data frame.
Working of Piggybacking
As the figure shows, with piggybacking a single message (ACK + DATA) crosses
the wire in place of two separate messages. Piggybacking improves the
efficiency of bidirectional protocols.
If Host A has both an acknowledgment and data to send, the data frame will
be sent with the ack field containing the sequence number of the frame being
acknowledged.
If Host A has only an acknowledgment, it will wait for some time; if a data
frame becomes ready within that time, it piggybacks the acknowledgment on
it, otherwise it sends a standalone ACK frame.
If Host A has only a data frame, it repeats the last acknowledgment in it;
alternatively, the data frame can be sent with an ack field flagged as
containing no acknowledgment.
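The cases above can be sketched as a small decision function; the frame fields are hypothetical, not a real frame format:

```python
def next_frame(outgoing_data, pending_ack):
    """Decide what to put on the wire, following the piggybacking cases."""
    if outgoing_data is not None and pending_ack is not None:
        # Data and ACK ready: one combined frame instead of two.
        return {"kind": "data", "data": outgoing_data, "ack": pending_ack}
    if outgoing_data is not None:
        # Data only: ack field carries no new acknowledgment.
        return {"kind": "data", "data": outgoing_data, "ack": None}
    if pending_ack is not None:
        # ACK only and the wait timer ran out: a bare ACK frame.
        return {"kind": "ack", "ack": pending_ack}
    return None

print(next_frame(b"payload", pending_ack=7))   # combined data + ACK frame
print(next_frame(None, pending_ack=7))         # standalone ACK frame
```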
Advantages of Piggybacking
1. The major advantage of piggybacking is better use of available channel
bandwidth, because an acknowledgment frame need not be sent separately.
2. Usage cost reduction.
3. Reduces the latency of data transfer.
4. To avoid the delay and rebroadcast of frame transmission, piggybacking uses a very
short-duration timer.
Disadvantages of Piggybacking
1. The disadvantage of piggybacking is the additional complexity.
2. If the data link layer waits too long before transmitting the
acknowledgment (holds the ACK back), the sender will time out and retransmit
the frame.
Conclusion
Piggybacking improves channel utilization by carrying acknowledgments inside
outgoing data frames instead of sending them separately, at the cost of extra
complexity and careful management of the acknowledgment wait timer.
1. Source Port: Source Port is a 2 Byte long field used to identify the port number of
the source.
2. Destination Port: It is a 2 Byte long field, used to identify the port of the destined
packet.
3. Length: Length is the length of UDP including the header and the data. It is a 16-
bits field.
4. Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header of information from
the IP header, and the data, padded with zero octets at the end (if necessary) to make
a multiple of two octets.
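The checksum computation described in point 4 can be written out directly (the one's complement of the one's-complement sum over the pseudo-header, UDP header, and data, zero-padded to an even length, per RFC 768); the addresses, ports, and payload below are made up:

```python
import struct, socket

def udp_checksum(src_ip, dst_ip, src_port, dst_port, data: bytes) -> int:
    length = 8 + len(data)                        # UDP header + payload
    # Pseudo-header: source IP, destination IP, zero byte, protocol 17, length.
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, length))
    # UDP header with the checksum field set to 0 for the computation.
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    blob = pseudo + header + data
    if len(blob) % 2:                             # pad to a multiple of 2 octets
        blob += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(blob) // 2), blob))
    while total >> 16:                            # fold carries (one's complement)
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

csum = udp_checksum("192.0.2.1", "192.0.2.2", 53, 4096, b"hi")
print(hex(csum))
```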
Notes – Unlike TCP, checksum calculation is not mandatory in UDP. No error
control or flow control is provided by UDP; hence UDP depends on IP and ICMP
for error reporting. UDP does provide port numbers, so that it can
differentiate between user requests.
Applications of UDP:
Used for simple request-response communication when the size of data is less and
hence there is lesser concern about flow and error control.
It is a suitable protocol for multicasting as UDP supports packet switching.
UDP is used for some routing update protocols like RIP(Routing Information
Protocol).
Normally used for real-time applications which can not tolerate uneven delays
between sections of a received message.
UDP is widely used in online gaming, where low latency and high-speed
communication is essential for a good gaming experience. Game servers often send
small, frequent packets of data to clients, and UDP is well suited for this type of
communication as it is fast and lightweight.
Streaming media applications, such as IPTV, online radio, and video conferencing,
use UDP to transmit real-time audio and video data. The loss of some packets can be
tolerated in these applications, as the data is continuously flowing and does not
require retransmission.
VoIP (Voice over Internet Protocol) services, such as Skype and WhatsApp, use
UDP for real-time voice communication. The delay in voice communication can be
noticeable if packets are delayed due to congestion control, so UDP is used to ensure
fast and efficient data transmission.
DNS (Domain Name System) also uses UDP for its query/response messages. DNS
queries are typically small and require a quick response time, making UDP a suitable
protocol for this application.
DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically assign IP
addresses to devices on a network. DHCP messages are typically small, and the
delay caused by packet loss or retransmission is generally not critical for this
application.
Following implementations uses UDP as a transport layer protocol:
NTP (Network Time Protocol)
DNS (Domain Name Service)
BOOTP, DHCP.
NNP (Network News Protocol)
Quote of the day protocol
TFTP, RTSP, RIP.
The application layer can do some of the tasks through UDP-
Trace Route
Record Route
Timestamp
UDP simply attaches its small header to the data and hands it to the IP layer,
and on the receiving side strips the header and delivers the datagram to the
application, so it works fast.
In fact, UDP is a null protocol if you remove the checksum field.
UDP is used when it is required to:
1. Reduce the requirement of computer resources.
2. Use multicast or broadcast to transfer data.
3. Transmit real-time packets, mainly in multimedia applications.
Advantages of UDP:
1. Speed: UDP is faster than TCP because it does not have the overhead of establishing
a connection and ensuring reliable data delivery.
2. Lower latency: Since there is no connection establishment, there is lower latency and
faster response time.
3. Simplicity: UDP has a simpler protocol design than TCP, making it easier to
implement and manage.
4. Broadcast support: UDP supports broadcasting to multiple recipients, making it
useful for applications such as video streaming and online gaming.
5. Smaller packet size: UDP uses smaller packet sizes than TCP, which can reduce
network congestion and improve overall network performance.
Disadvantages of UDP:
1. No reliability: UDP does not guarantee delivery of packets or order of delivery,
which can lead to missing or duplicate data.
2. No congestion control: UDP does not have congestion control, which means that it
can send packets at a rate that can cause network congestion.
3. No flow control: UDP does not have flow control, which means that it can
overwhelm the receiver with packets that it cannot handle.
4. Vulnerable to attacks: UDP is vulnerable to denial-of-service attacks, where an
attacker can flood a network with UDP packets, overwhelming the network and causing
it to crash.
5. Limited use cases: UDP is not suitable for applications that require reliable data
delivery, such as email or file transfers, and is better suited for applications that can
tolerate some data loss, such as video streaming or online gaming.
UDP PSEUDO-HEADER:
The purpose of using a pseudo-header is to verify that the UDP packet has
reached its correct destination.
The correct destination consists of a specific machine and a specific protocol
port number within that machine.
SERVICES
A service is a process that provides a common technique for each layer to
communicate with the others. Standard terminology is required for layered
networks to request and provide services. A service is defined as a set of
primitive operations. Services are provided by a layer to the layer above it.
The diagram below shows the relation between layers at an interface; layers
N+1, N, and N-1 are engaged in the process of communication among each other.
Components Involved and their Functions:
Service Data Unit (SDU) – An SDU is a piece of information or data that is
passed down by the layer just above the current layer for transmission. The
unit of data is passed down to a lower layer from an OSI (Open Systems
Interconnection) layer or sublayer, along with a request to transmit the data.
The SDU identifies the information that is transferred between entities of
peer layers.
Working of TCP
To make sure that each message reaches its target location intact, the TCP/IP model
breaks down the data into small bundles and afterward reassembles the bundles into the
original message on the opposite end. Sending the information in little bundles of
information makes it simpler to maintain efficiency as opposed to sending everything in
one go.
After a particular message is broken down into bundles, the bundles may travel along
multiple routes if one route is jammed, but the destination remains the same: the
message is broken down, sent along possibly different paths, and reassembled at the
other end.
For example, when a user requests a web page, a server somewhere in the world
processes that request and sends back an HTML page to the user. The server uses a
protocol called HTTP, and HTTP then asks the TCP layer to set up the required
connection and send the HTML file.
Now, the TCP breaks the data into small packets and forwards it toward the Internet
Protocol (IP) layer. The packets are then sent to the destination through different routes.
The TCP layer in the user’s system waits for the transmission to get finished and
acknowledges once all packets have been received.
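The behaviour described above can be observed with ordinary sockets. In this minimal loopback sketch (the helper names are mine, purely illustrative), the client hands the message to TCP in small bundles, and TCP delivers it to the receiver as one continuous, correctly ordered byte stream:

```python
import socket
import threading

def run_echo_server(ready: threading.Event, port_holder: list) -> None:
    """Accept one connection and echo everything back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))             # let the OS pick a free port
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    while True:
        chunk = conn.recv(1024)
        if not chunk:                      # client finished sending
            break
        conn.sendall(chunk)
    conn.close()
    srv.close()

def send_in_pieces(port: int, message: bytes) -> bytes:
    """Send the message in small bundles; TCP reassembles the stream."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    for i in range(0, len(message), 4):    # hand TCP 4 bytes at a time
        cli.sendall(message[i:i + 4])
    cli.shutdown(socket.SHUT_WR)           # signal end of sending
    received = b""
    while True:                            # read the echoed stream back
        chunk = cli.recv(1024)
        if not chunk:
            break
        received += chunk
    cli.close()
    return received
```

However the bytes are split across segments in transit, the receiver sees them in the original order, which is exactly the reassembly guarantee described above.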
Features of TCP/IP
2. Stream oriented –
This means that data is sent and received as a stream of bytes (unlike UDP or IP,
which divide the bits into datagrams or packets). However, the network layer, which
provides service to TCP, sends packets of information, not streams of bytes.
Hence, TCP groups a number of bytes together into a segment and adds a header to
each of these segments and then delivers these segments to the network layer. At the
network layer, each of these segments is encapsulated in an IP packet for
transmission. The TCP header has information that is required for control purposes
which will be discussed along with the segment structure.
3. Full-duplex service –
This means that the communication can take place in both directions at the same
time.
4. Connection-oriented service –
Unlike UDP, TCP provides a connection-oriented service. It defines 3 different
phases:
Connection establishment
Data transfer
Connection termination
5. Reliability –
TCP is reliable: it uses checksums for error detection, attempts to recover lost or
corrupted packets by retransmission, and relies on an acknowledgement policy and
timers. It uses byte numbering, sequence numbers, and acknowledgement numbers to
ensure reliability, and it also uses congestion-control mechanisms.
6. Multiplexing –
TCP does multiplexing and de-multiplexing at the sender and receiver ends
respectively as a number of logical connections can be established between port
numbers over a physical connection.
Sequence Number –
A 32-bit field that holds the sequence number, i.e., the byte number of the first byte
sent in that particular segment. It is used at the receiving end to reassemble
segments that arrive out of order.
Acknowledgement Number –
A 32-bit field that holds the acknowledgement number, i.e., the byte number that the
receiver expects to receive next. It acknowledges that all previous bytes were
received successfully.
Control flags –
These are 6 1-bit control bits that control connection establishment, connection
termination, connection abortion, flow control, mode of transfer etc. Their function
is:
URG: Urgent pointer is valid
ACK: Acknowledgement number is valid (used in case of cumulative
acknowledgement)
PSH: Request for push
RST: Reset the connection
SYN: Synchronize sequence numbers
FIN: Terminate the connection
Window size –
This field tells the window size of the sending TCP in bytes.
Checksum –
This field holds the checksum for error control. It is mandatory in TCP as opposed
to UDP.
Urgent pointer –
This field (valid only if the URG control flag is set) points to urgent data that must
reach the receiving process at the earliest. Its value is added to the sequence number
to get the byte number of the last urgent byte.
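To make the field layout concrete, the sketch below unpacks the fixed 20-byte part of a TCP header with Python's `struct` module (the function name and the returned dictionary keys are illustrative, not a standard API):

```python
import struct

# Control-flag bit positions in the low-order bits of the offset/flags word.
FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04,
         "PSH": 0x08, "ACK": 0x10, "URG": 0x20}

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte part of a TCP header (network byte order)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urg_ptr) = struct.unpack(
        "!HHIIHHHH", segment[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,                                # byte number of first byte sent
        "ack": ack,                                # next byte number expected
        "data_offset": (offset_flags >> 12) * 4,   # header length in bytes
        "flags": {name for name, bit in FLAGS.items() if offset_flags & bit},
        "window": window,                          # receive-window size in bytes
        "checksum": checksum,
        "urgent_pointer": urg_ptr,
    }
```

Feeding it a hand-built SYN+ACK segment shows each field from the list above landing in its own slot.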
A TCP CONNECTION
Connection Establishment –
1. Sender starts the process with the following:
Sequence number (Seq=521): contains the random initial sequence number
generated at the sender side.
Syn flag (Syn=1): request the receiver to synchronize its sequence number with the
above-provided sequence number.
Maximum segment size (MSS=1460 B): the sender announces its maximum segment size so
that the receiver sends datagrams that will not require any fragmentation. The MSS
field is present inside the Options field of the TCP header.
Window size (window=14600 B): the sender announces the buffer capacity in which it
stores messages from the receiver.
2. The receiver replies with its own parameters (TCP is a full-duplex protocol, so
both sender and receiver require a window for receiving messages from one another):
Sequence number (Seq=2000): contains the random initial sequence number
generated at the receiver side.
Syn flag (Syn=1): request the sender to synchronize its sequence number with the
above-provided sequence number.
Maximum segment size (MSS=500 B): the receiver announces its maximum segment size so
that the sender sends datagrams that will not require any fragmentation. The MSS
field is present inside the Options field of the TCP header.
Since MSS(receiver) < MSS(sender), both parties agree on the minimum MSS, i.e., 500 B,
to avoid fragmentation of packets at both ends.
Therefore, receiver can send maximum of 14600/500 = 29 packets.
This is the receiver's sending window size.
Window size (window=10000 B): receiver tells about his buffer capacity in which
he has to store messages from the sender.
Therefore, sender can send a maximum of 10000/500 = 20 packets.
This is the sender's sending window size.
Acknowledgement Number (Ack no.=522): since sequence number 521 has been received
and the SYN flag consumes one sequence number, the receiver requests the next
expected packet with Ack no.=522.
ACK flag (ACK=1): tells that the acknowledgement number field contains the next
sequence expected by the receiver.
3. Sender makes the final reply for connection establishment in the following way:
Sequence number (Seq=522): since sequence number = 521 in 1st step and SYN
flag consumes one sequence number hence, the next sequence number will be 522.
Acknowledgement Number (Ack no.=2001): since the sender is acknowledging
SYN=1 packet from the receiver with sequence number 2000 so, the next sequence
number expected is 2001.
ACK flag (ACK=1): tells that the acknowledgement number field contains the next
sequence expected by the sender.
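The arithmetic in the handshake above can be summarized in a few lines. This hypothetical helper applies the two rules used in the example: both sides adopt the smaller MSS, and each side's sending window (in packets) is the other side's buffer capacity divided by the agreed MSS:

```python
def negotiate(sender_mss, sender_window, receiver_mss, receiver_window):
    """Both sides adopt the smaller MSS so no segment needs fragmentation.

    Each side's sending-window size (in packets) is the *other* side's
    buffer capacity divided by the agreed MSS.
    """
    mss = min(sender_mss, receiver_mss)
    return {
        "mss": mss,
        "sender_window_packets": receiver_window // mss,
        "receiver_window_packets": sender_window // mss,
    }

# The values from the handshake example:
# negotiate(1460, 14600, 500, 10000)
#   -> MSS 500 B, sender window 20 packets, receiver window 29 packets
```

This reproduces the 14600/500 = 29 and 10000/500 = 20 figures from the handshake.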
FLOW CONTROL
Flow control is a set of procedures that tells the sender how much data or how many
frames it can transmit before the data overwhelms the receiver. The receiving device
has only limited speed and limited memory to store data, so it must be able to inform
the sender to pause the transmission temporarily before that limit is reached. It also
needs a buffer, a large block of memory, to store data or frames until they are
processed.
Flow control can also be understood as a speed-matching mechanism between two stations.
1. Stop-and-Wait Flow Control : This is the simplest form of flow control. The message
is broken down into multiple frames, and the receiver indicates its readiness to
receive each frame of data. The sender transmits the next frame only after an
acknowledgement is received, and this process continues until the sender transmits an
EOT (End of Transmission) frame. Only one frame can be in transit at a time, which
leads to inefficiency when the propagation delay is much longer than the transmission
delay: the sender sends a single frame, and the receiver accepts one frame at a time
and returns an acknowledgement (carrying the next expected frame number) for each one.
Advantages –
This method is simple, and each frame is checked and acknowledged.
It is also accurate.
Disadvantages –
This method is fairly slow.
In this, only one packet or frame can be sent at a time.
It is very inefficient and makes the transmission process very slow.
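The inefficiency when propagation delay dominates can be quantified with the standard link-utilisation formula for stop-and-wait, which sends one frame per round trip. The function below is an illustrative sketch:

```python
def stop_and_wait_efficiency(frame_bits, bandwidth_bps, prop_delay_s):
    """Link utilisation for stop-and-wait: one frame per round trip.

    Transmission time Tt = frame size / bandwidth; after sending, the
    sender idles for a full round trip (2 * propagation delay) while
    waiting for the acknowledgement, so

        efficiency = Tt / (Tt + 2 * Tp)
    """
    tt = frame_bits / bandwidth_bps
    return tt / (tt + 2 * prop_delay_s)

# A 1000-bit frame on a 1 Mbps link with 10 ms propagation delay:
# Tt = 1 ms, so efficiency = 1 / (1 + 20), i.e. under 5% utilisation.
```

This is why the text notes stop-and-wait becomes very slow on long, fast links: the sender spends almost the whole round trip idle.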
2. Sliding Window Flow Control : This method is used where reliable, in-order delivery
of packets or frames is needed, as in the data link layer. It is a point-to-point
protocol that assumes no other entity tries to communicate until the current data
transfer completes. The sender transmits several frames before receiving any
acknowledgement; both sender and receiver agree on the total number of data frames
after which an acknowledgement must be sent. Allowing the sender to keep more than one
unacknowledged packet "in flight" at a time increases network throughput: the sender
transmits multiple frames, and the receiver accepts them one by one, acknowledging
each completed frame with the next expected frame number.
Advantages –
It performs much better than stop-and-wait flow control.
This method increases efficiency.
Multiple frames can be sent one after another.
Disadvantages –
The main issue is complexity at the sender and receiver due to the transferring of
multiple frames.
The receiver might receive data frames or packets out of sequence.
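A lossless exchange under this scheme can be traced with a short simulation (a hypothetical function working with frame numbers only). Notice in the trace that several frames are sent before the first acknowledgement arrives:

```python
def sliding_window_send(num_frames, window_size):
    """Trace of a lossless sliding-window exchange.

    The sender keeps up to `window_size` unacknowledged frames in
    flight; each cumulative ACK (carrying the next expected frame
    number) slides the window forward by one.
    """
    events = []
    base = 0                 # oldest unacknowledged frame
    next_frame = 0           # next new frame to send
    while base < num_frames:
        # fill the window with new frames
        while next_frame < base + window_size and next_frame < num_frames:
            events.append(("send", next_frame))
            next_frame += 1
        # receiver acknowledges the oldest outstanding frame
        events.append(("ack", base + 1))   # ACK = next expected frame
        base += 1
    return events
```

With 4 frames and a window of 2, frames 0 and 1 go out back-to-back before any ACK, which is exactly the throughput gain over stop-and-wait.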
ERROR CONTROL
Ways of doing Error Control : There are basically two ways of doing Error control as
given below :
1. Error Detection : Error detection, as the name suggests, simply means detection or
identification of errors. These errors may occur due to noise or any other
impairments during transmission from transmitter to the receiver, in communication
system. It is a class of techniques for detecting garbled i.e. unclear and distorted data
or messages.
2. Error Correction : Error correction, as the name suggests, simply means correction
or solving or fixing of errors. It simply means reconstruction and rehabilitation of
original data that is error-free. But error correction method is very costly and very
hard.
Various Techniques for Error Control : There are various techniques of error control
as given below :
1. Stop-and-Wait ARQ : Stop-and-Wait ARQ is also known as the alternating-bit
protocol. It is one of the simplest flow- and error-control mechanisms, and it is
widely used in telecommunications to transmit data between two connected devices.
The receiver indicates its readiness to receive each frame. The sender transmits a
data packet to the receiver, then stops and waits for an ACK (acknowledgement). If
the ACK does not arrive within a given time period (the time-out), the sender resends
the frame and waits for the ACK again. If the ACK is received, the sender transmits
the next data packet and again waits for an ACK. This stop-and-wait process continues
until the sender has no more frames to send.
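The retransmit-on-timeout behaviour can be sketched as a small simulation (illustrative code; the lossy channel is modelled simply as a set of transmission indices that get dropped):

```python
def stop_and_wait_arq(frames, lost_transmissions):
    """Simulate stop-and-wait ARQ over a channel that drops some sends.

    `lost_transmissions` is a set of global transmission indices the
    channel drops; a drop triggers a timeout and retransmission of the
    same frame. Alternating 0/1 sequence bits (hence "alternating-bit
    protocol") let the receiver tell new frames from retransmissions.
    """
    log = []
    attempt = 0
    for i, frame in enumerate(frames):
        seq = i % 2                        # alternating-bit sequence number
        while True:
            if attempt in lost_transmissions:
                log.append(("timeout", seq, frame))
                attempt += 1
                continue                   # resend the same frame
            log.append(("delivered", seq, frame))
            attempt += 1
            break                          # ACK received, move to next frame
    return log
```

Dropping the very first transmission shows the timeout-then-retransmit cycle before the protocol moves on to the next frame.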
2. Sliding Window ARQ : This technique is generally used for continuous
transmission error control. It is further categorized into two categories as given below :
Go-Back-N ARQ : Go-Back-N ARQ is a form of ARQ protocol in which the sender keeps
transmitting frames, up to the total number specified by the window size, without
waiting for an ACK (acknowledgement) packet from the receiver for each one. It uses
the sliding-window flow-control protocol. If no errors occur, operation is identical
to plain sliding window.
Selective Repeat ARQ : Selective Repeat ARQ is also a form of ARQ protocol, in which
only suspected, damaged, or lost data frames are retransmitted. It is similar to
Go-Back-N ARQ but much more efficient, because it reduces the number of
retransmissions: the sender retransmits only the frames for which a NAK is received.
It is used less often, however, because of the greater complexity at sender and
receiver, and because each frame must be acknowledged individually.
The main difference between Go-Back-N ARQ and Selective Repeat ARQ is that in
Go-Back-N ARQ, the sender has to retransmit the entire window of frames if any frame
is lost, whereas in Selective Repeat ARQ only the lost data frame is retransmitted.
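The difference in retransmission cost can be made concrete with a toy calculation (an illustrative sketch, assuming a single lost frame and that the sender has already filled its window with successors):

```python
def retransmissions(total_frames, window_size, lost_frame):
    """Retransmission cost when one frame in the window is lost.

    Go-Back-N resends the lost frame plus every frame already sent
    after it (up to a full window); Selective Repeat resends only the
    lost frame itself.
    """
    in_flight_after_loss = min(window_size, total_frames - lost_frame)
    return {
        "go_back_n": in_flight_after_loss,   # lost frame + its successors
        "selective_repeat": 1,               # only the lost frame
    }
```

For a 10-frame transfer with window size 4, losing frame 3 costs Go-Back-N four retransmissions but Selective Repeat only one, which is the efficiency gap described above.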
Congestion Control
What is congestion?
A state occurring in the network layer when the message traffic is so heavy that it
slows down network response time.
Effects of Congestion
As delay increases, performance decreases.
If delay increases, retransmission occurs, making situation worse.
Leaky bucket Algorithm
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters
the bucket, the outflow is at a constant rate. When the bucket is full, additional
water entering spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are
involved in leaky bucket algorithm:
1. When host wants to send packet, packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits packets
at a constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
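The four steps above can be sketched as a tick-by-tick simulation (a hypothetical helper; it counts packets per tick rather than bytes, for simplicity):

```python
def leaky_bucket(arrivals, bucket_size, leak_rate):
    """Simulate a leaky bucket, one tick at a time.

    `arrivals[t]` packets arrive at tick t; at most `leak_rate` packets
    leave per tick; packets that would exceed `bucket_size` in the
    queue spill over and are lost, like water over the rim.
    """
    queue = 0
    sent, dropped = [], 0
    for arriving in arrivals:
        space = bucket_size - queue
        accepted = min(arriving, space)
        dropped += arriving - accepted     # bucket overflow is lost
        queue += accepted
        out = min(queue, leak_rate)        # constant outflow rate
        queue -= out
        sent.append(out)
    return sent, dropped
```

A burst of 5 packets arriving in one tick leaves the bucket as a smooth 2-2-1 trickle, illustrating how bursty traffic is converted into uniform traffic.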
Token bucket Algorithm
The leaky bucket algorithm has a rigid output design at an average rate independent
of the bursty traffic.
In some applications, when large bursts arrive, the output is allowed to speed up.
This calls for a more flexible algorithm, preferably one that never loses information.
Therefore, a token bucket algorithm finds its uses in network traffic shaping or rate-
limiting.
It is a control algorithm that indicates when traffic may be sent, based on the
presence of tokens in the bucket.
The bucket holds tokens, each of which corresponds to a packet of predetermined size,
and tokens are added to the bucket at a fixed rate up to its capacity. A token is
removed from the bucket for each packet transmitted.
If there are no tokens, no flow can send its packets. A flow can therefore save up
tokens and transmit a burst at up to its peak rate, for as long as tokens remain in
the bucket.
In figure (A) we see a bucket holding three tokens, with five packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token. In
figure (B) We see that three of the five packets have gotten through, but the other two
are stuck waiting for more tokens to be generated.
Ways in which token bucket is superior to leaky bucket: The leaky bucket algorithm
controls the rate at which packets are introduced into the network, but it is very
conservative in nature. The token bucket algorithm introduces some flexibility:
tokens are generated at each tick (up to a certain limit), and an incoming packet
must capture a token before it can be transmitted. Hence bursty packets can be
transmitted immediately when tokens are available, which introduces some flexibility
into the system.
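The contrast with the leaky bucket can be seen in a matching tick-by-tick sketch (illustrative code): idle ticks let tokens accumulate, so a later burst is transmitted faster than the average rate.

```python
def token_bucket(arrivals, bucket_size, token_rate):
    """Simulate a token bucket, one tick at a time.

    `token_rate` tokens are added per tick, capped at `bucket_size`;
    each packet sent consumes one token, so tokens saved up during idle
    ticks let a burst go out faster than the average rate.
    """
    tokens = 0
    waiting = 0
    sent = []
    for arriving in arrivals:
        tokens = min(bucket_size, tokens + token_rate)  # add this tick's tokens
        waiting += arriving
        out = min(waiting, tokens)         # one token per packet sent
        tokens -= out
        waiting -= out
        sent.append(out)
    return sent
```

After three idle ticks at one token per tick, a burst of 5 packets sends 4 immediately instead of being smoothed to 1 per tick, which is exactly the flexibility the leaky bucket lacks.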