Unit 3 CN
Unit III:
Network Layer: Network Layer Protocols: IPv4, IPv6, ARP, ICMP. Routing
Algorithms –Shortest path Algorithm. Congestion Control Algorithms: Leaky
bucket algorithm. Congestion prevention Policies.
Network Layer
o Routing: When a packet reaches the router's input link, the router moves the
packet to the appropriate output link. For example, a packet from source S1 arriving
at router R1 must be forwarded to the next router on the path to destination S2.
o Logical Addressing: The data link layer implements the physical addressing and
network layer implements the logical addressing. Logical addressing is also used to
distinguish between source and destination system. The network layer adds a header
to the packet which includes the logical addresses of both the sender and the receiver.
o Internetworking: This is the main role of the network layer that it provides the
logical connection between different types of networks.
o Fragmentation: Fragmentation is the process of breaking packets into the
smallest individual data units that can travel through the different networks along the path.
o Guaranteed delivery: This layer provides the service which guarantees that the
packet will arrive at its destination.
o Guaranteed delivery with bounded delay: This service guarantees that the packet
will be delivered within a specified host-to-host delay bound.
o In-Order packets: This service ensures that the packet arrives at the destination in
the order in which they are sent.
o Guaranteed max jitter: This service ensures that the amount of time taken between
two successive transmissions at the sender is equal to the time between their receipt at
the destination.
o Security services: The network layer provides security by using a session key
between the source and destination host. The network layer in the source host
encrypts the payloads of datagrams being sent to the destination host. The network
layer in the destination host would then decrypt the payload. In such a way, the
network layer maintains the data integrity and source authentication services.
ARP
o ARP stands for Address Resolution Protocol.
If a host wants to know the physical address of another host on its network, it sends an
ARP query packet that includes the target's IP address and broadcasts it over the network.
Every host on the network receives and processes the ARP packet, but only the intended
recipient recognizes its own IP address and sends back its physical address. The host
holding the datagram adds this physical address to its cache memory and to the datagram
header, and then sends the datagram.
RARP
o RARP stands for Reverse Address Resolution Protocol.
o If a host wants to know its IP address, it broadcasts a RARP query packet that
contains its physical address to the entire network. A RARP server on the network
recognizes the RARP packet and responds with the host's IP address.
o The protocol which is used to obtain the IP address from a server is known
as Reverse Address Resolution Protocol.
o The message format of the RARP protocol is similar to the ARP protocol.
o Like ARP frame, RARP frame is sent from one machine to another encapsulated in
the data portion of a frame.
ICMP
o ICMP stands for Internet Control Message Protocol.
o The ICMP is a network layer protocol used by hosts and routers to send the
notifications of IP datagram problems back to the sender.
o ICMP uses echo test/reply to check whether the destination is reachable and
responding.
o ICMP handles both control and error messages, but its main function is to report
errors, not to correct them.
o An IP datagram contains the addresses of both source and destination, but it does not
record the addresses of the routers through which it has passed. For this reason,
ICMP can only send messages to the source, not to the intermediate routers.
o The ICMP protocol communicates error messages to the sender; these errors are then
returned to the user processes.
o ICMP messages are transmitted within IP datagram.
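Since ICMP messages travel inside IP datagrams, they carry the standard Internet checksum over their own header and data. A minimal sketch of building an ICMP Echo Request header with that checksum (the identifier, sequence number, and payload here are arbitrary illustration values):

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                  # pad odd-length data with a zero byte
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                   # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# ICMP Echo Request: type=8, code=0, checksum=0 (placeholder), identifier, sequence
header = struct.pack("!BBHHH", 8, 0, 0, 0x1234, 1)
payload = b"ping"
csum = internet_checksum(header + payload)
packet = struct.pack("!BBHHH", 8, 0, csum, 0x1234, 1) + payload
# A receiver verifies by checksumming the whole packet: the result is 0.
assert internet_checksum(packet) == 0
```

The same checksum routine is used by IP, ICMP, IGMP, TCP, and UDP, each over its own span of bytes.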
Error Reporting
o Destination unreachable
o Source Quench
o Time Exceeded
o Parameter problems
o Redirection
A Time Exceeded message can be generated in two ways:
Sometimes a packet is discarded due to a bad routing implementation, which causes
looping and network congestion. Because of the looping, the TTL value keeps
decrementing, and when it reaches zero, the router discards the datagram. When the
datagram is discarded by the router, a Time Exceeded message is sent by the router to
the source host.
When the destination host does not receive all the fragments within a certain time limit,
it discards the fragments it has received and sends a Time Exceeded message to the
source host.
o Parameter problems: When a router or host discovers any missing value in the IP
datagram, the router discards the datagram, and the "parameter problem" message is
sent back to the source host.
o Redirection: A Redirection message is generated when a host holds only a small
routing table. Because the host has a limited number of entries, it may send a
datagram to the wrong router. The router that receives the datagram forwards it to
the correct router and also sends a "Redirection message" to the host so that it can
update its routing table.
IGMP
o IGMP stands for Internet Group Management Protocol.
o The IP protocol supports two types of communication:
o Unicasting: It is a communication between one sender and one receiver.
Therefore, we can say that it is one-to-one communication.
o Multicasting: Sometimes the sender wants to send the same message to a
large number of receivers simultaneously. This process is known as
multicasting which has one-to-many communication.
o The IGMP protocol is used by hosts and routers to support multicasting, and to
identify the hosts in a LAN that are members of a multicast group.
The IGMP message contains the following fields:
Type: It determines the type of IGMP message. There are three types of IGMP message:
Membership Query, Membership Report and Leave Report.
Maximum Response Time: This field is used only by the Membership Query message. It
specifies the maximum time within which a host may send its Membership Report message
in response to the Membership Query message.
Checksum: This field carries the checksum computed over the entire IGMP message (not
over the whole IP datagram in which the IGMP message is encapsulated).
Group Address: The behavior of this field depends on the type of the message sent.
o For Membership Query, the group address is set to zero for General Query and set
to multicast group address for a specific query.
o For Membership Report, the group address is set to the multicast group address.
o For Leave Group, it is set to the multicast group address.
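The field layout described above can be illustrated by packing an IGMPv2 message in code. This is a sketch; the Type value 0x16 used below is the standard IGMPv2 Membership Report code, and the group address is an example:

```python
import socket, struct

def internet_checksum(data: bytes) -> int:
    """Ones'-complement checksum, as used by IGMP (and ICMP)."""
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                   # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def igmp_message(msg_type, max_resp, group):
    """Build an 8-byte IGMPv2 message: Type, Max Resp Time, Checksum, Group."""
    group_be = socket.inet_aton(group)                     # 4-byte group address
    raw = struct.pack("!BBH", msg_type, max_resp, 0) + group_be
    csum = internet_checksum(raw)                          # checksum over the message
    return struct.pack("!BBH", msg_type, max_resp, csum) + group_be

# 0x16 = IGMPv2 Membership Report, for group 224.1.1.1
report = igmp_message(0x16, 0, "224.1.1.1")
print(report.hex())
```

Checksumming the finished message yields 0, which is how a receiver validates it.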
IGMP Messages
o Membership report messages can also be generated by the host when a host
wants to join the multicast group without waiting for a membership query
message from the router.
o Membership report messages are received by a router as well as all the hosts
on an attached interface.
o Each membership report message includes the multicast address of a single
group that the host wants to join.
o IGMP protocol does not care which host has joined the group or how many
hosts are present in a single group. It only cares whether one or more attached
hosts belong to a single multicast group.
o The Membership Query message sent by a router also includes a "Maximum
Response Time". After receiving a Membership Query message and before
sending its Membership Report message, a host waits for a random amount
of time between 0 and the maximum response time. If the host observes that some
other attached host has already sent a "Membership Report message", then it
discards its own "Membership Report message", as it knows that the attached
router already knows that one or more hosts have joined that multicast group.
This process is known as feedback suppression. It provides a performance
optimization by avoiding the unnecessary transmission of "Membership
Report messages".
o Leave Report
When a host stops sending "Membership Report messages", it means that the
host has left the group. When the router receives no report for a group even after
the next query, it concludes that there are no members left in that group.
Internet Protocol, being a layer-3 protocol (OSI), takes data segments from layer-4
(Transport) and divides them into packets. An IP packet encapsulates the data unit
received from the layer above and adds its own header information.
The encapsulated data is referred to as IP Payload. IP header contains all the necessary
information to deliver the packet at the other end.
The IP header includes many relevant fields, including the Version Number, which, in this
context, is 4.
● Fragment Offset − This offset tells the exact position of the fragment in the original
IP Packet.
● Time to Live − To avoid looping in the network, every packet is sent with some
TTL value set, which tells the network how many routers (hops) this packet can
cross. At each hop, its value is decremented by one and when the value reaches zero,
the packet is discarded.
● Protocol − Tells the Network layer at the destination host, to which Protocol this
packet belongs to, i.e. the next level Protocol. For example protocol number of ICMP
is 1, TCP is 6 and UDP is 17.
● Header Checksum − This field is used to keep checksum value of entire header
which is then used to check if the packet is received error-free.
● Source Address − 32-bit address of the Sender (or source) of the packet.
● Destination Address − 32-bit address of the Receiver (or destination) of the packet.
● Options − This is optional field, which is used if the value of IHL is greater than 5.
These options may contain values for options such as Security, Record Route, Time
Stamp, etc.
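The fields described above can be pulled out of a raw 20-byte header with a short parser. This is a sketch using a hand-built sample header (the addresses and field values below are made up for illustration, not a captured packet):

```python
import struct, socket

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte part of an IPv4 header (RFC 791 layout)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, csum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version":  ver_ihl >> 4,
        "ihl":      ver_ihl & 0x0F,          # header length in 32-bit words
        "total_length": total_len,
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl":      ttl,
        "protocol": proto,                    # 1=ICMP, 6=TCP, 17=UDP
        "src":      socket.inet_ntoa(src),
        "dst":      socket.inet_ntoa(dst),
    }

# Hand-built sample: version 4, IHL 5, total length 40, TTL 64, protocol 6 (TCP)
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.168.1.10"),
                     socket.inet_aton("10.0.0.5"))
hdr = parse_ipv4_header(sample)
print(hdr["version"], hdr["protocol"], hdr["src"], "->", hdr["dst"])
```

An IHL greater than 5 would mean the Options field is present after these 20 bytes.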
IPv4 - Addressing
Unicast
In this mode, data is sent only to one destined host. The Destination Address field contains
the 32-bit IP address of the destination host. Here the client sends data to the targeted server.
Broadcast
In this mode, the packet is addressed to all the hosts in a network segment. The Destination
Address field contains the special broadcast address, i.e. 255.255.255.255. When a host sees
this packet on the network, it is bound to process it. Here the client sends a packet which is
entertained by all the servers.
Multicast
This mode is a mix of the previous two modes: the packet sent is destined neither to a
single host nor to all the hosts on the segment. In this packet, the Destination Address
contains a special address which starts with 224.x.x.x and can be entertained by more than
one host. Here a server sends packets which are entertained by more than one server.
Every network has one IP address reserved for the Network Number, which represents the
network, and one IP address reserved for the Broadcast Address, which represents all the
hosts in that network.
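The three delivery modes can be distinguished from the destination address alone; a small sketch using the standard library (the sample addresses are illustrative):

```python
import ipaddress

def delivery_mode(addr: str) -> str:
    """Classify an IPv4 destination as unicast, multicast, or broadcast."""
    ip = ipaddress.IPv4Address(addr)
    if ip == ipaddress.IPv4Address("255.255.255.255"):
        return "broadcast"                    # the limited broadcast address
    if ip.is_multicast:                       # 224.0.0.0/4, i.e. 224.x.x.x - 239.x.x.x
        return "multicast"
    return "unicast"

for a in ["192.168.1.7", "224.0.0.9", "255.255.255.255"]:
    print(a, "->", delivery_mode(a))
```

Note this only catches the limited broadcast address; a directed broadcast (the all-ones host part of a specific subnet) can only be recognized with knowledge of the subnet mask.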
IPV6
Internet Protocol version 6 (IPv6) is a new addressing protocol designed to incorporate the
whole range of requirements of the future Internet, known to us as Internet version 2. Like
its predecessor IPv4, this protocol works at the Network Layer (Layer-3). Along with
offering an enormous amount of logical address space, it has ample features that address
today's shortcomings of IPv4.
So far, IPv4 has proven itself as a robust routable addressing protocol and has served human
beings for decades with its best-effort delivery mechanism. It was designed in the early 80s
and did not get any major change afterward. At the time of its birth, the Internet was limited
to a few universities for their research and to the Department of Defense. An IPv4 address is
32 bits long, which offers around 4,294,967,296 (2^32) addresses. This address space was
considered more than enough at that time. Given below are the major points that played a
key role in the birth of IPv6:
● The Internet has grown exponentially, and the address space allowed by IPv4 is
saturating. There is a requirement for a protocol that can satisfy the needs of future
Internet addresses, which are expected to grow in an unexpected manner.
● Using features such as NAT has made the Internet discontiguous: one part, the
intranet, primarily uses private IP addresses and has to go through a number of
mechanisms to reach the other part, the public Internet.
● IPv4 on its own does not provide any security feature, which is a vulnerability, as
data on the Internet, a public domain, is never safe. Data has to be encrypted with
some other security application before being sent on the Internet.
● Data prioritization in IPv4 is not up to date. Though IPv4 has a few bits reserved
for Type of Service or Quality of Service, they do not provide much functionality.
● IPv4-enabled clients can be configured manually, or they need some address
configuration mechanism. There exists no technique to configure a device with a
globally unique IP address automatically.
Features
The successor of IPv4 is not designed to be backward compatible. Trying to keep the basic
functionalities of IP addressing, IPv6 is redesigned entirely. It offers the following features:
● Larger Address Space:
In contrast to IPv4, IPv6 uses 4 times more bits (128 instead of 32) to address a
device on the Internet. These extra bits can provide approximately 3.4×10^38
different combinations of addresses, enough to accommodate the aggressive
requirement of address allotment for almost everything in this world. According to
an estimate, 1564 addresses can be allocated to every square metre of this earth.
● Simplified Header:
IPv6's header has been simplified by moving all unnecessary information and
options (which are present in the IPv4 header) to the end of the IPv6 header. The
IPv6 header is only twice as big as the IPv4 header, despite the IPv6 address being
four times longer.
● End-to-end Connectivity:
Every system now has a unique IP address and can traverse the Internet
without using NAT or other translating components. After IPv6 is fully
implemented, every host can directly reach every other host on the Internet,
subject to some limitations such as firewalls and organizational policies.
● Auto-configuration:
IPv6 supports both stateful and stateless auto-configuration of its host devices.
This way, the absence of a DHCP server does not halt inter-segment
communication.
● Faster Forwarding/Routing:
The simplified header puts all unnecessary information at the end of the header.
The information in the first part of the header is adequate for a router to take a
routing decision, making the decision as quick as looking at the mandatory header.
● IPSec:
Initially it was decided that IPv6 must have IPSec security, making it more secure
than IPv4. This feature has since been made optional.
● No Broadcast:
IPv6 does not support broadcast any more; it uses multicast to communicate with
multiple hosts.
● Anycast Support:
This is another characteristic of IPv6. IPv6 has introduced the Anycast mode of
packet routing. In this mode, multiple interfaces over the Internet are assigned the
same Anycast IP address. Routers, while routing, send the packet to the nearest
destination.
● Mobility:
IPv6 was designed keeping the mobility feature in mind. This feature enables hosts
(such as mobile phones) to roam around different geographical areas and remain
connected with the same IP address. The IPv6 mobility feature takes advantage of
auto IP configuration and Extension Headers.
● Enhanced Priority Support:
IPv4 used 6 bits of DSCP (Differentiated Services Code Point) and 2 bits of ECN
(Explicit Congestion Notification) to provide Quality of Service, but it could only be
used if the end-to-end devices supported it; that is, the source and destination
devices and the underlying network all had to support it.
In IPv6, the Traffic Class and Flow Label fields are used to tell the underlying
routers how to efficiently process and route the packet.
● Smooth Transition:
The large IP address scheme in IPv6 enables devices to be allocated globally unique
IP addresses. This ensures that mechanisms to save IP addresses, such as NAT, are
not required, so devices can send and receive data directly; for example, VoIP and
streaming media can be used much more efficiently.
Another fact is that the header is less loaded, so routers can make forwarding
decisions and forward packets as quickly as they arrive.
● Extensibility:
One of the major advantages of the IPv6 header is that it is extensible: more
information can be added in the options part. IPv4 provides only 40 bytes for
options, whereas options in IPv6 can be as large as the IPv6 packet itself.
Addressing Modes
In computer networking, addressing mode refers to the mechanism by which we address a
host on the network. IPv6 offers several modes by which a single host can be addressed,
more than one host can be addressed at once, or the host at the closest distance can be
addressed.
Unicast
In unicast addressing mode, a packet is destined to a single, uniquely identified interface
(host). The Destination Address field contains the IPv6 address of that one host.
Multicast
The IPv6 multicast mode is the same as that of IPv4: a packet destined to multiple hosts is
sent to a special multicast address. All hosts interested in that multicast information need to
join the multicast group first. All interfaces which have joined the group receive the
multicast packet and process it, while other hosts, not interested in the multicast
information, ignore it.
Anycast
IPv6 has introduced a new type of addressing, called Anycast addressing. In this
addressing mode, multiple interfaces (hosts) are assigned the same Anycast IP address.
When a host wishes to communicate with a host equipped with an Anycast IP address, it
sends a Unicast message. With the help of a complex routing mechanism, that Unicast
message is delivered to the host closest to the sender in terms of routing cost.
Headers
The wonder of IPv6 lies in its header. An IPv6 address is 4 times larger than an IPv4
address, but the IPv6 header is only 2 times larger than that of IPv4. IPv6 headers have one
Fixed Header and zero or more Optional (Extension) Headers. All the information essential
for a router is kept in the Fixed Header. The Extension Headers contain optional
information that helps routers understand how to handle a packet/flow.
Fixed Header
IPv6 fixed header is 40 bytes long and contains the following information.
1 Version (4-bits): This represents the version of Internet Protocol, i.e. 0110.
2 Traffic Class (8-bits): These 8 bits are divided into two parts. Most significant 6
bits are used for Type of Service, which tells the Router what services should be
provided to this packet. Least significant 2 bits are used for Explicit Congestion
Notification (ECN).
3 Flow Label (20-bits): This label is used to maintain the sequential flow of the
packets belonging to a communication. The source labels the sequence which
helps the router to identify that this packet belongs to a specific flow of
information. This field helps to avoid re-ordering of data packets. It is designed for
streaming/real-time media.
4 Payload Length (16-bits): This field is used to tell the routers how much
information this packet contains in its payload. The payload is composed of
Extension Headers and Upper Layer data. With 16 bits, up to 65535 bytes can be
indicated; but if the Extension Headers contain the Hop-by-Hop Extension Header
(carrying the Jumbo Payload option), the payload may exceed 65535 bytes and this
field is set to 0.
5 Next Header (8-bits): This field is used to indicate either the type of the first
Extension Header, or, if no Extension Header is present, the Upper Layer PDU.
The values for the types of Upper Layer PDUs are the same as IPv4's.
6 Hop Limit (8-bits): This field is used to stop the packet from looping in the
network infinitely; it is the same as TTL in IPv4. The value of the Hop Limit field is
decremented by 1 at each router (hop) it passes, and when it reaches zero, the
packet is discarded.
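The fixed-header layout above can be decoded with a short parser. This is a sketch over a hand-built 40-byte sample (the addresses use the 2001:db8:: documentation prefix and the field values are illustrative):

```python
import struct, socket

def parse_ipv6_fixed_header(raw: bytes) -> dict:
    """Decode the 40-byte IPv6 fixed header (RFC 8200 layout)."""
    first_word, payload_len, next_header, hop_limit = struct.unpack(
        "!IHBB", raw[:8])
    return {
        "version":        first_word >> 28,          # top 4 bits: always 6
        "traffic_class":  (first_word >> 20) & 0xFF, # next 8 bits
        "flow_label":     first_word & 0xFFFFF,      # bottom 20 bits
        "payload_length": payload_len,
        "next_header":    next_header,               # e.g. 6=TCP, 17=UDP, 59=none
        "hop_limit":      hop_limit,
        "src": socket.inet_ntop(socket.AF_INET6, raw[8:24]),
        "dst": socket.inet_ntop(socket.AF_INET6, raw[24:40]),
    }

sample = (struct.pack("!IHBB", (6 << 28) | 0xABCDE, 20, 6, 64)
          + socket.inet_pton(socket.AF_INET6, "2001:db8::1")
          + socket.inet_pton(socket.AF_INET6, "2001:db8::2"))
h = parse_ipv6_fixed_header(sample)
print(h["version"], hex(h["flow_label"]), h["src"], "->", h["dst"])
```

Note the fixed header has no checksum field at all: IPv6 leaves error detection to the link layer and the upper-layer protocols.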
Extension Headers
In IPv6, the Fixed Header contains only the information which is necessary, avoiding
information which is either not required or rarely used. All such information is put
between the Fixed Header and the Upper Layer header in the form of Extension Headers.
Each Extension Header is identified by a distinct value.
When Extension Headers are used, the IPv6 Fixed Header's Next Header field points to the
first Extension Header. If there is one more Extension Header, then the first Extension
Header's 'Next-Header' field points to the second one, and so on. The last Extension
Header's 'Next-Header' field points to the Upper Layer Header. Thus, all headers point to
the next one in a linked-list manner.
If the Next Header field contains value 59, it indicates that there’s no header after this
header, not even Upper Layer Header.
Extension Headers are arranged one after another in this linked-list manner.
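The Next-Header chain can be sketched as a linked-list walk. This is a simplified model (a real extension header carries its successor's type in its first byte; the header names and protocol numbers below are the standard assigned values, but the chain representation is invented for illustration):

```python
# Simplified sketch of following the IPv6 Next Header chain.
NO_NEXT_HEADER = 59          # value 59 means: nothing follows this header
UPPER_LAYER = {6: "TCP", 17: "UDP"}
EXT_HEADERS = {0: "Hop-by-Hop Options", 43: "Routing", 44: "Fragment"}

def walk_chain(first_next_header, ext_headers):
    """Follow Next Header values through {header_type: its_next_header}."""
    chain, current = [], first_next_header
    remaining = dict(ext_headers)
    while current in EXT_HEADERS:
        chain.append(EXT_HEADERS[current])
        current = remaining.pop(current)   # each header names its successor
    if current in UPPER_LAYER:
        chain.append(UPPER_LAYER[current])
    elif current == NO_NEXT_HEADER:
        chain.append("(no next header)")
    return chain

# Fixed header says 0 (Hop-by-Hop); Hop-by-Hop says 43 (Routing); Routing says 6 (TCP)
print(walk_chain(0, {0: 43, 43: 6}))
```

A Next Header value of 59 anywhere in the chain ends the walk with nothing after it, exactly as described above.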
Routing algorithm
o In order to transfer the packets from source to the destination, the network layer must
determine the best route through which packets can be transmitted.
o Whether the network layer provides datagram service or virtual circuit service, the
main job of the network layer is to provide the best route. The routing protocol
performs this job.
o The routing protocol is a routing algorithm that provides the best path from the source
to the destination. The best path is the path that has the "least-cost path" from source
to the destination.
o Routing is the process of forwarding the packets from source to the destination but the
best route to send the packets is determined by the routing algorithm.
Distance Vector Routing
The three keys to understand the Distance Vector Routing algorithm:
o Knowledge about the whole network: Each router shares its knowledge about the
entire network. The router sends all of its collected knowledge about the network to
its neighbors.
o Routing only to neighbors: The router sends its knowledge about the network only
to those routers which have direct links to it. It sends whatever it has about the
network through its ports. The receiving router uses this information to update its
own routing table.
o Information sharing at regular intervals: Every 30 seconds, the router sends the
information to its neighboring routers.
Link State Routing
Link state routing is a technique in which each router shares the knowledge of its
neighborhood with every other router in the internetwork.
The three keys to understand the Link State Routing algorithm:
o Knowledge about the neighborhood: Instead of sending its routing table, a router
sends the information about its neighborhood only. A router broadcast its identities
and cost of the directly attached links to other routers.
o Flooding: Each router sends its information to every other router on the internetwork,
not just to its neighbors. This process is known as flooding. Every router that receives
the packet sends copies to all of its neighbors except the one from which the packet
arrived. Finally, each and every router receives a copy of the same information.
o Information sharing: A router sends the information to every other router only when
the change occurs in the information.
Reliable Flooding
Route Calculation
Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all nodes.
o The Link State routing algorithm uses Dijkstra's algorithm, which finds the shortest
path from one node to every other node in the network.
o Dijkstra's algorithm is iterative, and it has the property that after the k-th iteration of
the algorithm, the least-cost paths are known for k destination nodes.
Disadvantage:
Heavy traffic is created in Link State routing due to flooding, and flooding can cause
infinite looping. This problem can be solved by using the Time-to-Live field.
Shortest Path algorithm in Computer Network
Consider a network comprising N vertices (nodes or network devices) that are
connected by M edges (transmission lines). Each edge is associated with a weight
representing the physical distance or the transmission delay of the transmission line. The
target of a shortest path algorithm is to find a route between any pair of vertices along the
edges so that the sum of the weights of the edges is minimum. If the edges are of equal
weight, the shortest path algorithm aims to find a route having the minimum number of hops.
Common Shortest Path Algorithms
Some common shortest path algorithms are −
● Bellman-Ford Algorithm
● Dijkstra’s Algorithm
● Floyd-Warshall Algorithm
Bellman-Ford Algorithm
Input − A graph representing the network; and a source node, s
Output − An array, dist[], of shortest distances from s to every other node.
● Initialize dist[s] = 0 and dist[v] = ∞ for every other node v.
● Calculate the shortest distances iteratively. Repeat |V|− 1 times −
o Repeat for each edge connecting vertices u and v −
▪ If dist[u] + weight(u, v) < dist[v], then update dist[v] = dist[u] + weight(u, v).
● The array dist[] contains the shortest path from s to every other node.
Dijkstra’s Algorithm
Input − A graph representing the network; and a source node, s
Output − A shortest path tree, spt[], with s as the root node.
Initializations −
● An array, dist[], with dist[s] = 0 and dist[v] = ∞ for every other node v.
● An array, Q, containing all nodes in the graph. When the algorithm runs into
completion, Q will become empty.
● An empty set, S, to which the visited nodes will be added. When the algorithm runs
into completion, S will contain all the nodes in the graph.
Steps − Repeat while Q is not empty −
o Remove from Q the node, u, having the smallest dist[u] and which is not in S.
In the first run, dist[s] is removed.
o Add u to S, marking u as visited.
o For each node v which is adjacent to u, update dist[v] as −
▪ If dist[u] + weight(u, v) < dist[v], then dist[v] = dist[u] + weight(u, v).
● The array dist[] contains the shortest path from s to every other node.
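The steps above can be sketched in code, using a priority queue in place of repeatedly scanning the array Q for the smallest dist[u]. The example internetwork and its link costs are made up for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Least-cost paths from source; graph is {node: {neighbor: weight}}."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    visited = set()                         # the set S of finalized nodes
    pq = [(0, source)]                      # plays the role of the array Q
    while pq:
        d, u = heapq.heappop(pq)            # smallest dist[u] not yet in S
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u].items():       # relax each edge (u, v)
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# A small example internetwork with link costs
net = {"A": {"B": 2, "C": 5},
       "B": {"A": 2, "C": 1, "D": 4},
       "C": {"A": 5, "B": 1, "D": 1},
       "D": {"B": 4, "C": 1}}
print(dijkstra(net, "A"))   # least cost A -> D goes via B and C: 2 + 1 + 1
```

After the k-th node is added to the visited set, its least-cost path is final, matching the property stated above.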
Floyd-Warshall Algorithm
Input − A cost matrix, cost[][], where cost[i][j] is the cost of the direct link from node i to
node j (∞ if there is no direct link, 0 if i = j).
● Repeat for k = 1 to N −
o Repeat for i = 1 to N −
▪ Repeat for j = 1 to N −
▪ If cost[i][k] + cost[k][j] < cost[i][j], then update cost[i][j] = cost[i][k] + cost[k][j].
● The matrix cost[][] contains the shortest cost from each node, i, to every other
node, j.
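A sketch of these three nested loops over a small made-up cost matrix (∞ marks node pairs with no direct link):

```python
INF = float("inf")

def floyd_warshall(cost):
    """All-pairs least costs; cost is an N x N matrix with cost[i][i] == 0."""
    n = len(cost)
    dist = [row[:] for row in cost]                 # work on a copy
    for k in range(n):                              # allow node k as a via-point
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# 4-node network; INF marks node pairs with no direct link
cost = [[0,   3,   INF, 8],
        [3,   0,   2,   INF],
        [INF, 2,   0,   2],
        [8,   INF, 2,   0]]
result = floyd_warshall(cost)
print(result[0][3])   # best 0 -> 3 route is 0 -> 1 -> 2 -> 3 = 3 + 2 + 2
```

Unlike Dijkstra's algorithm, which solves one source at a time, this computes least costs between every pair of nodes in one pass.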
What is congestion?
Congestion is a state occurring in the network layer when the message traffic is so heavy
that it slows down network response time.
In the network layer, before the network can make Quality of service guarantees, it must
know what traffic is being guaranteed. One of the main causes of congestion is that traffic is
often bursty.
To understand this concept, we first have to know a little about traffic shaping. Traffic
shaping is a mechanism to control the amount and the rate of the traffic sent to the network;
this approach to congestion management helps regulate the rate of data transmission and
reduces congestion.
There are 2 types of traffic shaping algorithms:
1. Leaky Bucket
2. Token Bucket
Leaky Bucket Algorithm
Suppose we have a bucket into which water is being poured at a random rate, but we need
the water to come out at a fixed rate. For this, we make a hole at the bottom of the bucket:
it ensures that the water flows out at some fixed rate, and if the bucket is full, we stop
pouring into it.
The input rate can vary, but the output rate remains constant. Similarly, in networking, a
technique called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the
bucket and sent out at an average rate.
Assume that the network has committed a bandwidth of 3 Mbps for a host. The use of the
leaky bucket shapes the input traffic to make it conform to this commitment. Suppose the
host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data, is then
silent for 5 s, and then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data.
In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by
sending out data at a rate of 3 Mbps during the same 10 s.
Without the leaky bucket, the beginning burst may have hurt the network by consuming more
bandwidth than is set aside for this host. We can also see that the leaky bucket may prevent
congestion.
A simple leaky bucket algorithm can be implemented using FIFO queue. A FIFO queue holds
the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the
process removes a fixed number of packets from the queue at each tick of the clock. If the
traffic consists of variable-length packets, the fixed output rate must be based on the number
of bytes or bits.
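The byte-counting variant can be sketched as a short simulation (the packet sizes, drain rate, and bucket capacity below are made-up illustration values):

```python
from collections import deque

def leaky_bucket(arrivals, rate_bytes_per_tick, capacity):
    """Simulate a byte-counting leaky bucket for variable-length packets.

    arrivals: one list of packet sizes (bytes) per clock tick."""
    queue, sent_per_tick = deque(), []
    for tick_packets in arrivals:
        for size in tick_packets:                 # enqueue if room, else drop
            if sum(queue) + size <= capacity:
                queue.append(size)
        budget, sent = rate_bytes_per_tick, []
        while queue and queue[0] <= budget:       # drain at the fixed rate
            size = queue.popleft()
            budget -= size
            sent.append(size)
        sent_per_tick.append(sent)
    return sent_per_tick

# A 1000-byte burst arriving in tick 0 leaves at 400 bytes per tick
print(leaky_bucket([[400, 400, 200], [], []], 400, 2000))
```

The bursty arrival pattern goes in, and a smooth, rate-limited pattern comes out; packets that would overflow the finite queue are simply dropped.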
The following is an algorithm for variable-length packets:
● Step 1 − Initialize a counter to n at every tick of the clock.
● Step 2 − If n is greater than the size of the packet at the front of the queue, send the
packet and decrement the counter by the packet size. Repeat this step until n is
smaller than the packet size.
● Step 3 − Reset the counter and go to step 1.
Token Bucket Algorithm
The token bucket algorithm allows bursty traffic at a regulated maximum rate. Tokens are
generated at regular intervals and collected in a bucket, and a packet can only be sent by
consuming a token:
● Step 1 − At regular intervals, tokens are thrown into the bucket.
● Step 2 − The bucket has a maximum capacity; tokens arriving at a full bucket are
discarded.
● Step 3 − If a packet is ready, then a token is removed from the bucket, and the
packet is sent.
● Step 4 − If there is no token in the bucket, the packet cannot be sent.
Example
Let us understand the Token Bucket Algorithm with an example −
In Figure (a), the bucket holds two tokens, and three packets are waiting to be sent out of
the interface.
In Figure (b), two packets have been sent out by consuming two tokens, and one packet is
still left.
Compared to the leaky bucket, the token bucket algorithm is less restrictive: it allows more
traffic. The limit on burstiness is set by the number of tokens available in the bucket at a
particular instant of time.
The implementation of the token bucket algorithm is easy − a variable is used to count the
tokens. Every t seconds the counter is incremented, and it is decremented whenever a
packet is sent. When the counter reaches zero, no further packets are sent.
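That counter-based description can be sketched directly (the tick interval, rate, and capacity below are made-up illustration values):

```python
class TokenBucket:
    """Counter-based token bucket: +rate tokens per tick, capped at capacity."""
    def __init__(self, rate_per_tick, capacity):
        self.rate, self.capacity = rate_per_tick, capacity
        self.tokens = capacity                 # start with a full bucket

    def tick(self):
        # Tokens arriving at a full bucket overflow and are discarded
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, n_packets=1):
        """Send up to n_packets, one token each; return how many went out."""
        sent = min(n_packets, self.tokens)
        self.tokens -= sent
        return sent

tb = TokenBucket(rate_per_tick=1, capacity=3)
print(tb.try_send(5))   # burst of 5: only 3 tokens saved up, so 3 go out
tb.tick()
print(tb.try_send(5))   # one tick later, 1 new token: 1 more goes out
```

A host that stays idle accumulates tokens (up to the bucket capacity) and can later send a burst at full speed, which is exactly what makes the token bucket less restrictive than the leaky bucket.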
Leaky Bucket vs Token Bucket:
● Leaky bucket: When the host has to send a packet, the packet is thrown into the bucket.
Token bucket: The bucket holds tokens generated at regular intervals of time.
● Leaky bucket: Bursty traffic is converted into uniform traffic. Token bucket: If there is
a ready packet, a token is removed from the bucket and the packet is sent.
● Leaky bucket: In practice, the bucket is a finite queue that outputs at a finite rate.
Token bucket: If there is no token in the bucket, the packet cannot be sent.
● If the bucket is full in the token bucket, tokens are discarded, not packets; while in the
leaky bucket, packets are discarded.
● The token bucket can send large bursts at a faster rate, while the leaky bucket always
sends packets at a constant rate.
Congestion Prevention Policies
1. Retransmission Policy :
This policy governs the retransmission of packets. If the sender feels that a sent
packet is lost or corrupted, the packet needs to be retransmitted, and this
retransmission may increase the congestion in the network.
To prevent congestion, retransmission timers must be designed both to avoid
congestion and to optimize efficiency.
2. Window Policy :
The type of window at the sender's side may also affect congestion. In the
Go-Back-N window, several packets are re-sent even though some of them may have
been received successfully at the receiver side. This duplication may increase the
congestion in the network and make it worse.
Therefore, the Selective Repeat window should be adopted, as it resends only the
specific packet that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by routers is one in which the routers may prevent
congestion and, at the same time, partially discard corrupted or less sensitive packets
while maintaining the quality of the message.
In case of audio file transmission, routers can discard less sensitive packets to prevent
congestion and also maintain the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgements are also part of the load on the network, the
acknowledgement policy imposed by the receiver may also affect congestion. Several
approaches can be used to prevent congestion related to acknowledgements: the
receiver can send an acknowledgement for N packets rather than acknowledging each
packet individually, or send an acknowledgement only when it has a packet to send or
a timer expires.
5. Admission Policy :
In the admission policy, a mechanism should be used to prevent congestion. Switches
in a flow should first check the resource requirements of a network flow before
transmitting it further. If there is a chance of congestion, or if there is already
congestion in the network, the router should deny establishing a virtual-circuit
connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the network.
Closed-Loop Congestion Control
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from
its upstream node. This may cause the upstream node or nodes to become congested
and to reject data from the nodes above them. Backpressure is a node-to-node
congestion control technique that propagates in the opposite direction of the data
flow. The backpressure technique can be applied only to virtual circuits, where each
node has information about its upstream node.
For example, suppose the 3rd node on a path is congested and stops receiving
packets. As a result, the 2nd node may get congested due to the slowing down of the
output data flow. Similarly, the 1st node may get congested and inform the source to
slow down.
2. Implicit Signaling :
In implicit signaling, there is no communication between the congested node or
nodes and the source. The source guesses that there is congestion in the network; for
example, when a sender sends several packets and receives no acknowledgement for
a while, one assumption is that the network is congested.
3. Explicit Signaling :
In explicit signaling, if a node experiences congestion, it can explicitly send a packet
to the source or destination to inform it about the congestion. The difference between
the choke-packet technique and explicit signaling is that in explicit signaling the
signal is included in the packets that carry data, rather than being carried in a separate
packet as in the choke-packet technique.
Explicit signaling can occur in either forward or backward direction.
● Forward Signaling : In forward signaling, a signal is sent in the direction of the
congestion. The destination is warned about the congestion, and the receiver in this
case adopts policies to prevent further congestion.
● Backward Signaling : In backward signaling, a signal is sent in the opposite
direction of the congestion. The source is warned about congestion and it needs to
slow down.