
Computer Networks

UNIT-II
Data Link Layer and Medium Access Sub Layer
The data link layer is an interface between the network layer and the physical layer. It is
further subdivided into two modules:
Medium Access Control (MAC)
Logical Link Control (LLC)
The MAC module plays a critical role in conserving network lifetime by efficiently
allocating access to the shared medium among the contending nodes.
The LLC module sits on top of the MAC sublayer and is responsible for the cyclic redundancy
check (CRC), sequencing information, and the addition of appropriate source and
destination information.
The data link layer is also responsible for multiplexing data streams and for data
frame detection. It therefore has two broad tasks: first, to create the network
infrastructure, which includes establishing communication links between possibly
thousands of nodes and giving the network self-organizing capabilities; second, to
share the communication resources fairly and efficiently between all the nodes.

The primary responsibility of this layer is to provide error-free transmission of data
between two remote hosts (computers) attached to the same physical cable. At the
source machine it receives data from the network layer, groups it into frames, and
sends the frames to the destination machine. At the destination, the data link layer
receives the frames, computes a checksum to make sure the frames received are
identical to those sent, and eventually passes the data to the network layer.

Although the actual transmission is end-to-end, it is easier to think of the two
data link layers as communicating using a data link protocol over a virtual
data path. The actual data path follows the route (source machine: network layer -
data link layer - physical layer - cable --> destination: cable - physical layer -
data link layer - network layer) as shown in the figure below.

There are three basic services that the data link layer commonly provides:

i. Unacknowledged connectionless service.
ii. Acknowledged connectionless service.
iii. Acknowledged connection-oriented service.

In the first case, frames are sent independently to the destination without the destination
machine acknowledging them. If a frame is lost, no attempt is made by the data
link layer to recover it. Unacknowledged connectionless service is useful when the
error rate is very low and recovery of lost frames is handled by higher layers in the
network hierarchy. LANs also find this service appropriate for real-time traffic such
as speech, in which late data is worse than bad data. You may have personally
experienced this in a computer-to-computer voice conversation: it is much better for
the data to arrive on time but slightly distorted than to arrive in better quality after
a two-second delay.

The second case is a more reliable service in which every single frame, as soon as it
arrives at the destination machine, is individually acknowledged. In this way the sender
knows whether or not the frame arrived safely at the destination. Acknowledged
connectionless service is useful on unreliable channels such as wireless systems.

Finally, we have acknowledged connection-oriented service. The source and
destination machines establish a connection before any data are transferred. Each
frame sent is numbered, as if it had a specific "ID", and the data link layer guarantees
that each frame sent is indeed received by the other end exactly once and in the right
order. This is the most sophisticated service the data link layer can provide to the
network layer.

Framing
Framing is a technique performed by the data link layer. In the source machine, the data
link layer receives a bit stream of data from the network layer. It breaks the bit stream
into discrete frames and computes a checksum for each. The frame is then sent to the
destination machine, where the checksum is recomputed. If it differs from the one
contained in the frame, an error has occurred, and the data link layer discards the
frame and sends an error report.

There are many methods of breaking a bit stream into frames, but we will
concentrate on only two of them. This procedure may appear easy, but it is in fact
quite delicate, because the receiving end can have difficulty distinguishing among
the frames that were sent.

The first method is called character stuffing. A specific sequence of characters
represents the start and the end of each frame. The start is represented with
DLE STX and the end with DLE ETX (DLE stands for Data Link Escape, STX for
Start of TeXt and ETX for End of TeXt). So if the destination loses track of the frame
boundaries, it looks for this sequence of characters to figure out where it is. The
problem with this approach is that these character patterns might occur within the data
sequence. To overcome this problem, the sender's data link layer inserts an
ASCII DLE character just before each "accidental" DLE character in the data. In the
receiving machine, the data link layer removes this stuffed character and passes the
original data to the network layer. The technique is shown graphically in the figure
below.
The first sequence (a) shows the original data passed by the network layer to the data link
layer. Case (b) shows the data after stuffing, and case (c) shows the data
passed to the network layer on the receiving machine.
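The stuffing and destuffing steps can be sketched in a few lines of Python. This is a minimal illustration under assumed helper names, not a real protocol implementation; the DLE, STX and ETX values are the standard ASCII control codes.

```python
DLE, STX, ETX = 0x10, 0x02, 0x03   # ASCII control codes

def char_stuff(payload: bytes) -> bytes:
    """Frame the payload as DLE STX <data> DLE ETX, doubling any
    accidental DLE byte inside the data."""
    body = bytearray()
    for byte in payload:
        if byte == DLE:
            body.append(DLE)          # stuff an extra DLE before it
        body.append(byte)
    return bytes([DLE, STX]) + bytes(body) + bytes([DLE, ETX])

def char_destuff(frame: bytes) -> bytes:
    """Strip the framing characters and collapse doubled DLEs back to one."""
    body = frame[2:-2]                # drop DLE STX ... DLE ETX
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == DLE and i + 1 < len(body) and body[i + 1] == DLE:
            out.append(DLE)           # doubled DLE -> single DLE
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)
```

For example, a payload containing an accidental DLE (0x10) comes out of `char_stuff` with that byte doubled, and `char_destuff` recovers the original payload.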

Another technique used for framing is called bit stuffing. It is analogous to character
stuffing, but instead of ASCII characters it adds bits to a bit stream of data. The
beginning and end of a frame contain a special pattern of 01111110 called a flag
byte. Therefore, whenever the data being transmitted contains five consecutive 1s, a 0 is
inserted after them so that the data is not interpreted as a frame delimiter. On
the receiving end, the stuffed bits are discarded, in the same way as in the character
stuffing technique explained before, and the data is passed to the network layer. A
demonstration of this technique is shown in the diagram below:

(a) The original bit stream.

(b) The data after stuffing in the source machine's data link layer.
Whenever it encounters five consecutive 1s in the data, it automatically stuffs a 0 bit
into the stream.

(c) The data after destuffing by the receiver's data link layer.
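Steps (b) and (c) can be sketched in Python, treating the bit stream as a string of "0"/"1" characters. This is an illustrative sketch; the function names are assumptions, not a standard API.

```python
FLAG = "01111110"  # the HDLC-style frame delimiter

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)
```

Because a stuffed 0 follows every fifth consecutive 1, the pattern 01111110 can never appear inside the stuffed payload, so the flag unambiguously marks frame boundaries.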

Error Detection and Correction


The data link layer uses error control techniques to ensure that frames, i.e. bit streams
of data, are transmitted from the source to the destination with a certain degree of
accuracy.
Errors
When bits are transmitted over a computer network, they can get corrupted due to
interference and network problems. Corrupted bits lead to spurious data being
received by the destination and are called errors.
Types of Errors
Errors can be of three types, namely single bit errors, multiple bit errors, and burst
errors.
 Single bit error − In the received frame, only one bit has been corrupted,
i.e. either changed from 0 to 1 or from 1 to 0.

 Multiple bits error − In the received frame, more than one bit has been
corrupted.
 Burst error − In the received frame, two or more consecutive bits have been
corrupted.

Error Control
Error control can be done in two ways
 Error detection − Error detection involves checking whether any error has
occurred or not. The number of error bits and the type of error does not
matter.
 Error correction − Error correction involves ascertaining the exact number
of bits that have been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some
additional bits along with the data bits. The receiver performs necessary checks
based upon the additional redundant bits. If it finds that the data is free from errors,
it removes the redundant bits before passing the message to the upper layers.
Error Detection Techniques
There are three main techniques for detecting errors in frames: Parity Check,
Checksum and Cyclic Redundancy Check (CRC).
Parity Check
The parity check is done by adding an extra bit, called the parity bit, to the data to
make the number of 1s either even (in case of even parity) or odd (in case of odd parity).
While creating a frame, the sender counts the number of 1s in it and adds the parity
bit in the following way:
 In case of even parity: If the number of 1s is even then the parity bit value is 0. If
the number of 1s is odd then the parity bit value is 1.
 In case of odd parity: If the number of 1s is odd then the parity bit value is 0. If the
number of 1s is even then the parity bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it. In case of even
parity check, if the count of 1s is even, the frame is accepted, otherwise, it is
rejected. A similar rule is adopted for odd parity check.
The parity check is suitable for single bit error detection only.
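The sender and receiver rules above can be sketched in Python. The function names are illustrative only; bit strings are represented as strings of "0"/"1".

```python
def add_parity(bits: str, even: bool = True) -> str:
    """Append a parity bit making the total number of 1s even (or odd)."""
    ones = bits.count("1")
    parity = "0" if (ones % 2 == 0) == even else "1"
    return bits + parity

def parity_ok(frame: str, even: bool = True) -> bool:
    """Receiver check: count the 1s in the whole frame (data + parity bit)."""
    return (frame.count("1") % 2 == 0) == even
```

Note that flipping any two bits of a frame leaves the count's parity unchanged, which is exactly why a parity check detects single-bit errors only.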
Checksum
In this error detection scheme, the following procedure is applied
 Data is divided into fixed-size frames or segments.
 The sender adds the segments using 1’s complement arithmetic to get the
sum. It then complements the sum to get the checksum and sends it along
with the data frames.
 The receiver adds the incoming segments along with the checksum using 1’s
complement arithmetic to get the sum and then complements it.
 If the result is zero, the received frames are accepted; otherwise, they are
discarded.
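The procedure can be sketched in Python. This is a minimal illustration using 8-bit segments (the Internet checksum uses 16-bit words); the function names are assumptions.

```python
def ones_complement_sum(words, bits=8):
    """Add words with end-around carry (1's complement arithmetic)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        while total >> bits:                     # wrap carries back in
            total = (total & mask) + (total >> bits)
    return total

def make_checksum(segments, bits=8):
    """Sender: complement of the 1's-complement sum of the segments."""
    return ones_complement_sum(segments, bits) ^ ((1 << bits) - 1)

def checksum_ok(segments, checksum, bits=8):
    """Receiver: segments + checksum must sum to all 1s, whose
    complement is zero, so the frames are accepted."""
    return ones_complement_sum(segments + [checksum], bits) == (1 << bits) - 1
```

For segments 0x34, 0x67 and 0xAB the wrapped sum is 0x47, so the transmitted checksum is its complement 0xB8; at the receiver the total comes to 0xFF (all 1s) and the data is accepted.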
Cyclic Redundancy Check (CRC)
Cyclic Redundancy Check (CRC) involves binary division of the data bits being
sent by a predetermined divisor agreed upon by the communicating systems. The
divisor is generated using polynomials.
 Here, the sender performs binary division of the data segment by the divisor.
It then appends the remainder called CRC bits to the end of the data
segment. This makes the resulting data unit exactly divisible by the divisor.
 The receiver divides the incoming data unit by the divisor. If there is no
remainder, the data unit is assumed to be correct and is accepted. Otherwise,
it is understood that the data is corrupted and is therefore rejected.
Error Correction Techniques
Error correction techniques find out the exact number of bits that have been
corrupted and as well as their locations. There are two principle ways

 Backward Error Correction (Retransmission) − If the receiver detects an
error in the incoming frame, it requests the sender to retransmit the frame. It
is a relatively simple technique. But it can be used efficiently only where
retransmission is inexpensive, as in fiber optics, and the time for
retransmission is low relative to the requirements of the application.
 Forward Error Correction − If the receiver detects some error in the
incoming frame, it executes error-correcting code that generates the actual
frame. This saves bandwidth required for retransmission. It is inevitable in
real-time systems. However, if there are too many errors, the frames need to
be retransmitted.
The four main error correction codes are

 Hamming Codes
 Binary Convolution Code
 Reed – Solomon Code
 Low-Density Parity-Check Code

Hamming Code
Hamming code is a set of error-correction codes that can be used to detect and correct
the errors that can occur when data is moved or stored from the sender to the
receiver. It is a technique developed by R.W. Hamming for error correction.
Redundant bits –

Redundant bits are extra binary bits that are generated and added to the information-
carrying bits of data transfer to ensure that no bits were lost during the data transfer.
The number of redundant bits can be calculated using the following formula:
2^r ≥ m + r + 1
where, r = redundant bit, m = data bit
Suppose the number of data bits is 7; then the number of redundant bits can be calculated
using:
2^4 ≥ 7 + 4 + 1 (i.e. 16 ≥ 12)
Thus, the number of redundant bits = 4.
Parity bits –
A parity bit is a bit appended to a group of binary bits to make the total number of 1’s
in the data either even or odd. Parity bits are used for error detection. There are two types
of parity bits:
1. Even parity bit:
In the case of even parity, for a given set of bits, the number of 1’s is counted. If that
count is odd, the parity bit value is set to 1, making the total count of occurrences of
1’s an even number. If the total number of 1’s in a given set of bits is already even,
the parity bit’s value is 0.
2. Odd parity bit:
In the case of odd parity, for a given set of bits, the number of 1’s is counted. If that
count is even, the parity bit value is set to 1, making the total count of occurrences of
1’s an odd number. If the total number of 1’s in a given set of bits is already odd, the
parity bit’s value is 0.

General Algorithm of Hamming code –

The Hamming Code is simply the use of extra parity bits to allow the identification of an
error.
1. Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc).
2. All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc).
3. All the other bit positions are marked as data bits.
4. Each data bit is included in a unique set of parity bits, as determined by its bit position in
binary form.
a. Parity bit 1 covers all the bit positions whose binary representation includes a 1 in
the least significant position (1, 3, 5, 7, 9, 11, etc).

b. Parity bit 2 covers all the bit positions whose binary representation includes a 1 in
the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc).

c. Parity bit 4 covers all the bit positions whose binary representation includes a 1 in
the third position from the least significant bit (4–7, 12–15, 20–23, etc).

d. Parity bit 8 covers all the bit positions whose binary representation includes a 1 in
the fourth position from the least significant bit (8–15, 24–31, 40–47, etc).

e. In general, each parity bit covers all bits where the bitwise AND of the parity
position and the bit position is non-zero.
5. Since we check for even parity, set a parity bit to 1 if the total number of ones in the
positions it checks is odd.
6. Set a parity bit to 0 if the total number of ones in the positions it checks is even.
Determining the position of redundant bits –

These redundancy bits are placed at the positions which correspond to the power of 2.
As in the above example:

1. The number of data bits = 7


2. The number of redundant bits = 4
3. The total number of bits = 11
4. The redundant bits are placed at positions corresponding to power of 2- 1, 2, 4, and 8

Suppose the data to be transmitted is 1011001, the bits will be placed as follows:

Determining the Parity bits –


1. The R1 bit is calculated using a parity check over all the bit positions whose binary
representation includes a 1 in the least significant position.
R1: bits 1, 3, 5, 7, 9, 11
To find the redundant bit R1, we check for even parity. Since the total number of 1’s
in all the bit positions corresponding to R1 is an even number, the value of R1 (parity
bit’s value) = 0.
2. The R2 bit is calculated using a parity check over all the bit positions whose binary
representation includes a 1 in the second position from the least significant bit.
R2: bits 2, 3, 6, 7, 10, 11
To find the redundant bit R2, we check for even parity. Since the total number of 1’s
in all the bit positions corresponding to R2 is odd, the value of R2 (parity bit’s
value) = 1.
3. The R4 bit is calculated using a parity check over all the bit positions whose binary
representation includes a 1 in the third position from the least significant bit.
R4: bits 4, 5, 6, 7
To find the redundant bit R4, we check for even parity. Since the total number of 1’s
in all the bit positions corresponding to R4 is odd, the value of R4 (parity bit’s
value) = 1.
4. The R8 bit is calculated using a parity check over all the bit positions whose binary
representation includes a 1 in the fourth position from the least significant bit.
R8: bits 8, 9, 10, 11
To find the redundant bit R8, we check for even parity. Since the total number of 1’s
in all the bit positions corresponding to R8 is an even number, the value of R8 (parity
bit’s value) = 0.

Thus, the data transferred is 10101001110 (read from bit position 11 down to 1).

Error detection and correction –


Suppose in the above example the 6th bit is changed from 0 to 1 during data
transmission, then it gives new parity values in the binary number:


The parity bits give the binary number 0110, whose decimal value is 6. Thus, bit 6
contains an error. To correct the error, the 6th bit is changed from 1 to 0.
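The whole procedure — placing parity bits at the power-of-2 positions, computing them with even parity, and locating an error from the recomputed parity values — can be sketched in Python. The function names are illustrative; data bits are supplied in increasing position order (i.e. the rightmost data bit first).

```python
def hamming_encode(data_bits):
    """Place data bits at non-power-of-2 positions and fill the parity
    positions (1, 2, 4, 8, ...) using even parity."""
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:      # smallest r with 2^r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)             # 1-indexed for convenience
    j = 0
    for pos in range(1, n + 1):
        if pos & (pos - 1):          # not a power of 2 -> data position
            code[pos] = data_bits[j]
            j += 1
    p = 1
    while p <= n:
        # parity bit p covers every position whose AND with p is non-zero
        ones = sum(code[i] for i in range(1, n + 1) if i & p)
        code[p] = ones % 2
        p <<= 1
    return code[1:]                  # positions 1..n

def hamming_syndrome(codeword):
    """Recompute the parities; the result is the position of a single-bit
    error (0 means no error detected)."""
    n = len(codeword)
    syndrome, p = 0, 1
    while p <= n:
        ones = sum(codeword[i - 1] for i in range(1, n + 1) if i & p)
        if ones % 2:
            syndrome |= p
        p <<= 1
    return syndrome
```

Running this on the example data 1011001 reproduces R1 = 0, R2 = 1, R4 = 1, R8 = 0, and corrupting position 6 of the codeword yields syndrome 6, exactly as in the worked example above.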
Block Coding
In block coding, we divide our message into blocks, each of k bits, called data
words. We add r redundant bits to each block to make the length n = k + r. The
resulting n-bit blocks are called code words.
For example, suppose we have a set of data words, each of size k, and a set of code words,
each of size n. With k bits, we can create 2^k possible data words; with n bits, we
can create 2^n possible code words. Since n > k, the number of possible code
words is larger than the number of possible data words.

The block coding process is one-to-one; the same data word is always encoded as
the same code word. This means that we have 2^n − 2^k code words that are not used.
We call these code words invalid or illegal. The following figure shows the
situation.

Error Detection
If the following two conditions are met, the receiver can detect a change in the
original code word by using Block coding technique.

1. The receiver has (or can find) a list of valid code words.

2. The original code word has changed to an invalid one.

The sender creates code words out of data words by using a generator that applies
the rules and procedures of encoding (discussed later). Each code word sent to the
receiver may change during transmission. If the received code word is the same as
one of the valid code words, the word is accepted; the corresponding data word is
extracted for use.
If the received code word is not valid, it is discarded. However, if the code word is
corrupted during transmission but the received word still matches a valid code
word, the error remains undetected. This type of coding can detect only single
errors. Two or more errors may remain undetected.

For example, consider the following table of data words and code words (each code
word appends an even-parity bit to its data word):

Data word    Code word
00           000
01           011
10           101
11           110
Assume the sender encodes the data word 01 as 011 and sends it to the receiver.
Consider the following cases:

1. The receiver receives 011. It is a valid code word. The receiver extracts the data
word 01 from it.

2. The code word is corrupted during transmission, and 111 is received (the
leftmost bit is corrupted). This is not a valid code word and is discarded.

3. The code word is corrupted during transmission, and 000 is received (the right
two bits are corrupted). This is a valid code word. The receiver incorrectly extracts
the data word 00. Two corrupted bits have made the error undetectable.

Error Correction:
Error correction is much more difficult than error detection. In error detection, the
receiver needs to know only that the received code word is invalid, in error
correction the receiver needs to find (or guess) the original code word sent. So, we
need more redundant bits for error correction than for error detection.

Consider the following table of data words and code words:

Data word    Code word
00           00000
01           01011
10           10101
11           11110


Assume the data word is 01. The sender consults the table (or uses an algorithm) to
create the code word 01011. The code word is corrupted during transmission, and
01001 is received (error in the second bit from the right). First, the receiver finds
that the received code word is not in the table. This means an error has occurred.
(Detection must come before correction.) The receiver, assuming that there is only
1 bit corrupted, uses the following strategy to guess the correct data word.

1. Comparing the received code word with the first code word in the table (01001
versus 00000), the receiver decides that the first code word is not the one that was
sent because there are two different bits.

2. By the same reasoning, the original code word cannot be the third or fourth one
in the table.

3. The original code word must be the second one in the table because this is the
only one that differs from the received code word by 1 bit. The receiver replaces
01001 with 01011 and consults the table to find the data word 01.
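This guessing strategy is minimum-distance decoding, and it can be sketched in Python. The codebook below assumes the data word/code word pairs of this example (00→00000, 01→01011, 10→10101, 11→11110).

```python
# Assumed code word table from the example above.
CODEBOOK = {"00": "00000", "01": "01011", "10": "10101", "11": "11110"}

def distance(a: str, b: str) -> int:
    """Number of positions where two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received: str) -> str:
    """Minimum-distance decoding: choose the data word whose code word
    differs from the received word in the fewest positions. The guess
    is correct only if at most one bit was corrupted."""
    return min(CODEBOOK, key=lambda dw: distance(CODEBOOK[dw], received))
```

For the received word 01001, the distances to the four code words are 2, 1, 3 and 4, so the decoder picks 01011 and outputs the data word 01, matching the reasoning above.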

Hamming Distance
Hamming distance is a metric for comparing two binary data strings. While
comparing two binary strings of equal length, Hamming distance is the number of
bit positions in which the two bits are different.
The Hamming distance between two strings, a and b is denoted as d(a,b).
It is used for error detection and error correction when data is transmitted over
computer networks. It is also used in coding theory for comparing equal-length
data words.
Calculation of Hamming Distance
In order to calculate the Hamming distance between two strings a and b, we perform
their XOR operation, (a ⊕ b), and then count the total number of 1s in the resultant
string.
Example
Suppose there are two strings 1101 1001 and 1001 1101.
11011001 ⊕ 10011101 = 01000100. Since, this contains two 1s, the Hamming
distance, d(11011001, 10011101) = 2.
Minimum Hamming Distance
In a set of strings of equal lengths, the minimum Hamming distance is the smallest
Hamming distance between all possible pairs of strings in that set.
Example
Suppose there are four strings 010, 011, 101 and 111.
010 ⊕ 011 = 001, d(010, 011) = 1.
010 ⊕ 101 = 111, d(010, 101) = 3.
010 ⊕ 111 = 101, d(010, 111) = 2.
011 ⊕ 101 = 110, d(011, 101) = 2.
011 ⊕ 111 = 100, d(011, 111) = 1.
101 ⊕ 111 = 010, d(101, 111) = 1.
Hence, the Minimum Hamming Distance, dmin = 1.
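Both calculations can be sketched in Python; the function names are illustrative.

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Count the positions where two equal-length bit strings differ
    (equivalently, the number of 1s in a XOR b)."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def min_hamming_distance(codewords):
    """Smallest Hamming distance over all pairs of strings in the set."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
```

Applying these to the two examples above gives d(11011001, 10011101) = 2 and dmin = 1 for the set {010, 011, 101, 111}.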

Cyclic Redundancy Checks (CRC)


The Cyclic Redundancy Check (CRC) is one of the most powerful methods for error
detection. Given a k-bit message, the transmitter creates an (n – k)-bit sequence
called the frame check sequence (FCS). The outgoing frame, consisting of n bits, is
precisely divisible by some predetermined number. Modulo-2 arithmetic is used:
binary addition with no carries, which is just the XOR operation.
Redundancy means duplication. The redundant bits used by CRC are derived by
dividing the data unit by a fixed divisor. The remainder is the CRC.
Qualities of CRC
 It should have exactly one bit less than the divisor.
 Appending it to the end of the data unit should make the resulting bit sequence
precisely divisible by the divisor.
CRC generator and checker

Process
 A string of n 0s is appended to the data unit, where n is one less than the
number of bits in the fixed divisor.
 The new data unit is divided by the divisor using binary (modulo-2)
division; the remainder resulting from the division is the CRC.
 The CRC of n bits obtained in step 2 replaces the appended 0s at the end of
the data unit.
Example:
Message D = 1010001101 (10 bits)
Predetermined divisor P = 110101 (6 bits)
FCS R = to be calculated (5 bits)
Hence, n = 15, k = 10 and (n – k) = 5.
The message is multiplied by 2^5 (five 0s are appended), giving 101000110100000.
This product is divided by P; the remainder is R = 01110.
The remainder is added to 2^5·D to give T = 101000110101110, which is sent.
Suppose that there are no errors and the receiver gets T intact. The received frame
is divided by P.
Because there is no remainder, it is assumed that there have been no errors.
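The modulo-2 long division used in this example can be sketched in Python, with bit strings of "0"/"1" characters. The function names are illustrative only.

```python
def crc_remainder(data: str, divisor: str) -> str:
    """Modulo-2 long division of data * 2^(len(divisor)-1) by the divisor;
    the remainder is the FCS."""
    n = len(divisor) - 1
    buf = list(data + "0" * n)               # append n zeros
    for i in range(len(data)):
        if buf[i] == "1":                    # XOR the divisor in at this offset
            for j, d in enumerate(divisor):
                buf[i + j] = str(int(buf[i + j]) ^ int(d))
    return "".join(buf[-n:])

def crc_check(frame: str, divisor: str) -> bool:
    """Receiver side: the whole frame (data + FCS) must leave remainder 0."""
    buf = list(frame)
    for i in range(len(frame) - len(divisor) + 1):
        if buf[i] == "1":
            for j, d in enumerate(divisor):
                buf[i + j] = str(int(buf[i + j]) ^ int(d))
    return all(b == "0" for b in buf)
```

Running this on the example reproduces R = 01110 for D = 1010001101 and P = 110101, and `crc_check` accepts the transmitted frame T = 101000110101110.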

Flow control and Error control


Flow control and error control are the two main responsibilities of the data link
layer. Let us understand what these two terms specify. For the node-to-node
delivery of the data, flow and error control are done at the data link layer.

Flow control coordinates the amount of data that can be sent before receiving an
acknowledgment from the receiver, and it is one of the major duties of the data
link layer.

 For most protocols, flow control is a set of procedures that tells the sender
how much data it can send before it must wait for an acknowledgment from
the receiver.
 The data flow must not be allowed to overwhelm the receiver, because any
receiving device has a limited speed at which it can process the incoming
data and a limited amount of memory in which to store it.
 The processing rate is slower than the transmission rate; for this reason
each receiving device has a block of memory, commonly known as a
buffer, that is used to store incoming data until the data can be processed.
If the buffer begins to fill up, the receiver must be able to tell the sender to
halt transmission until the receiver is once again able to receive.
 Thus flow control makes the sender wait for an acknowledgment from the
receiver before continuing to send more data.
 Some common flow control techniques are Stop-and-Wait and the sliding
window technique.
 Error control comprises both error detection and error correction. It allows
the receiver to inform the sender about any damaged or lost frames during
the transmission and coordinates the retransmission of those frames by the
sender.
 The term error control in the data link layer mainly refers to the methods of
error detection and retransmission. Error control is implemented in a
simple way: whenever an error is detected during the exchange, the
specified frames are retransmitted. This process is also referred to as
Automatic Repeat reQuest (ARQ).
Protocols

 Protocols are implemented in software using one of the common
programming languages. Protocols can be classified on the basis of where
they are used.
 Protocols can be designed for noiseless channels (that is, error-free) and for
noisy channels (that is, error-creating). The protocols for noiseless
channels cannot be used in real life; they mainly serve as the basis for the
protocols used for noisy channels.

 These protocols are unidirectional in the sense that the data frames travel
from one node, the sender, to the other node, the receiver.
 The special frames called acknowledgment (ACK) and negative
acknowledgment (NAK) can flow in the opposite direction for flow and
error control purposes, while the data flows in only one direction.
 In real-life networks, however, the data link layer protocols are
implemented as bidirectional, which means data flows in both directions.
In these protocols, flow control and error control information such as
ACKs and NAKs is included in the data frames, a technique commonly
known as piggybacking.
 Bidirectional protocols are more complex than unidirectional protocols.
Stop and Wait ARQ
Characteristics

 Used in connection-oriented communication.
 It offers both error control and flow control.
 It is used in the Data Link and Transport layers.
 Stop-and-Wait ARQ implements the sliding window protocol concept with
a window size of 1.

Useful Terms:

 Propagation Delay: Amount of time taken by a packet to make a physical


journey from one router to another router.
Propagation Delay = (Distance between routers) / (Velocity of propagation)
 RoundTripTime (RTT) = 2* Propagation Delay
 TimeOut (TO) = 2* RTT
 Time To Live (TTL) = 2* TimeOut. (Maximum TTL is 180 seconds)

Sender:
Rule 1) Send one data packet at a time.
Rule 2) Send the next packet only after receiving acknowledgement for the
previous.

Receiver:
Rule 1) Send an acknowledgement after receiving and consuming a data packet.
Rule 2) After consuming a packet, an acknowledgement must be sent (flow control).
Problems :
1. Lost Data

2. Lost Acknowledgement:

3. Delayed Acknowledgement/Data: After a timeout on the sender side, a long-
delayed acknowledgement might be wrongly considered as the acknowledgement
of some other recently sent packet.

Stop and Wait ARQ (Automatic Repeat Request)

The above 3 problems are resolved by Stop-and-Wait ARQ, which provides both
error control and flow control.
1. Time Out:

2. Sequence Number (Data)

3. Delayed Acknowledgement:
This is resolved by introducing sequence numbers for acknowledgement also.

Working of Stop and Wait ARQ:

1) Sender A sends a data frame or packet with sequence number 0.


2) Receiver B, after receiving the data frame, sends an acknowledgement with
sequence number 1 (the sequence number of the next expected data frame or
packet)
There is only a one-bit sequence number that implies that both sender and receiver
have a buffer for one frame or packet only.

Characteristics of Stop and Wait ARQ:

 It uses the link between sender and receiver as a half-duplex link.
 Throughput = 1 data packet/frame per RTT.
 If the bandwidth-delay product is very high, then the stop-and-wait
protocol is not very useful: the sender has to keep waiting for
acknowledgements before sending the next packet.
 It is an example of "closed-loop" or connection-oriented protocols.
 It is a special case of the sliding window protocol where the window size is 1.
 Irrespective of the number of packets the sender has, stop-and-wait
requires only 2 sequence numbers, 0 and 1.

Stop-and-Wait ARQ solves the main three problems but may cause big
performance issues, as the sender always waits for an acknowledgement even if it
has the next packet ready to send. Consider a situation where you have a high-
bandwidth connection and the propagation delay is also high (you are connected to
some server in another country through a high-speed connection). To solve this
problem, we can send more than one packet at a time, with a larger sequence
number space.
So Stop-and-Wait ARQ may work fine where propagation delay is very small, for
example on LAN connections, but it performs badly for long-distance connections
such as satellite links.
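The throughput limitation can be made concrete with a short calculation. With Tt the transmission time and Tp the propagation delay, link utilization is Tt / (Tt + 2·Tp) = 1 / (1 + 2a), where a = Tp/Tt. The sketch below uses illustrative numbers and an assumed propagation speed of 2×10^8 m/s.

```python
def stop_and_wait_efficiency(bandwidth_bps, frame_bits, distance_m, v_mps=2e8):
    """Utilization = Tt / (Tt + 2*Tp) = 1 / (1 + 2a), since exactly one
    frame is outstanding per round trip."""
    tt = frame_bits / bandwidth_bps      # transmission time Tt
    tp = distance_m / v_mps              # propagation delay Tp
    a = tp / tt
    return 1 / (1 + 2 * a)
```

For a 1 Mbps link, 1000-bit frames and a 2000 km cable this gives 1/21, i.e. under 5% utilization, which is exactly why Stop-and-Wait is a poor fit for links with a high bandwidth-delay product.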

Sliding Window Protocol


It is actually a theoretical concept in which we have only talked about what the
sender window size should be (1 + 2a) in order to increase the efficiency of Stop-
and-Wait ARQ. Now we will talk about the practical implementations, in which
we take care of what the size of the receiver window should be. Practically it is
implemented in two protocols, namely:
1. Go Back N (GBN)
2. Selective Repeat (SR)

Go Back N (GBN) Protocol


The three main characteristic features of GBN are:
1. Sender Window Size (WS)
It is N itself. If we say the protocol is GB10, then WS = 10. N should always be
greater than 1 in order to implement pipelining. For N = 1, it reduces to the
Stop-and-Wait protocol.

2. Efficiency Of GBN = N/(1+2a)


where a = Tp/Tt
If B is the bandwidth of the channel, then
Effective Bandwidth or Throughput
= Efficiency * Bandwidth
= (N/(1+2a)) * B
3. Receiver Window Size (WR)
WR is always 1 in GBN.
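The efficiency and throughput formulas above can be turned into a small calculator. A sketch (function names and sample values are ours, for illustration):

```python
def gbn_efficiency(n, t_prop, t_trans):
    """Efficiency of Go Back N = N / (1 + 2a), where a = Tp / Tt.
    Capped at 1: once N >= 1 + 2a, the pipe is already kept full."""
    a = t_prop / t_trans
    return min(1.0, n / (1 + 2 * a))

def gbn_throughput(n, t_prop, t_trans, bandwidth):
    """Effective bandwidth (throughput) = efficiency * bandwidth."""
    return gbn_efficiency(n, t_prop, t_trans) * bandwidth

# Illustrative: a = 4, so 1 + 2a = 9; a window of 3 fills a third of the pipe
print(gbn_efficiency(3, t_prop=4.0, t_trans=1.0))        # 1/3
print(gbn_throughput(3, 4.0, 1.0, bandwidth=9_000_000))  # ~3_000_000 bits/s
```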

Now what exactly happens in GBN we will explain with the help of an example.
Consider the diagram given below. We have a sender window size of 4. Assume
that we have plenty of sequence numbers, just for the sake of explanation.
The sender has sent packets 0, 1, 2 and 3. After acknowledging packets 0 and
1, the receiver is now expecting packet 2, and the sender window has also
slid forward so that packets 4 and 5 can be transmitted. Now suppose packet 2
is lost in the network. The receiver will discard all the packets which the
sender transmitted after packet 2, as it is expecting sequence number 2. On
the sender side, for every packet sent there is a time-out timer, which will
expire for packet number 2. From the last transmitted packet, 5, the sender
will go back to packet number 2 in the current window and retransmit all the
packets up to packet number 5. That is why it is called Go Back N: “go back”
means the sender has to go back N places from the last transmitted packet in
the unacknowledged window, not from the point where the packet was lost.

4. Acknowledgements
There are 2 kinds of acknowledgements namely:
 Cumulative ACK: One acknowledgement is used for many packets. The main
advantage is reduced traffic. A disadvantage is lower reliability: if that
single ACK is lost, it is as if the acknowledgements for all the packets it
covers were lost.
 Independent ACK: Every packet gets its own acknowledgement. Reliability is
higher here, but a disadvantage is that traffic is also higher, since we
receive a separate ACK for every packet.
GBN uses cumulative acknowledgement. At the receiver side, a fixed
acknowledgement timer is started whenever the receiver receives a packet;
when it expires, the receiver sends a cumulative ACK for the packets received
in that timer interval. If the receiver has received N packets, then the
acknowledgement number will be N+1. An important point: the acknowledgement
timer does not restart immediately after the first timer expires, but only
after the receiver has received another packet.
The time-out timer at the sender side should be greater than the
acknowledgement timer.
Relationship Between Window Sizes and Sequence Numbers
We already know that the number of sequence numbers required should always be
at least equal to the size of the window in any sliding window protocol.
Minimum sequence numbers required in GBN = N + 1
Bits required in GBN = ceil(log2(N + 1))

The extra 1 is required in order to avoid the problem of duplicate packets,
as described below.
Example: Consider an example of GB4.
 Sender window size is 4 therefore we require a minimum of 4 sequence
numbers to label each packet in the window.
 Now suppose the receiver has received all the packets (0, 1, 2 and 3 sent
by the sender) and hence is now waiting for packet number 0 again (we cannot
use 4 here as we have only 4 sequence numbers available since N = 4).
 Now suppose the cumulative ACK for the above 4 packets is lost in the
network.
 On the sender side, there will be a timeout for packet 0, and hence all 4
packets will be transmitted again.
 The problem now is that the receiver is waiting for a new set of packets,
which should have started from 0, but instead it receives duplicate copies
of the previously accepted packets.
 In order to avoid this, we need one extra sequence number.
 With it, the receiver can easily reject all the duplicate packets starting
from 0, because it will now be waiting for packet number 4 (we have added an
extra sequence number).
This is explained with the help of the illustrations below.
First, trying with 4 sequence numbers.
Then, trying with one extra sequence number.

Now it is clear why we need one extra sequence number in the GBN protocol.
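The duplicate-packet scenario above can be checked mechanically. A tiny sketch (the function name and scenario encoding are ours): with only 4 sequence numbers the retransmitted packet 0 matches exactly what the receiver expects next, while with 5 sequence numbers it does not.

```python
def simulate_lost_ack(window, num_seq):
    """Model the lost-cumulative-ACK scenario in GB-N: the sender
    retransmits its old window; return True if the receiver would
    mistake the first duplicate for new data."""
    old_window = [i % num_seq for i in range(window)]   # packets sent first
    expected_next = window % num_seq                    # what receiver now awaits
    # The cumulative ACK is lost, so the sender resends old_window.
    return old_window[0] == expected_next               # duplicate looks new?

print(simulate_lost_ack(window=4, num_seq=4))  # True  -> duplicates accepted
print(simulate_lost_ack(window=4, num_seq=5))  # False -> duplicates rejected
```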

Selective Repeat
Why Selective Repeat Protocol?

The Go-Back-N protocol works well if errors are rare, but if the line is poor
it wastes a lot of bandwidth on retransmitted frames. An alternative
strategy, the selective repeat protocol, is to allow the receiver to accept
and buffer the frames following a damaged or lost one.
Selective Repeat attempts to retransmit only those packets that are actually lost
(due to errors) :
 Receiver must be able to accept packets out of order.
 Since receiver must release packets to higher layer in order, the receiver must
be able to buffer some packets.

Retransmission requests :
 Implicit – The receiver acknowledges every good packet; packets that are
not ACKed before a time-out are assumed lost or in error. Notice that this
approach must be used to be sure that every packet is eventually received.
 Explicit – An explicit NAK (selective reject) can request retransmission of just
one packet. This approach can expedite the retransmission but is not strictly
needed.
 One or both approaches are used in practice.

Selective Repeat Protocol (SRP) :

This protocol (SRP) is largely identical to the GBN protocol, except that
buffers are used and the receiver and the sender each maintain a window of
the same size. SRP works better when the link is very unreliable: because
retransmissions tend to happen more frequently in that case, selectively
retransmitting frames is more efficient than retransmitting all of them. SRP
also requires a full-duplex link, since acknowledgements travel in the
backward direction while data transmission is in progress.
 Sender’s window size (WS) = Receiver’s window size (WR).
 Window size should be less than or equal to half the sequence number in SR
protocol. This is to avoid packets being recognized incorrectly. If the size of the
window is greater than half the sequence number space, then if an ACK is lost,
the sender may send new packets that the receiver believes are retransmissions.
 Sender can transmit new packets as long as their sequence numbers are
within WS of the oldest unACKed packet.
 Sender retransmits un-ACKed packets after a timeout, or upon a NAK if NAKs
are employed.
 Receiver ACKs all correct packets.
 Receiver stores correct packets until they can be delivered in order to the higher
layer.
 In Selective Repeat ARQ, the size of the sender and receiver window must be at
most one-half of 2^m.
Figure – the sender only retransmits frames for which a NAK is received.
The efficiency of the Selective Repeat Protocol (SRP) is the same as
Go-Back-N’s efficiency:
Efficiency = N/(1+2a)
Where a = Propagation delay / Transmission delay
Buffers = N + N
Sequence numbers = N (sender side) + N (receiver side)
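The half-the-sequence-space rule can be illustrated with a small check (the function names here are ours): if the receiver's window slides forward after a lost cumulative ACK, the old and new windows must not share any sequence numbers, or retransmissions become ambiguous.

```python
def max_sr_window(m):
    """With m sequence-number bits (2**m numbers), the Selective Repeat
    window may be at most half the sequence space."""
    return 2 ** m // 2

def sr_windows_overlap(window, num_seq):
    """True if the receiver's window before and after sliding by one
    full window share sequence numbers -- the ambiguity SR must avoid."""
    old = set(range(window))                               # window before sliding
    new = {(window + i) % num_seq for i in range(window)}  # window after sliding
    return bool(old & new)

print(max_sr_window(3))          # 4
print(sr_windows_overlap(4, 8))  # False: W = 2^m / 2 is safe
print(sr_windows_overlap(5, 8))  # True:  W > 2^m / 2 is ambiguous
```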

Piggybacking
Networking Communication :

Sliding window algorithms are methods of flow control for network data transfer.
At the data link layer they allow the sender to have more than one unacknowledged
packet in flight at a time, which improves network throughput. Both the sender and
receiver maintain a finite-size buffer to hold outgoing and incoming packets from
the other side. Every packet sent by the sender must be acknowledged by the
receiver. The sender maintains a timer for every packet sent, and any packet
unacknowledged within a certain time is resent. The sender may send a whole window
of packets before receiving an acknowledgment for the first packet in the window.
This results in higher transfer rates, as the sender may send multiple packets
without waiting for each packet’s acknowledgment. The receiver advertises a window
size that tells the sender not to overflow the receiver’s buffers.

Efficiency can also be improved by making use of full-duplex transmission.
Full-duplex transmission is two-way communication in both directions
simultaneously. It provides better performance than the simplex and half-duplex
transmission modes.
Full-duplex transmission

Solution 1 –
One way to achieve full-duplex transmission is to have two separate channels: one
for forward data transmission and the other for reverse traffic (acknowledgements).
But this would almost completely waste the bandwidth of the reverse channel.

Solution 2 (Piggybacking) –
A preferable solution is to use each channel to transmit frames both ways, with
both channels having the same capacity. Assume that A and B are users. Then the
data frames from A to B are intermixed with the acknowledgements from B to A, and
a received frame can be identified as a data frame or an acknowledgement by
checking the kind field in its header.
One more improvement can be made. When a data frame arrives, the receiver does not
send the control frame (acknowledgment) back immediately. Instead, the receiver
waits until its network layer hands it the next data packet. The acknowledgment is
then attached to this outgoing data frame. Thus the acknowledgment travels along
with the next data frame.

Definition of Piggybacking :
This technique, in which the outgoing acknowledgment is delayed temporarily so it
can ride on the next outgoing data frame, is called piggybacking.

As we can see in the figure, with piggybacking a single message (ACK + DATA)
travels over the wire in place of two separate messages. Piggybacking improves the
efficiency of bidirectional protocols.
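The idea can be sketched as a frame header that carries both a data sequence number and an optional acknowledgement field (the `Frame` type and field names below are illustrative, not a real protocol format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    """A frame whose header carries a data sequence number plus a
    piggybacked acknowledgement (None when there is nothing to ACK)."""
    seq: int
    ack: Optional[int]
    payload: bytes

def make_frame(seq, payload, pending_ack=None):
    """Attach any pending ACK to the outgoing data frame instead of
    sending a separate acknowledgement frame."""
    return Frame(seq=seq, ack=pending_ack, payload=payload)

# B has just received A's frame 0 and now has data of its own to send:
f = make_frame(seq=0, payload=b"reply", pending_ack=0)  # ACK rides along
print(f)
```

A separate ACK-only frame would carry the same one-field acknowledgement with an empty payload; piggybacking avoids sending that frame at all.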
Advantages of piggybacking :
1. The major advantage of piggybacking is better use of the available channel
bandwidth. This happens because an acknowledgment frame need not be sent
separately.
2. Reduced usage cost.
3. Improved latency of data transfer.

Disadvantages of piggybacking :
1. The disadvantage of piggybacking is the additional complexity.
2. If the data link layer waits too long before transmitting the acknowledgment
(holding the ACK back for some time), the sender’s timer will expire and the
frame will be retransmitted.

Multiple Access Protocols

The Data Link Layer is responsible for transmission of data between two nodes.
Its main functions are-
 Data Link Control
 Multiple Access Control

Data Link control –


The data link control is responsible for reliable transmission of messages
over the transmission channel by using techniques like framing, error control
and flow control. For data link control, refer to Stop and Wait ARQ.

Multiple Access Control –


If there is a dedicated link between the sender and the receiver, then the
data link control layer is sufficient; however, if there is no dedicated link
present, then multiple stations can access the channel simultaneously. Hence
multiple access protocols are required to decrease collisions and avoid
crosstalk. For example, in a classroom full of students, when a teacher asks
a question and all the students (or stations) start answering simultaneously
(send data at the same time), a lot of chaos is created (data overlaps or is
lost); it is then the job of the teacher (the multiple access protocol) to
manage the students and make them answer one at a time.
Thus, protocols are required for sharing data on non-dedicated channels. Multiple
access protocols can be subdivided further as –

1. Random Access Protocol: In this, all stations have the same priority, that
is, no station has more priority than another. Any station can send data
depending on the medium’s state (idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
(a) ALOHA – It was designed for wireless LANs but is also applicable to any
shared medium. In it, multiple stations can transmit data at the same time,
which can lead to collisions and garbled data.
 Pure Aloha:
When a station sends data, it waits for an acknowledgement. If the
acknowledgement doesn’t arrive within the allotted time, the station waits
for a random amount of time called the back-off time (Tb) and re-sends the
data. Since different stations wait for different amounts of time, the
probability of further collisions decreases.
Vulnerable Time = 2* Frame transmission time
Throughput = G exp{-2*G}
Maximum throughput = 0.184 for G=0.5
 Slotted Aloha:
It is similar to Pure Aloha, except that we divide time into slots and the
sending of data is allowed only at the beginning of these slots. If a station
misses the allowed time, it must wait for the next slot. This reduces the
probability of collision.
Vulnerable Time = Frame transmission time
Throughput = G exp{-G}
Maximum throughput = 0.368 for G=1
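The two throughput formulas can be checked numerically. A small sketch (function names are ours):

```python
import math

def pure_aloha_throughput(g):
    """S = G * e^(-2G); maximum 1/(2e) ~ 0.184 at G = 0.5."""
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g):
    """S = G * e^(-G); maximum 1/e ~ 0.368 at G = 1."""
    return g * math.exp(-g)

print(f"{pure_aloha_throughput(0.5):.3f}")     # 0.184
print(f"{slotted_aloha_throughput(1.0):.3f}")  # 0.368
```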

Comparison of Pure Aloha and Slotted Aloha:

1. In Pure Aloha, any station can transmit data at any time. In Slotted
Aloha, any station can transmit data only at the beginning of a time slot.
2. In Pure Aloha, time is continuous and not globally synchronized. In
Slotted Aloha, time is discrete and globally synchronized.
3. Vulnerable time for Pure Aloha = 2 x Tt. Vulnerable time for Slotted
Aloha = Tt.
4. Probability of successful transmission of a data packet: in Pure Aloha
= G x e^(-2G); in Slotted Aloha = G x e^(-G).
5. Maximum efficiency: Pure Aloha = 18.4%; Slotted Aloha = 36.8%.
6. Pure Aloha does not reduce the number of collisions; Slotted Aloha
reduces the number of collisions to half and doubles the efficiency of
Pure Aloha.

CSMA/CD
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network
protocol for carrier transmission that operates in the Medium Access Control
(MAC) layer. It senses or listens whether the shared channel for transmission is
busy or not, and defers transmissions until the channel is free. The collision
detection technology detects collisions by sensing transmissions from other
stations. On detection of a collision, the station stops transmitting, sends a jam
signal, and then waits for a random time interval before retransmission.
Algorithms
The algorithm of CSMA/CD is:
 When a frame is ready, the transmitting station checks whether the channel
is idle or busy.
 If the channel is busy, the station waits until the channel becomes idle.
 If the channel is idle, the station starts transmitting and continually monitors
the channel to detect collision.
 If a collision is detected, the station starts the collision resolution algorithm.
 If no collision is detected, the station resets the retransmission
counters and completes frame transmission.
The algorithm of Collision Resolution is:
 The station continues transmission of the current frame for a specified time
along with a jam signal, to ensure that all the other stations detect collision.
 The station increments the retransmission counter.
 If the maximum number of retransmission attempts is reached, then the
station aborts transmission.
 Otherwise, the station waits for a backoff period, which is generally a
function of the number of collisions, and restarts the main algorithm.
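The backoff period "as a function of the number of collisions" is commonly implemented as truncated binary exponential backoff; a minimal sketch (the cap of 10 is an assumption, chosen to match classic Ethernet's value):

```python
import random

def backoff_slots(collisions, max_exponent=10):
    """Truncated binary exponential backoff: after the c-th collision,
    wait a random number of slot times in 0 .. 2**min(c, max_exponent) - 1."""
    k = min(collisions, max_exponent)
    return random.randint(0, 2 ** k - 1)

random.seed(0)  # fixed seed just to make the illustration reproducible
print([backoff_slots(c) for c in range(1, 6)])  # waits grow (on average)
```

Doubling the range after each collision spreads contending stations apart, making a repeat collision progressively less likely.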
The following flowchart summarizes the algorithms:
 Though this algorithm detects collisions, it does not reduce their number.
 It is not appropriate for large networks; performance degrades
exponentially as more stations are added.

CSMA/CA

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a network


protocol for carrier transmission that operates in the Medium Access Control
(MAC) layer. In contrast to CSMA/CD (Carrier Sense Multiple Access/Collision
Detection) that deals with collisions after their occurrence, CSMA/CA prevents
collisions prior to their occurrence.
The algorithm of CSMA/CA is:
 When a frame is ready, the transmitting station checks whether the channel
is idle or busy.
 If the channel is busy, the station waits until the channel becomes idle.
 If the channel is idle, the station waits for an Inter-frame gap (IFG) amount
of time and then sends the frame.
 After sending the frame, it sets a timer.
 The station then waits for acknowledgement from the receiver. If it receives
the acknowledgement before expiry of timer, it marks a successful
transmission.
 Otherwise, it waits for a back-off time period and restarts the algorithm.
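The steps above can be sketched as a small driver function. Every argument is a caller-supplied hook (hypothetical names, for illustration only): sense the channel, wait the inter-frame gap, transmit, await the ACK within a timer, and back off before retrying.

```python
def csma_ca_send(channel_idle, wait_ifg, send_frame, wait_ack, backoff,
                 max_attempts=7):
    """Sketch of the CSMA/CA algorithm; max_attempts is our choice."""
    for attempt in range(max_attempts):
        while not channel_idle():   # steps 1-2: sense, wait while busy
            pass
        wait_ifg()                  # step 3: inter-frame gap
        send_frame()                # step 3: transmit the frame
        if wait_ack():              # steps 4-5: ACK before timer expiry?
            return True             # successful transmission
        backoff(attempt)            # step 6: back off, then retry
    return False                    # give up after max_attempts

# Stub run: channel always idle, ACK arrives only on the second attempt
acks = iter([False, True])
ok = csma_ca_send(lambda: True, lambda: None, lambda: None,
                  lambda: next(acks), lambda a: None)
print(ok)  # True
```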
The following flowchart summarizes the algorithms:
Advantages of CSMA/CA
 CSMA/CA prevents collisions.
 Due to acknowledgements, data is not lost unnecessarily.
 It avoids wasteful transmission.
 It is very much suited for wireless transmissions.
Disadvantages of CSMA/CA
 The algorithm calls for long waiting times.
 It has high power consumption.
