Unit 2 Net


1.

Data or information can be stored in two ways: analog and digital. For a computer to use
the data, it must be in discrete digital form. Similar to data, signals can also be in analog or
digital form. To transmit data digitally, it first needs to be converted to digital form.
Digital-to-Digital Conversion
This section explains how to convert digital data into digital signals. It can be done in two
ways, line coding and block coding. For all communications, line coding is necessary
whereas block coding is optional.
Line Coding
The process of converting digital data into a digital signal is called line coding. Digital
data is found in binary format; it is represented (stored) internally as a series of 1s and 0s.
A digital signal is a discrete signal that represents digital data. There are three
types of line coding schemes.
Block Coding
To ensure the accuracy of the received data frame, redundant bits are used. For example, in
even parity, one parity bit is added to make the count of 1s in the frame even. This way the
original number of bits is increased. This is called block coding.
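As a small illustration of the even-parity idea, the Python sketch below (a toy example, not tied to any particular standard) appends a parity bit so that the total number of 1s becomes even:

def add_even_parity(bits):                     # bits is a string such as "1011001"
    parity = "0" if bits.count("1") % 2 == 0 else "1"
    return bits + parity                       # total count of 1s is now even

def check_even_parity(block):
    return block.count("1") % 2 == 0           # True: no odd number of bit errors detected

print(add_even_parity("1011001"))              # -> "10110010" (four 1s, already even)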
Block coding is represented by the slash notation mB/nB, meaning that an m-bit block is substituted with
an n-bit block, where n > m. Block coding involves three steps:

• Division,

• Substitution

• Combination.
After block coding is done, it is line coded for transmission.
Analog-to-Digital Conversion
Microphones create analog voice and cameras create analog video, which are treated as
analog data. To transmit this analog data over digital signals, we need analog-to-digital
conversion.
Analog data is a continuous stream of data in the wave form whereas digital data is discrete.
To convert analog wave into digital data, we use Pulse Code Modulation (PCM).
Transmission Modes
The transmission mode decides how data is transmitted between two computers. The
binary data in the form of 1s and 0s can be sent in two different modes: Parallel and Serial.
Parallel Transmission
The binary bits are organized into groups of fixed length. Both sender and receiver are
connected in parallel with an equal number of data lines. Both computers distinguish
between high-order and low-order data lines. The sender sends all the bits at once on all
lines. Because the number of data lines equals the number of bits in a group (data frame), a
complete group of bits is sent in one go. The advantage of parallel transmission is
high speed; the disadvantage is the cost of wires, which equals the number of bits sent in
parallel.
Serial Transmission
In serial transmission, bits are sent one after another in a queue manner. Serial
transmission requires only one communication channel.
Asynchronous Serial Transmission
It is named so because timing is not important here. The data bits have a specific pattern that
helps the receiver recognize the start and end of the data bits. For example, a 0 is prefixed to
every data byte (start bit) and one or more 1s are added at the end (stop bits).
Two continuous data-frames (bytes) may have a gap between them.
Synchronous Serial Transmission
Timing is important in synchronous transmission, as there is no mechanism to
recognize the start and end of data bits: there is no pattern or prefix/suffix method. Data bits are
sent in burst mode without maintaining gaps between bytes (8 bits). A single burst of data bits
may contain a number of bytes. Therefore, timing becomes very important.
It is up to the receiver to recognize and separate bits into bytes. The advantage of
synchronous transmission is high speed, and it has no overhead of extra header and footer
bits as in asynchronous transmission.
The Cyclic Redundancy Check (CRC) is the most powerful method for error detection.
Given a k-bit message, the transmitter creates an (n − k)-bit sequence called the frame check
sequence (FCS), so that the resulting n-bit frame is exactly divisible by some predetermined
number. Modulo-2 arithmetic is used: binary addition with no carries, which is just
the XOR operation.
Redundancy means duplication. The redundant bits used by CRC are derived by dividing
the data unit by a predetermined divisor; the remainder is the CRC.
Qualities of CRC
It should have exactly one bit less than the divisor.
Appending it to the end of the data unit should make the resulting bit sequence exactly
divisible by the divisor.
CRC generator and checker
Process
A string of n 0s is appended to the data unit, where n is one less than the number of
bits in the predetermined divisor.
The new data unit is divided by the divisor using binary (modulo-2) division; the
remainder resulting from the division is the CRC.
The CRC of n bits derived in step 2 replaces the appended 0s at the end of the data unit.
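The sketch below is a minimal Python illustration of this generator/checker process using modulo-2 (XOR) division; the data word and divisor are illustrative values, not taken from any particular standard:

def crc_remainder(data_bits, divisor_bits):
    n = len(divisor_bits) - 1                   # number of appended 0s / CRC bits
    dividend = list(data_bits + "0" * n)
    for i in range(len(data_bits)):
        if dividend[i] == "1":                  # divide only when the leading bit is 1
            for j, d in enumerate(divisor_bits):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(d))
    return "".join(dividend[-n:])               # the remainder is the CRC

data, divisor = "100100", "1101"
crc = crc_remainder(data, divisor)              # -> "001"
print(data + crc)                               # transmitted frame; dividing it by 1101 leaves remainder 0

At the receiver, crc_remainder(data + crc, divisor) returns an all-zero remainder when no error has occurred.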
Errors and Error Detection
When bits are transmitted over a computer network, they are subject to corruption
due to interference and network problems. The corrupted bits lead to spurious data being
received by the receiver; these are called errors.
Error detection techniques are responsible for checking whether any error has occurred or
not in the frame that has been transmitted via network. It does not take into account the
number of error bits and the type of error.
For error detection, the sender needs to send some additional bits along with the data bits.
The receiver performs necessary checks based upon the additional redundant bits. If it finds
that the data is free from errors, it removes the redundant bits before passing the message
to the upper layers.
There are three main techniques for detecting errors in frames: Parity Check, Checksum and
Cyclic Redundancy Check (CRC).
Checksums
This is a block code method where a checksum is created based on the data values in the
data blocks to be transmitted using some algorithm and appended to the data. When the
receiver gets this data, a new checksum is calculated and compared with the existing
checksum. A non-match indicates an error.
Error Detection by Checksums
For error detection by checksums, data is divided into fixed sized frames or segments.
Sender’s End − The sender adds the segments using 1’s complement arithmetic to get the
sum. It then complements the sum to get the checksum and sends it along with the data
frames.
Receiver’s End − The receiver adds the incoming segments along with the checksum using
1’s complement arithmetic to get the sum and then complements it.
If the result is zero, the received frames are accepted; otherwise they are discarded.
Example
Suppose that the sender wants to send 4 frames each of 8 bits, where the frames are
11001100, 10101010, 11110000 and 11000011.
The sender adds the bits using 1s complement arithmetic. While adding two numbers using
1s complement arithmetic, if there is a carry over, it is added to the sum.
After adding all the 4 frames, the sender complements the sum to get the checksum,
11010011, and sends it along with the data frames.
The receiver performs 1s complement arithmetic sum of all the frames including the
checksum. The result is complemented and found to be 0. Hence, the receiver assumes that
no error has occurred.
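A short Python sketch of this sender/receiver computation, reproducing the four frames above (the 8-bit width is taken from the example):

def ones_complement_sum(frames, width=8):
    total = 0
    for f in frames:
        total += int(f, 2)
        if total >= (1 << width):                 # end-around carry: wrap the carry into the sum
            total = (total & ((1 << width) - 1)) + 1
    return total

frames = ["11001100", "10101010", "11110000", "11000011"]
checksum = (~ones_complement_sum(frames)) & 0xFF  # complement of the 1's complement sum
print(format(checksum, "08b"))                    # -> 11010011, as in the example

received = frames + [format(checksum, "08b")]
print((~ones_complement_sum(received)) & 0xFF)    # -> 0, so the receiver accepts the frames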

2. Framing in Data Link Layer


Frames are the units of digital transmission, particularly in computer networks and
telecommunications. Frames are comparable to the packets of energy called photons in the
case of light energy. Frames are also used continuously in the Time Division Multiplexing process.
A point-to-point connection between two computers or devices consists of a wire
in which data is transmitted as a stream of bits; however, these bits must be framed into
discernible blocks of information. Framing is a function of the data link layer. It provides a
way for a sender to transmit a set of bits that are meaningful to the receiver. Ethernet, token
ring, frame relay, and other data link layer technologies have their own frame structures.
Frames have headers that contain information such as error-checking codes.
The data link layer takes the message from the sender and delivers it to the receiver,
adding the sender's and receiver's addresses. The advantage of using frames is that
data is broken up into recoverable chunks that can easily be checked for corruption.
Problems in Framing
Detecting start of the frame: When a frame is transmitted, every station must be able to
detect it. Station detects frames by looking out for a special sequence of bits that marks the
beginning of the frame i.e. SFD (Starting Frame Delimiter).
How does the station detect a frame: Every station listens to the link for the SFD pattern through a
sequential circuit. If the SFD is detected, the sequential circuit alerts the station. The station then checks the
destination address to accept or reject the frame.
Detecting end of frame: When to stop reading the frame.
Types of framing – There are two types of framing:
1. Fixed size: The frame is of fixed size and there is no need to provide boundaries to the
frame, the length of the frame itself acts as a delimiter.
Drawback: It suffers from internal fragmentation if the data size is less than the frame size.
Solution: Padding.
2. Variable size: In this, there is a need to define the end of the frame as well as the
beginning of the next frame to distinguish. This can be done in two ways:
Length field – We can introduce a length field in the frame to indicate the length of the
frame. Used in Ethernet(802.3). The problem with this is that sometimes the length field
might get corrupted.
End Delimiter (ED) – We can introduce an ED(pattern) to indicate the end of the frame. Used
in Token Ring. The problem with this is that ED can occur in the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If the data contains the ED,
then a byte is stuffed into the data to differentiate it from the ED.
Disadvantage – It is a costly and obsolete method.
2. Bit Stuffing: Let ED = 01111 and data = 01111

• The sender stuffs a bit to break the pattern, i.e. here it inserts a 0 into the data: 011101.
• The receiver receives the frame.
• If the data contains 011101, the receiver removes the stuffed 0 and reads the data.
Examples
If Data –> 011100011110 and ED –> 0111, then the data after bit stuffing is –>
011010001101100
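A hedged Python sketch of the stuffing rule used in these examples (whenever the bits that begin the ending delimiter appear, a bit that breaks the pattern is stuffed); the function names are illustrative:

def bit_stuff(data, ed):
    prefix = ed[:-1]                              # the start of the delimiter pattern
    stuffed_bit = "0" if ed[-1] == "1" else "1"   # a bit that prevents ED from forming
    out = ""
    for b in data:
        out += b
        if out.endswith(prefix):                  # delimiter about to appear in the data
            out += stuffed_bit
    return out

print(bit_stuff("01111", "01111"))                # -> 011101 (first example above)
print(bit_stuff("011100011110", "0111"))          # -> 011010001101100 (second example above)

The receiver reverses the process: after every occurrence of the prefix, it removes the following stuffed bit.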
Error Control in Data Link Layer
The data link layer uses error control techniques to ensure and confirm that all the
data frames or packets, i.e. bit streams of data, are transmitted from sender
to receiver with a certain accuracy. Providing error control at the data link layer is an
optimization; it is not a requirement. Error control is basically the process, in the data link layer, of
detecting or identifying and re-transmitting data frames that might be lost or corrupted
during transmission.
In both of these cases, the receiver or destination does not receive the correct data frame and the
sender or source does not even know anything about such a loss. Therefore, in such cases,
both sender and receiver are provided with the essential protocols required to detect or
identify such errors, like the loss of data frames.
The data link layer follows a technique known as re-transmission of frames to detect or
identify transit errors and to take the actions required to reduce or remove such errors.
Every time an error is detected during transmission, the particular data frame is
retransmitted; this process is known as ARQ (Automatic Repeat Request).
Error Detection: Error detection, as the name suggests, simply means the detection or identification
of errors. These errors may be caused by noise or other impairments during
transmission from transmitter to receiver in a communication system. It is a class of
techniques for detecting garbled, i.e. unclear and distorted, data or messages.
Error Correction: Error correction, as the name suggests, simply means the correction or
fixing of errors. It means the reconstruction of the original, error-free data. However, error
correction is very costly and very hard.
Noiseless Channel Protocol
Introduction: A protocol is a set of rules used by two devices to communicate. These rules
are usually conveyed by headers (fixed headers determined by the protocol). These
headers specify the content of the message and the way the message is processed. To
detect errors, the header must include the address of the destination, the address of the
source, and the checksum of the message.
Categorization of protocols: Protocols are split into those that can be applied
to noiseless (error-free) channels and those that can be used for noisy (error-causing)
channels. The first category cannot be used in practice, but it serves as a
basis for the protocols for noisy channels.
Noiseless Channel: An idealistic channel in which no frames are lost, corrupted or
duplicated. The protocol does not implement error control in this category. There are two
protocols for the noiseless channel as follows.
Simplest Protocol
We assume here that the receiver can handle any frame it receives with negligible
processing time. The receiver's data link layer immediately removes the header from the
frame and hands the data packet to its network layer, which can also accept the packet
immediately. That is to say, the receiver can never be overwhelmed by incoming
frames.
Design
The data link layer at the sender site gets data from its network layer, makes a frame out of
the data, and sends it. The data link layer at the receiver site receives a frame from its physical
layer, extracts the data from the frame, and delivers the data to its network layer. The data link
layers of the sender and receiver provide transmission services for their
network layers and use the services provided by their physical layers
for the physical transmission of bits.
Sender-site and receiver-site algorithms
Sender-site algorithm
while(true)                      //Repeat forever
{
  WaitForEvent();                //Sleep until an event occurs
  if(Event(RequestToSend))       //There is a packet to send
  {
    GetData();
    MakeFrame();
    SendFrame();                 //Send the frame
  }
}

Receiver-site algorithm
while(true)                      //Repeat forever
{
  WaitForEvent();                //Sleep until an event occurs
  if(Event(ArrivalNotification)) //Data frame arrived
  {
    ReceiveFrame();
    ExtractData();
    DeliverData();               //Deliver data to network layer
  }
}
Flow Diagram
This flow diagram shows an example of communication using the simplest protocol. It is
very straightforward. The sender sends a sequence of frames without any further consideration
of the receiver. For example, three frames are sent by the sender and three
frames are received by the receiver. Bear in mind that the data frames are shown as tilted boxes;
the height of a box represents the transmission time, i.e. the time difference between the first
bit and the last bit of the frame.
Stop and Wait Protocol
If data frames arrive at the receiver site faster than they can be processed, the frames
must be stored until they are used. Generally, the receiver does not have enough storage space,
especially if it is receiving data from multiple sources.
Design
Comparing the Stop-and-Wait protocol design with the Simplest protocol design,
we can see traffic on the forward channel (from sender to receiver) and on the
reverse channel. At any time, there is either one data frame on the forward channel or
one ACK frame on the reverse channel. We therefore need a half-duplex link.
Sender-site and receiver-site algorithms
Sender-site algorithm
while(true)                              //Repeat forever
canSend = true                           //Allow the first frame to go
{
  WaitForEvent();                        //Sleep until an event occurs
  if(Event(RequestToSend) AND canSend)   //There is a packet to send
  {
    GetData();
    MakeFrame();
    SendFrame();                         //Send the data frame
    canSend = false;                     //Cannot send until ACK arrives
  }
  WaitForEvent();                        //Sleep until an event occurs
  if(Event(ArrivalNotification))         //An ACK has arrived
  {
    ReceiveFrame();                      //Receive the ACK frame
    canSend = true;
  }
}

Receiver-site algorithm
while(true)                              //Repeat forever
{
  WaitForEvent();                        //Sleep until an event occurs
  if(Event(ArrivalNotification))         //Data frame arrives
  {
    ReceiveFrame();
    ExtractData();
    DeliverData();                       //Deliver data to network layer
    SendFrame();                         //Send an ACK frame
  }
}

Flow Diagram
This figure shows an example of communication using the Stop-and-Wait protocol. It is still
straightforward. The sender sends a frame and waits for a response from the receiver.
When the ACK (acknowledgement) arrives from the receiver, the sender sends the next frame, and
so on. Note that for two frames the sender is involved in four events and
the receiver in two events.

3. Noisy Channels
Stop and Wait ARQ
This protocol is used in connection-oriented communication. It offers error control and flow control
and is used in the Data Link and Transport layers. Stop and Wait ARQ essentially implements the
sliding window protocol concept with a window size of 1.
Useful Terms:
Propagation Delay: Amount of time taken by a packet to make a physical journey from one
router to another router.
Propagation Delay = (Distance between routers) / (Velocity of propagation)
RoundTripTime (RTT) = 2* Propagation Delay
TimeOut (TO) = 2* RTT
Time To Live (TTL) = 2* TimeOut. (Maximum TTL is 180 seconds)
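To make these formulas concrete, here is a small Python calculation with made-up numbers (a 2,000 km link and a propagation velocity of 2 × 10^8 m/s are assumptions for illustration only):

distance = 2_000_000                       # metres (assumed)
velocity = 2e8                             # metres per second in the medium (assumed)

propagation_delay = distance / velocity    # 0.01 s = 10 ms
rtt = 2 * propagation_delay                # 0.02 s = 20 ms
timeout = 2 * rtt                          # 0.04 s = 40 ms, per the rule of thumb above
print(propagation_delay, rtt, timeout)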
Simple Stop and Wait
Sender:
Rule 1) Send one data packet at a time.
Rule 2) Send the next packet only after receiving acknowledgement for the previous.
Receiver:
Rule 1) Send acknowledgement after receiving and consuming a data packet.
Rule 2) The acknowledgement is sent only after the packet has been consumed (flow control).

Problems:
1. Lost Data
2. Lost Acknowledgement:
3. Delayed Acknowledgement/Data: After a timeout on the sender side, a long-delayed
acknowledgement might be wrongly considered as acknowledgement of some other recent
packet.
Stop and Wait ARQ (Automatic Repeat Request)
The above three problems are resolved by Stop and Wait ARQ (Automatic Repeat Request),
which does both error control and flow control.
1. Time Out: lost data is handled by retransmitting the frame after a timeout.
2. Sequence Number (Data): a lost acknowledgement is handled by numbering the data frames,
so a retransmitted duplicate can be recognized.
3. Delayed Acknowledgement: this is resolved by introducing sequence numbers for
acknowledgements as well.
Working of Stop and Wait ARQ:
1) Sender A sends a data frame or packet with sequence number 0.
2) Receiver B, after receiving the data frame, sends an acknowledgement with sequence
number 1 (the sequence number of the next expected data frame or packet). There is only a
one-bit sequence number that implies that both sender and receiver have a buffer for one
frame or packet only.

Characteristics of Stop and Wait ARQ:


It uses the link between sender and receiver as a half-duplex link.
Throughput = 1 data packet/frame per RTT.
If the bandwidth-delay product is very high, Stop and Wait is not very useful: the sender
has to keep waiting for an acknowledgement before sending the next packet.
It is an example of a "closed loop" or connection-oriented protocol.
It is a special case of the sliding window protocol (SWP) where the window size is 1.
Irrespective of the number of packets the sender has, Stop and Wait ARQ requires
only 2 sequence numbers, 0 and 1.
The Stop and Wait ARQ solves the main three problems but may cause big performance
issues, as the sender always waits for an acknowledgement even if it has the next packet ready
to send. Consider a situation where you have a high-bandwidth connection and the propagation
delay is also high (you are connected to some server in another country through a high-
speed connection). To solve this problem, we can send more than one packet at a time with
a larger sequence number space. These protocols are discussed next.
So Stop and Wait ARQ may work fine where the propagation delay is very small, for example on LAN
connections, but performs badly for distant connections such as satellite connections.
Go-Back-N ARQ
Before understanding the working of Go-Back-N ARQ, we first look at the sliding window
protocol. As we know that the sliding window protocol is different from the stop-and-wait
protocol. In the stop-and-wait protocol, the sender can send only one frame at a time and
cannot send the next frame without receiving the acknowledgment of the previously sent
frame, whereas, in the case of sliding window protocol, the multiple frames can be sent at a
time. The variations of sliding window protocol are Go-Back-N ARQ and Selective Repeat
ARQ. Let's understand 'what is Go-Back-N ARQ'.
What is Go-Back-N ARQ?
In Go-Back-N ARQ, N is the sender's window size. For example, Go-Back-3 means
that three frames can be sent at a time before expecting the acknowledgment
from the receiver.
It uses the principle of protocol pipelining in which the multiple frames can be sent before
receiving the acknowledgment of the first frame. If we have five frames and the concept is
Go-Back-3, which means that the three frames can be sent, i.e., frame no 1, frame no 2,
frame no 3 can be sent before expecting the acknowledgment of frame no 1.
In Go-Back-N ARQ, the frames are numbered sequentially, since Go-Back-N ARQ sends
multiple frames at a time and a numbering approach is required to distinguish one frame
from another; these numbers are known as sequence numbers.
The number of frames that can be sent at a time totally depends on the size of the sender's
window. So, we can say that 'N' is the number of frames that can be sent at a time before
receiving the acknowledgment from the receiver.
If the acknowledgment of a frame is not received within an agreed-upon time period, then
all the frames available in the current window will be retransmitted. Suppose we have sent
the frame no 5, but we didn't receive the acknowledgment of frame no 5, and the current
window is holding three frames, then these three frames will be retransmitted.
The sequence numbers of the outbound frames depend upon the size of the sender's
window. Suppose the sender's window size is 2 and we have ten frames to send; the
sequence numbers will not simply be 1,2,3,4,5,6,7,8,9,10. Let's understand this through an example.
N is the sender's window size. If the size of the sender's window is 4, then the sequence
numbers will be 0,1,2,3,0,1,2,3,0,1,2, and so on. The number of bits in the sequence number
is 2, generating the binary sequence 00,01,10,11.
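A one-line Python check of this wrap-around numbering (m = 2 bits is taken from the text above):

m = 2                                          # bits in the sequence number
print([i % (2 ** m) for i in range(11)])       # -> [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2]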
Working of Go-Back-N ARQ
Suppose there are a sender and a receiver, and let's assume that there are 11 frames to be
sent. These frames are represented as 0,1,2,3,4,5,6,7,8,9,10, and these are the sequence
numbers of the frames. Mainly, the sequence number is decided by the sender's window
size. But, for the better understanding, we took the running sequence numbers, i.e.,
0,1,2,3,4,5,6,7,8,9,10. Let's consider the window size as 4, which means that the four frames
can be sent at a time before expecting the acknowledgment of the first frame.
Step 1: Firstly, the sender will send the first four frames to the receiver, i.e., 0,1,2,3, and now
the sender is expected to receive the acknowledgment of the 0th frame.
Let's assume that the receiver has sent the acknowledgment for frame 0, and the
sender has successfully received it.
The sender will then send the next frame, i.e., 4, and the window slides containing four
frames (1,2,3,4).
The receiver will then send the acknowledgment for the frame no 1. After receiving the
acknowledgment, the sender will send the next frame, i.e., frame no 5, and the window will
slide having four frames (2,3,4,5).
Now, let's assume that the receiver is not acknowledging frame no 2: either the frame is
lost or the acknowledgment is lost. Instead of sending frame no 6, the sender goes back
to 2, the first frame of the current window, and retransmits all the frames in the current
window, i.e., 2,3,4,5.
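The following Python sketch mimics this behaviour for the scenario above; it is a simplified model (a single lost acknowledgement, chosen as an assumption for illustration), not a full protocol implementation:

def go_back_n(num_frames, window, ack_lost):
    # ack_lost: sequence numbers whose acknowledgement is lost once (illustrative)
    base, next_seq, events = 0, 0, []
    while base < num_frames:
        while next_seq < base + window and next_seq < num_frames:
            events.append("send frame %d" % next_seq)     # send everything the window allows
            next_seq += 1
        if base in ack_lost:
            ack_lost.discard(base)
            events.append("timeout -> go back to %d" % base)
            next_seq = base                               # the whole current window is resent
        else:
            base += 1                                     # ACK received, window slides by one
    return events

for e in go_back_n(11, window=4, ack_lost={2}):
    print(e)                        # shows frames 2,3,4,5 being sent again after the timeout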
Important points related to Go-Back-N ARQ:

• In Go-Back-N, N determines the sender's window size, and the size of the
receiver's window is always 1.

• It does not consider the corrupted frames and simply discards them.

• It does not accept the frames which are out of order and discards them.

• If the sender does not receive the acknowledgment, it leads to the retransmission of all
the current window frames.

Selective Repeat ARQ


Why Selective Repeat Protocol? The Go-Back-N protocol works well if errors are rare, but if
the line is poor it wastes a lot of bandwidth on retransmitted frames. An alternative
strategy, the Selective Repeat protocol, is to allow the receiver to accept and buffer the
frames following a damaged or lost one. Selective Repeat attempts to retransmit only those
packets that are actually lost (due to errors):
The receiver must be able to accept packets out of order. Since the receiver must release packets to
the higher layer in order, it must be able to buffer some packets.
Retransmission requests:
Implicit: The receiver acknowledges every good packet; packets that are not ACKed before a
time-out are assumed lost or in error. Note that this approach must be used to be sure
that every packet is eventually received.
Explicit: An explicit NAK (selective reject) can request retransmission of just one packet. This
approach can expedite the retransmission but is not strictly needed.
One or both approaches are used in practice.
Selective Repeat Protocol (SRP): This protocol is mostly identical to the GBN protocol, except
that buffers are used and the sender and receiver each maintain a window of size greater than 1.
SRP works better when the link is very unreliable, because in this case retransmission tends
to happen more frequently, and selectively retransmitting frames is more efficient than
retransmitting all of them. SRP also requires a full-duplex link, since acknowledgements
travel in the backward direction while data frames are in progress.
Sender's Window (Ws) = Receiver's Window (Wr).
The window size should be less than or equal to half the sequence number space in the SR protocol. This
is to avoid packets being accepted incorrectly: if the size of the window is greater than half
the sequence number space, then when an ACK is lost, the sender may send new packets that
the receiver believes are retransmissions.
The sender can transmit new packets as long as their sequence numbers lie within Ws of the oldest unACKed packet.
The sender retransmits unACKed packets after a timeout, or upon a NAK if NAK is employed.
The receiver ACKs all correct packets and stores them until they can be delivered
in order to the higher layer. In Selective Repeat ARQ, the size of the sender and receiver
window must be at most one-half of 2^m.
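A tiny check of that window-size rule (m = 3 sequence-number bits is an assumed value for illustration):

m = 3                                  # bits in the sequence number (assumed)
max_window = 2 ** m // 2               # = 2^(m-1) = 4
print(max_window)                      # neither window may exceed this, or a lost ACK can
                                       # make new frames look like retransmissions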
High-level Data Link Control (HDLC)
High-level Data Link Control (HDLC) is a group of communication protocols of the data link
layer for transmitting data between network points or nodes. Since it is a data link protocol,
data is organized into frames. A frame is transmitted via the network to the destination, which
verifies its successful arrival. It is a bit-oriented protocol that is applicable to both point-to-point
and multipoint communications.
Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous
balanced mode.
Normal Response Mode (NRM): Here there are two types of stations: a primary station that
sends commands and secondary stations that respond to received commands. It is used
for both point-to-point and multipoint communications.
Asynchronous Balanced Mode (ABM): Here the configuration is balanced, i.e. each station can
both send commands and respond to commands. It is used only for point-to-point
communications.

HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure
varies according to the type of frame. The fields of an HDLC frame are −
Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
Address − It contains the address of the receiver. If the frame is sent by the primary station,
it contains the address(es) of the secondary station(s). If it is sent by the secondary station, it
contains the address of the primary station. The address field may be from 1 byte to several
bytes.
Control − It is 1 or 2 bytes containing flow and error control information.
Payload − This carries the data from the network layer. Its length may vary from one
network to another.
FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard code
used is CRC (cyclic redundancy check); a minimal layout sketch is given below.
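As a rough picture of how these fields sit next to each other, here is a hedged Python sketch; the address/control values and the FCS function are placeholders for illustration, not the encodings the standard defines:

FLAG = 0x7E                                        # the 01111110 flag byte

def build_hdlc_like_frame(address, control, payload, fcs_fn):
    body = bytes([address, control]) + payload     # address + control + payload
    fcs = fcs_fn(body)                             # 2 or 4 bytes of frame check sequence
    return bytes([FLAG]) + body + fcs + bytes([FLAG])

frame = build_hdlc_like_frame(0x03, 0x10, b"data", lambda body: b"\x00\x00")
print(frame.hex())                                 # 7e 03 10 ... 00 00 7e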

Types of HDLC Frames


There are three types of HDLC frames. The type of frame is determined by the control field
of the frame
I-frame: I-frames or Information frames carry user data from the network layer. They also
include flow and error control information piggybacked on the user data. The first bit of the
control field of an I-frame is 0.
S-frame: S-frames or Supervisory frames do not contain an information field. They are used for
flow and error control when piggybacking is not required. The first two bits of the control field
of an S-frame are 10.
U-frame: U-frames or Unnumbered frames are used for miscellaneous functions,
like link management. A U-frame may contain an information field, if required. The first two bits of the
control field of a U-frame are 11.
Random Access Protocol
In this protocol, all stations have equal priority to send data over the channel. In a
random access protocol, no station depends on another station, and no
station controls another. Depending on the channel's state (idle or busy), each station
transmits its data frame. However, if more than one station sends data over the channel at
the same time, there may be a collision or data conflict. Due to a collision, the data frame
packets may be lost or changed, and hence they are not received correctly by the receiver.
Following are the different methods of random-access protocols for broadcasting frames on
the channel.

• Aloha

• CSMA

• CSMA/CD

• CSMA/CA
ALOHA Random Access Protocol
It was designed for wireless LANs (Local Area Networks) but can also be used on any shared
medium to transmit data. Using this method, any station can transmit data across the network
whenever a data frame is available for transmission.
Aloha Rules
1. Any station can transmit data to a channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur and data frames may be lost during transmission from multiple
stations.
4. Aloha relies on acknowledgement of the frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha
Whenever data is available for sending over the channel at a station, we use Pure Aloha. In
Pure Aloha, each station transmits data onto the channel without checking whether the
channel is idle or busy, so collisions may occur and the data frame can be lost.
After a station transmits a data frame onto the channel, it waits for the
receiver's acknowledgment. If the acknowledgment does not arrive within the specified
time, the station assumes the frame has been lost or destroyed, waits for a random amount
of time, called the backoff time (Tb), and retransmits the frame, until all the data are
successfully transmitted to the receiver.
The total vulnerable time of Pure Aloha is 2 × Tfr. Maximum throughput occurs at G = 1/2
and is about 18.4%. The throughput for successful transmission of a data frame is S = G × e^(−2G).
Slotted Aloha
Slotted Aloha was designed to overcome the inefficiency of Pure Aloha, which has a very
high probability of frame collision. In Slotted Aloha, the shared channel is divided
into fixed time intervals called slots. If a station wants to send a frame on the shared
channel, the frame can only be sent at the beginning of a slot, and only one frame is
allowed to be sent in each slot. If a station cannot send its data at the beginning
of a slot, it has to wait until the beginning of the next slot.
However, the possibility of a collision remains when two or more stations try to send a frame
at the beginning of the same time slot.
Maximum throughput occurs in Slotted Aloha at G = 1 and is about 36.8%. The probability of
successfully transmitting a data frame in Slotted Aloha is S = G × e^(−G). The total
vulnerable time required in Slotted Aloha is Tfr.
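A quick numerical check of the two throughput formulas (a hedged sketch; G is the average number of frames generated per frame time):

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)                 # S = G * e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)                     # S = G * e^(-G)

print(round(pure_aloha_throughput(0.5), 3))     # -> 0.184, i.e. 18.4% at G = 1/2
print(round(slotted_aloha_throughput(1.0), 3))  # -> 0.368, i.e. about 36.8% at G = 1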

CSMA (Carrier Sense Multiple Access)


Carrier Sense Multiple Access is a media access protocol in which a station senses the traffic on the
channel (idle or busy) before transmitting data. If the channel is idle, the
station can send data onto the channel; otherwise, it must wait until the channel becomes idle.
This reduces the chance of a collision on the transmission medium.

CSMA Access Modes


1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared
channel; if the channel is idle, it sends the data immediately. Otherwise it keeps
sensing the channel until it becomes idle and then transmits the frame unconditionally as
soon as the channel is idle.
Non-Persistent: In this access mode of CSMA, before transmitting data each
node senses the channel; if the channel is idle, it sends the data immediately.
Otherwise, the station waits for a random time (it does not sense continuously), and when the
channel is then found to be idle, it transmits the frame.
P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In P-
persistent mode each node senses the channel, and if the channel is idle, it
sends a frame with probability p. Otherwise (with probability q = 1 − p) it waits for the next
time slot and checks the channel again.
O-Persistent: In the O-persistent method, a transmission order (priority) is assigned to the
stations before the frame is transmitted on the shared channel. If the channel is found to be
idle, each station waits for its assigned turn to transmit the data.
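The p-persistent decision described above can be sketched in a few lines of Python (the probability value is an arbitrary assumption for illustration):

import random

def p_persistent_decision(channel_idle, p=0.1):
    if not channel_idle:
        return "wait until the channel is idle"
    if random.random() < p:
        return "transmit"                        # send with probability p
    return "wait for the next time slot"         # with probability q = 1 - p, check again later

print(p_persistent_decision(channel_idle=True))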

CSMA/ CD
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network protocol for
transmitting data frames that works with the medium access control layer. A station first
senses the shared channel before broadcasting a frame; if the channel is idle, it
transmits the frame while monitoring whether the transmission is successful. If the frame is
successfully received, the station sends the next frame. If a collision is detected, the station
sends a jam/stop signal on the shared channel to terminate the data
transmission. After that, it waits for a random time before sending the frame again.
CSMA/ CA
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a network protocol for the
carrier transmission of data frames that works with the medium access control layer.
When a station sends a data frame onto the channel, it listens to the channel to check whether
the transmission is successful. If the station receives only a single signal (its own), the data
frame has been successfully transmitted to the receiver. But if it receives two signals
(its own and one from another station whose frame collided with it), a collision has occurred on the
shared channel. The sender detects the collision of the frame when it receives this second signal.

4. Channelization
In channelization, the available bandwidth of the link is shared in time, frequency or code among multiple
stations so that they can access the channel simultaneously.
Frequency Division Multiple Access (FDMA): The available bandwidth is divided into equal
bands so that each station can be allocated its own band. Guard bands are also added so
that no two bands overlap, to avoid crosstalk and noise.
Time Division Multiple Access (TDMA): Here the bandwidth is shared between multiple
stations. To avoid collisions, time is divided into slots and stations are allotted these slots for
transmitting data. However, there is a synchronization overhead, as each station needs to
know its time slot; this is resolved by adding synchronization bits to each slot. Another issue
with TDMA is propagation delay, which is resolved by the addition of guard times.
Code Division Multiple Access (CDMA): One channel carries all transmissions simultaneously;
there is neither division of bandwidth nor division of time. For example, if there are many
people in a room all speaking at the same time, perfect reception is still possible if only the
two people communicating speak the same language, the other conversations appearing as
noise. Similarly, data from different stations can be transmitted simultaneously using different code languages.
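The Python sketch below illustrates the CDMA idea with two stations and short orthogonal chip sequences (Walsh-style codes chosen for illustration); each receiver recovers its own station's bit from the combined signal:

c1 = [+1, +1, +1, +1]                  # chip sequence of station 1 (assumed)
c2 = [+1, -1, +1, -1]                  # chip sequence of station 2 (assumed, orthogonal to c1)

def send(bit, code):                   # bit is +1 or -1
    return [bit * c for c in code]

# Both stations transmit at the same time; the channel simply adds the two signals
channel = [a + b for a, b in zip(send(+1, c1), send(-1, c2))]

def recover(signal, code):
    return sum(s * c for s, c in zip(signal, code)) / len(code)

print(recover(channel, c1))            # ->  1.0  (station 1 sent +1)
print(recover(channel, c2))            # -> -1.0  (station 2 sent -1)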

Wired LANs / Ethernet


A Local Area Network (LAN) is a data communication network connecting various terminals
or computers within a building or limited geographical area. The connection among the
devices could be wired or wireless. Ethernet, Token Ring and Wireless LAN using IEEE 802.11
are examples of standard LAN technologies.
LAN has the following topologies:

• Star Topology

• Bus Topology

• Ring Topology

• Mesh Topology
• Hybrid Topology

• Tree Topology
Ethernet:
Ethernet is the most widely used LAN technology; it is defined under the IEEE 802.3 standard.
The reasons behind its wide usability are that Ethernet is easy to understand, implement and
maintain, and allows low-cost network implementation. Ethernet also offers flexibility in
terms of the topologies that are allowed. Ethernet generally uses a bus topology. Ethernet
operates in two layers of the OSI model: the Physical Layer and the Data Link Layer. For Ethernet,
the protocol data unit is the frame, since we mainly deal with the data link layer. To handle collisions,
the access control mechanism used in Ethernet is CSMA/CD. The Manchester encoding
technique is used in Ethernet.
Since we are talking about the IEEE 802.3 standard Ethernet, a 0 is expressed by a high-
to-low transition and a 1 by a low-to-high transition. In both Manchester encoding and
Differential Manchester encoding, the baud rate is double the bit rate.

Advantages of Ethernet:
Speed: When compared to a wireless connection, Ethernet provides significantly higher
speed, because Ethernet is a dedicated one-to-one connection. As a result, speeds of
up to 10 Gigabits per second (Gbps) or even 100 Gbps are possible.
Efficiency: An Ethernet cable, such as Cat6, consumes less electricity, even less than a WiFi
connection. As a result, these Ethernet cables are considered the most energy-efficient.
Good data transfer quality: Because it is resistant to noise, the information transferred is of
high quality.
Ethernet LANs consist of network nodes and interconnecting media or links. The network
nodes can be of two types:
Data Terminal Equipment (DTE): Generally, DTEs are the end devices that convert user
information into signals or reconvert received signals. DTE devices include personal
computers, workstations, file servers and print servers, also referred to as end stations. These
devices are either the source or the destination of data frames. The data terminal
equipment may be a single piece of equipment or multiple pieces of equipment that are
interconnected and perform all the functions required to allow the user to communicate. A
user interacts with the DTE, or the DTE may itself be the user.
Data Communication Equipment (DCE): DCEs are the intermediate network devices that
receive and forward frames across the network. They may be standalone devices
such as repeaters, network switches and routers, or communications interface units
such as interface cards and modems. The DCE performs functions such as signal
conversion and coding, and may be a part of the DTE or intermediate equipment.
Currently, these data rates are defined for operation over optical fibres and twisted-pair
cables:
i) Fast Ethernet: Fast Ethernet refers to an Ethernet network that can transfer data at a rate
of 100 Mbit/s.
ii) Gigabit Ethernet: Gigabit Ethernet delivers a data rate of 1,000 Mbit/s (1 Gbit/s).
iii) 10 Gigabit Ethernet: 10 Gigabit Ethernet is the recent generation and delivers a data
rate of 10 Gbit/s (10,000 Mbit/s). It is generally used for backbones in high-end applications
requiring high data rates.

Data Encoding Techniques


Encoding is the process of converting the data or a given sequence of characters, symbols,
alphabets etc., into a specified format, for the secured transmission of data. Decoding is the
reverse process of encoding which is to extract the information from the converted format.
Data Encoding
Encoding is the process of using various patterns of voltage or current levels to represent 1s
and 0s of the digital signals on the transmission link.
The common types of line encoding are Unipolar, Polar, Bipolar, and Manchester.
Encoding Techniques
The data encoding technique is divided into the following types, depending upon the type of
data conversion.
Analog data to Analog signals − The modulation techniques such as Amplitude Modulation,
Frequency Modulation and Phase Modulation of analog signals, fall under this category.
Analog data to Digital signals − This process can be termed as digitization, which is done by
Pulse Code Modulation PCM. Hence, it is nothing but digital modulation. As we have already
discussed, sampling and quantization are the important factors in this. Delta Modulation
gives a better output than PCM.
Digital data to Analog signals − The modulation techniques such as Amplitude Shift Keying
ASK, Frequency Shift Keying FSK, Phase Shift Keying PSK, etc., fall under this category. These
will be discussed in subsequent chapters.
Digital data to Digital signals − These are discussed in this section. There are several ways to map digital
data to digital signals. Some of them are −
Non Return to Zero NRZ
NRZ codes use a high voltage level for 1 and a low voltage level for 0. The main behaviour of
NRZ codes is that the voltage level remains constant during the bit interval. The end or start of a
bit is not indicated, and the same voltage state is maintained if the value of the
previous bit and the value of the present bit are the same.
The following figure explains the concept of NRZ coding.
Considering the above example, since there is a long sequence at a constant voltage level and
clock synchronization may be lost due to the absence of transitions, it becomes difficult
for the receiver to differentiate between 0 and 1.
NRZ codes have the disadvantage that the synchronization of the transmitter clock with the
receiver clock gets completely disturbed when there is a long string of 1s or 0s. Hence, a
separate clock line needs to be provided.
Bi-phase Encoding
The signal level is checked twice for every bit time, both initially and in the middle. Hence,
the clock rate is double the data transfer rate and thus the modulation rate is also doubled.
The clock is taken from the signal itself. The bandwidth required for this coding is greater.
There are two types of Bi-phase Encoding.

• Bi-phase Manchester

• Differential Manchester

Bi-phase Manchester
In this type of coding, the transition is done at the middle of the bit-interval. The transition
for the resultant pulse is from High to Low in the middle of the interval, for the input bit 1.
While the transition is from Low to High for the input bit 0.
Differential Manchester
In this type of coding, there always occurs a transition in the middle of the bit interval. If
there occurs a transition at the beginning of the bit interval, then the input bit is 0. If no
transition occurs at the beginning of the bit interval, then the input bit is 1.
The following figure illustrates the waveforms of NRZ-L, NRZ-I, Bi-phase Manchester and
Differential Manchester coding for different digital inputs.
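The following Python sketch encodes a bit string using the conventions described in this section (Manchester: 1 is a high-to-low mid-bit transition, 0 is low-to-high; Differential Manchester: a transition at the start of the interval means 0). Note that IEEE 802.3, mentioned earlier, uses the opposite Manchester polarity; the code is a hedged illustration only:

def manchester(bits):
    out = []
    for b in bits:
        out += ["H", "L"] if b == "1" else ["L", "H"]     # 1 -> high,low ; 0 -> low,high
    return out

def differential_manchester(bits, level="L"):
    out = []
    for b in bits:
        if b == "0":
            level = "H" if level == "L" else "L"          # extra transition at the start for 0
        first = level
        second = "H" if first == "L" else "L"             # mandatory mid-bit transition
        out += [first, second]
        level = second
    return out

print(manchester("1011"))                # ['H','L','L','H','H','L','H','L']
print(differential_manchester("1011"))   # ['L','H','L','H','H','L','L','H']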
Block Coding
Among the types of block coding, the famous ones are 4B/5B encoding and 8B/6T encoding.
The number of bits are processed in different manners, in both of these processes.
4B/5B Encoding
Compared with NRZ coding, Manchester encoding requires a clock of double the speed to
send the data. Here, as the name implies, a 4-bit code group is mapped to 5 bits, with a
minimum number of 1 bits in the group.
The clock synchronization problem in NRZ-I encoding is avoided by assigning an equivalent
word of 5 bits in the place of each block of 4 consecutive bits. These 5-bit words are
predetermined in a dictionary.
The basic idea of selecting a 5-bit code is that it should have no more than one leading 0 and it should
have no more than two trailing 0s. Hence, these words are chosen so that at least two
transitions take place per block of bits.
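A small Python sketch of the substitution, using a few entries of the commonly published 4B/5B table (only a subset is shown; the full table is defined in the FDDI/100BASE-X specifications):

# A subset of the commonly published 4B/5B table (illustrative, not complete)
FOUR_B_FIVE_B = {
    "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
    "1110": "11100", "1111": "11101",
}

def encode_4b5b(bits):
    # Input length is assumed to be a multiple of 4 and limited to the entries above
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(FOUR_B_FIVE_B[g] for g in groups)

print(encode_4b5b("00001111"))          # -> "1111011101"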
8B/6T Encoding
We have used two voltage levels to send a single bit over a single signal. But if we use more
than two voltage levels, we can send more bits per signal element.
For example, if 3 voltage levels (ternary signalling) are used and 6 signal elements represent
8 bits, the encoding is termed 8B/6T encoding. In this method we have as many as 3^6 = 729
signal combinations for 2^8 = 256 bit combinations.
These are the techniques mostly used for converting digital data into digital signals by
compressing or coding them for reliable transmission of data.
WiMax
What is WiMAX?
WiMAX stands for "Worldwide Interoperability for Microwave Access," a telecommunications
standard that describes fixed and fully mobile Internet access services. The protocol follows
some aspects of the IEEE 802.16 Standard.
WiMAX products and services are most likely to be found in "last mile" applications. WiMAX
enables ISPs and carriers to deliver Internet access to homes and businesses without the
need for physical cabling (copper, cable, etc.) to reach the customer's location.
Difference between WiMAX and WiFi
WiMAX is sometimes compared to WiFi because both technologies rely on wireless Internet
connectivity and are complementary.
Following are some of the major differences between WiMAX and WiFi −
WiMAX's range is measured in kilometers, but WiFi's range is measured in meters and is
only available locally. The reliability and range of WiMAX make it ideal for providing Internet
access to significant urban areas.
WiFi uses an unlicensed spectrum, whereas WiMAX uses a licensed or unlicensed band.
WiFi is increasingly being used by end-user devices such as laptops, desktops, and
cellphones. As a result, WiMAX service providers typically give a WiMAX subscriber unit to
the consumer. This device connects to the provider's network and provides customers with
WiFi access and convenience inside the WiFi range.
Architecture of WiMAX
The physical layer – The physical layer is in charge of signal encoding and decoding and bit
transmission and receiving. It turns MAC layer frames into transmittable signals. QPSK,
QAM-16, and QAM-64 are some of the modulation methods utilized on this layer.
MAC Layer – This layer serves as a link between the WiMAX protocol stack's convergence and
physical layers. It is based on CSMA/CA (Carrier Sense Multiple Access with Collision
Avoidance) and allows point-to-multipoint communication.
Convergence Layer – This layer provides information from the external network. It takes
higher-layer protocol data units (PDUs) and converts them into lower-layer PDUs. It has
different functions depending on whatever service is used.
Advantages of WiMAX
WiMAX offers the following benefits −
1. It allows for very high-speed voice and data transmission over extended distances.
2. Hundreds of users can be served by a single WiMAX BS.
3. It is seen as a less expensive alternative to broadband wired technologies such as ADSL,
cable modem, etc.
4. Higher speeds are possible.
With mobile WiMAX, you can get a more comprehensive coverage range and cellular-like
performance.
Disadvantages of WiMAX
WiMAX has the following drawbacks −
1. Subscribers located far away from the WiMAX BS require a LOS (Line of Sight) connection.
2. Bad weather, such as rain, will disrupt the WiMAX signal and frequently result in a loss of
connection.
3. WiMAX is a power-hungry technology that requires a lot of electrical power.
4. It is not backward compatible with any wireless cellular technologies, so the initial cost of
starting a WiMax is very high.
5. WiMAX BSs and towers must be set up from scratch. Since a skilled workforce is needed, this
results in significant starting expenses and higher operational expenditures.
