Computer Network Soft Copy (Rashmita Forozaun)
1. Physical Layer
2. Data Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
Physical Layer
This layer is the lowest layer in the OSI model. It handles the transmission of data
between two machines communicating over a physical medium, which can be
optical fibre, copper wire, a wireless channel, etc. The following are the main functions of the
physical layer:
2. Encoding and Signalling: How the bits are encoded on the medium is also
decided by this layer. For example, on a copper wire medium, we can use
different voltage levels for a certain time interval to represent '0' and '1'. We may
use +5mV for 1nsec to represent '1' and -5mV for 1nsec to represent '0'. All the
issues of modulation are dealt with in this layer; e.g., we may use binary phase shift
keying to represent '1' and '0', rather than using different voltage
levels, if we have to transmit over RF waves.
3. Data Transmission and Reception: The transfer of each bit of data is the
responsibility of this layer. This layer ensures the transmission of each bit with a
high probability. The transmission of the bits is not completely reliable, as there is
no error correction in this layer.
4. Topology and Network Design: Network design is an integral part of the
physical layer: where in the network the router is going to be placed, where
the switches will be used, where we will put the hubs, how many machines
each switch is going to handle, which server is going to be placed where, and many
such concerns are taken care of by the physical layer. The various kinds of
network topologies we may decide to use are ring, bus, star, or a hybrid of these
topologies, depending on our requirements.
Data Link Layer
This layer provides reliable transmission of a packet by using the services of the physical
layer, which transmits bits over the medium in an unreliable fashion. This layer is
concerned with:
1. Framing: Breaking input data into frames (typically a few hundred bytes) and
managing the frame boundaries and the size of each frame.
2. Acknowledgment: Sent by the receiving end to inform the source that the frame
was received without any error.
3. Sequence Numbering: To identify which frame was received.
4. Error Detection: The frames may be damaged, lost or duplicated, leading to
errors. The error control is on a link-to-link basis.
5. Retransmission: The frame is retransmitted if the source fails to receive an
acknowledgment.
6. Flow Control: Necessary so that a fast transmitter does not overwhelm a slow receiver.
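Several of these functions fit together in the classic stop-and-wait scheme. The toy sketch below (illustrative Python only; the frame format, names and loss model are all made up for this example, not taken from any real protocol) shows framing, 1-bit sequence numbers, acknowledgment and retransmission over a lossy link:

```python
import random

FRAME_SIZE = 4  # bytes per frame, kept tiny for illustration

def make_frames(data: bytes):
    """Framing: break the input into fixed-size frames, each tagged
    with a 1-bit alternating sequence number."""
    n_frames = -(-len(data) // FRAME_SIZE)  # ceiling division
    return [(i % 2, data[i * FRAME_SIZE:(i + 1) * FRAME_SIZE])
            for i in range(n_frames)]

class Receiver:
    def __init__(self):
        self.expected = 0
        self.received = b""

    def accept(self, seq, payload):
        """Deliver in-order frames once; the ACK echoes the sequence number."""
        if seq == self.expected:
            self.received += payload
            self.expected ^= 1
        return seq

def stop_and_wait(data: bytes, loss_rate=0.3, rng=None):
    """Retransmission: resend each frame until its ACK arrives."""
    rng = rng or random.Random(42)
    rx = Receiver()
    for seq, payload in make_frames(data):
        while True:
            if rng.random() < loss_rate:
                continue  # frame lost on the link: no ACK, so retransmit
            ack = rx.accept(seq, payload)
            if ack == seq:
                break
    return rx.received

print(stop_and_wait(b"hello data link layer"))
```

Even with 30% frame loss the receiver ends up with an exact copy, which is the point of the acknowledgment/retransmission pair.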
Network Layer
Routing
• Static : Routes are based on static tables that are "wired into" the network and are
rarely changed.
• Dynamic : All packets of one application can follow different routes depending
upon the topology of the network, the shortest path and the current network load.
• Semi-Dynamic : A route is chosen at the start of each conversation and then all
the packets of the application follow the same route.
Congestion Control: A router can be connected to 4-5 networks. If all the networks send
packets at the same time at the maximum possible rate, the router may not be able to
handle all the packets and may drop some or all of them. In this context the dropping of
packets should be minimized, and the source whose packet was dropped should be
informed. The control of such congestion is also a function of the network layer.
LECTURE NOTE 4
Internetworking: Internetworks are multiple networks that are connected in such a way
that they act as one large network, connecting multiple office or department networks.
Internetworks are connected by networking hardware such as routers, switches, and
bridges. Internetworking is a solution born of three networking problems: isolated LANs,
duplication of resources, and the lack of a centralized network management system. With
connected LANs, companies no longer have to duplicate programs or resources on each
network. This in turn gives way to managing the network from one central location
instead of trying to manage each separate LAN. We should be able to transmit any packet
from one network to any other network even if they follow different protocols or use
different addressing modes.
Inter-Networking
Network Layer does not guarantee that the packet will reach its intended destination.
There are no reliability guarantees.
Transport Layer
Fragmentation and Reassembly
• Types of Service : The transport layer also decides the type of service that should
be provided to the session layer. The service may be perfectly reliable, reliable
within certain tolerances, or not reliable at all. The messages may
or may not be received in the order in which they were sent. The decision regarding
the type of service to be provided is taken at the time the connection is
established.
• Error Control : If reliable service is provided, then error detection and error
recovery operations are also performed. It provides an error control mechanism on
an end-to-end basis.
• Flow Control : A slow host cannot keep pace with a fast one. Hence, this is a
mechanism to regulate the flow of information.
• Connection Establishment / Release : The transport layer also establishes and
releases connections across the network. This requires some sort of naming
mechanism so that a process on one machine can indicate with whom it wants to
communicate.
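In practice, the reliable end-to-end service described above is what TCP provides. A minimal loopback sketch using Python's standard socket API shows connection establishment, data transfer and release; the echo-and-uppercase behaviour is purely illustrative:

```python
import socket
import threading

def echo_server(sock):
    conn, _ = sock.accept()          # connection establishment (server side)
    with conn:
        data = conn.recv(1024)       # reliable, in-order byte delivery
        conn.sendall(data.upper())   # connection released when 'with' exits

# Bind to port 0 so the OS picks a free ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())  # TCP's three-way handshake happens here
client.sendall(b"transport layer")
reply = client.recv(1024)
client.close()                        # connection release (client side)
print(reply)
```

The naming mechanism mentioned in the notes corresponds here to the (IP address, port) pair passed to `connect`.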
LECTURE NOTE 5
Session Layer
It deals with the concept of sessions: when a user logs in to a remote server, he should
be authenticated before getting access to the files and application programs. Another job
of the session layer is to establish and maintain sessions. If the session breaks down
during the transfer of data between two machines, it is the session layer which re-
establishes the connection. It also ensures that the data transfer restarts from where it
broke, keeping this transparent to the end user. E.g., in the case of a session with a database
server, this layer introduces checkpoints at various places so that if the connection
is broken and re-established, the transaction running on the database is not lost even if the
user has not committed. This activity is called Synchronization. Another function of this
layer is Dialogue Control, which determines whose turn it is to speak in a session. It is
useful in video conferencing.
LECTURE NOTE 6
Presentation Layer
This layer is concerned with the syntax and semantics of the information transmitted. In
order to make it possible for computers with different data representations to
communicate, the data structures to be exchanged can be defined in an abstract way, along
with a standard encoding. The layer manages these abstract data structures and allows
higher-level data structures to be defined and exchanged. It encodes the data in a standard
agreed way (network format). Suppose there are two machines A and B, one following
'Big Endian' and the other 'Little Endian' data representation. This layer ensures that the
data transmitted by one gets converted into a form compatible with the other machine.
Other functions include compression, encryption, etc.
Application Layer
The seventh layer contains the application protocols with which the user gains access to
the network. The choice of which specific protocols and their associated functions are to
be used at the application level is up to the individual user. Thus the boundary between
the presentation layer and the application layer represents a separation of the protocols
imposed by the network designers from those being selected and implemented by the
network users. For example, commonly used protocols are HTTP (for web browsing) and
FTP (for file transfer).
Network Layers as in Practice
In most networks today, we do not follow the OSI model of seven layers. What is
actually implemented is as follows. The functionality of the Application layer and the
Presentation layer is merged into one, called the Application layer. The
functionality of the Session layer is not implemented in most networks today. Also, the
Data Link layer is split theoretically into the MAC (Medium Access Control) layer and the
LLC (Logical Link Control) layer. But again, in practice, the LLC layer is not implemented by
most networks. So as of today, the network architecture has 5 layers only.
Types of Medium
1. Guided Media : Guided media means that the signal is guided by the presence of a
physical medium, i.e. the signal is under control and remains in the physical wire.
E.g., copper wire.
2. Unguided Media : Unguided media means that there is no physical path for the
signal to propagate; the signal travels as electromagnetic waves.
There is no control on the flow of the signal. E.g., radio waves.
Communication Links
In a network, nodes are connected through links. The communication through links can be
classified as:
1. Simplex : Communication can take place in only one direction, e.g. TV
broadcasting.
2. Half-duplex : Communication can take place in one direction at a time. Suppose
nodes A and B are connected; then half-duplex communication means that at a time
data can flow from A to B or from B to A, but not simultaneously. E.g., two persons
talking to each other such that when one speaks the other listens, and vice versa.
3. Full-duplex : Communication can take place simultaneously in both directions.
E.g., a discussion in a group without discipline.
Links can also be classified by how nodes share them:
1. Point to Point : In this communication only two nodes are connected to each
other. When a node sends a packet, it can be received only by the node on the
other side and none else.
2. Multipoint : It is a kind of shared communication, in which the signal can be
received by all nodes. This is also called broadcast.
Generally, two kinds of problems are associated with the transmission of signals.
Bandwidth
Bandwidth simply means how many bits can be transmitted per second over the
communication channel. In technical terms, it indicates the width of the frequency spectrum.
Transmission Media
1. Copper
o Coaxial Cable
o Twisted Pair
2. Optical Fiber
2. Twisted Pair: A twisted pair consists of two insulated copper wires, typically 1mm
thick. The wires are twisted together in a helical form; the purpose of twisting is to reduce
crosstalk interference between several pairs. Twisted pair is much cheaper than coaxial
cable, but it is susceptible to noise and electromagnetic interference, and attenuation is
large.
The most common application of twisted pair is the telephone system. Nearly all
telephones are connected to the telephone company office by a twisted pair.
Twisted pair can run several kilometers without amplification, but for longer
distances repeaters are needed. Twisted pairs can be used for both analog and
digital transmission. The bandwidth depends on the thickness of wire and the
distance travelled. Twisted pairs are generally limited in distance, bandwidth and
data rate.
2. Optical Fiber: In optical fiber, light is used to send data. In general terms, the
presence of light is taken as bit 1 and its absence as bit 0. An optical fiber consists
of an inner core of either glass or plastic. The core is surrounded by cladding of the same
material but of a different refractive index. This cladding is surrounded by a plastic
jacket which protects the optical fiber from electromagnetic interference and harsh
environments. Optical fibers use the principle of total internal reflection to transfer
data. Optical fiber is much better in bandwidth than copper
wire, since there is hardly any attenuation or electromagnetic interference in
optical fibers. Hence there is less need to improve signal quality in long
distance transmission. The disadvantage of optical fiber is that the end points (e.g.
switches) are fairly expensive.
Differences between different kinds of optical fibers:
1. Depending on material
Made of glass
Made of plastic.
2. Depending on radius
Thin optical fiber
Thick optical fiber
3. Depending on light source
LED (for low bandwidth)
Injection laser diode (for high bandwidth)
LECTURE NOTE 9
Wireless Transmission
1. Radio: Radio is a general term used for any kind of frequency, but the higher
frequencies are usually termed microwave and the lower frequency band comes
under radio frequency. There are many applications of radio, e.g. cordless
keyboards, wireless LANs and wireless Ethernet, but the range is limited to only a few
hundred meters. Depending on the frequency, radio offers different bandwidths.
2. Terrestrial microwave: In terrestrial microwave, two antennas are used for
communication. A focused beam emerges from one antenna and is received by the
other, provided that the antennas face each other with no
obstacle in between. For this reason, antennas are situated on high towers. Despite the
curvature of the earth, terrestrial microwave can thus be used for long distance
communication with high bandwidth. Telecom departments also use this for
long distance communication. An advantage of wireless communication is that it
does not require laying down wires in the city, hence no permissions are required.
3. Satellite communication: A satellite acts as a switch in the sky. On earth, VSATs (Very
Small Aperture Terminals) are used to transmit and receive data from the satellite.
Generally one station on earth transmits a signal to the satellite, and it is received by
many stations on earth. Satellite communication is generally used in places
where it is very difficult to obtain line of sight, i.e. in highly irregular terrestrial
regions. In terms of noise, wireless media are not as good as wired media. There
are frequency bands in wireless communication, and two stations should not be
allowed to transmit simultaneously in the same frequency band. The most promising
advantage of satellites is broadcasting. If satellites are used for point-to-point
communication, they are expensive compared to wired media.
LECTURE NOTE 10
Data Encoding
Digital data to analog signals
Encoding Techniques
• Non return to zero (NRZ): NRZ codes share the property that the voltage level is
constant during a bit interval. High level voltage = bit 1 and low level voltage =
bit 0. A problem arises when there is a long sequence of 0s or 1s and the voltage
level is maintained at the same value for a long time. This creates a problem at
the receiving end because the clock synchronization is lost due to the lack of
any transitions, and hence it is difficult to determine the exact number of 0s or 1s
in the sequence.
NRZ-I has an advantage over NRZ-L. Consider the situation when two data wires
are wrongly connected in each other's place. In NRZ-L all bit sequences get
reversed (because the voltage levels get swapped), whereas in NRZ-I, since bits are
recognized by transitions, the bits will be correctly interpreted. A disadvantage of
NRZ codes is that a long string of identical bits will prevent synchronization of the transmitter
clock with the receiver clock, and a separate clock line needs to be provided.
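The wire-swap argument can be checked with a small sketch (illustrative Python; +1 and -1 stand in for the two voltage levels, and the idle level is assumed to be -1):

```python
def nrz_l(bits):
    """NRZ-L: the level itself encodes the bit (+1 for '1', -1 for '0')."""
    return [+1 if b == 1 else -1 for b in bits]

def nrz_i(bits, start=-1):
    """NRZ-I: a '1' is a transition, a '0' keeps the previous level."""
    levels, level = [], start
    for b in bits:
        if b == 1:
            level = -level
        levels.append(level)
    return levels

def decode_nrz_i(levels, start=-1):
    """Recover bits from transitions, independent of absolute polarity."""
    bits, prev = [], start
    for lv in levels:
        bits.append(1 if lv != prev else 0)
        prev = lv
    return bits

bits = [0, 1, 0, 0, 1, 1]
wire = nrz_i(bits)
swapped = [-lv for lv in wire]  # the "wires swapped" fault (idle level flips too)
print(decode_nrz_i(wire), decode_nrz_i(swapped, start=+1))
# -> [0, 1, 0, 0, 1, 1] [0, 1, 0, 0, 1, 1]
```

Because decoding looks only at whether the level changed, swapping the polarity of the whole line leaves the decoded bits intact; with NRZ-L every bit would invert.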
no more than one leading 0
no more than two trailing 0s
Thus it is ensured that we can never have more than three consecutive 0s, even across code boundaries.
Now these 5-bit codes are transmitted using NRZ-I coding; since every 1 produces a
transition and runs of 0s are bounded, the synchronization problem is solved.
Of the remaining 16 codes, 7 are invalid and the others are used to send
control information like line idle (11111), line dead (00000), halt (00100),
etc.
There are other variants for this scheme viz. 5B/6B, 8B/10B etc. These
have self suggesting names.
The process is called digitization. The sampling frequency must be at least twice the
highest frequency present in the signal so that it may be faithfully regenerated.
Quantization - The max. and min. values of amplitude in the sample are noted. Depending on
the number of bits (say n) we use, we divide the interval (min, max) into 2^n
levels. The amplitude is then approximated to the nearest level by an n-bit integer. The
digital signal thus consists of blocks of n bits. On reception, the process is reversed to
produce the analog signal. But a lot of information can be lost if fewer bits are used or the
sampling frequency is not high enough.
• Pulse code modulation (PCM): Here the intervals are equally spaced. 8-bit PCM uses
256 different levels of amplitude. In non-linear encoding, the levels may be unequally
spaced.
• Delta Modulation (DM): Since successive samples do not differ very much, we
send the difference between the previous and the present sample. It requires fewer bits
than PCM.
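The sampling and quantization steps above can be sketched as follows (illustrative Python; the 1 Hz sine, 8 Hz sampling rate and 4-bit resolution are arbitrary choices for the example):

```python
import math

def quantize(samples, n_bits):
    """Uniform quantization: map each amplitude in [lo, hi] to the nearest
    of 2**n_bits levels, returned as an n-bit integer code."""
    lo, hi = min(samples), max(samples)
    step = (hi - lo) / (2 ** n_bits - 1)
    codes = [round((s - lo) / step) for s in samples]
    # Reconstruction (receiver side): reverse the mapping.
    rebuilt = [lo + c * step for c in codes]
    return codes, rebuilt

# Sample a 1 Hz sine at 8 Hz -- comfortably above the Nyquist rate of 2 Hz.
samples = [math.sin(2 * math.pi * t / 8) for t in range(8)]
codes, rebuilt = quantize(samples, n_bits=4)
error = max(abs(a - b) for a, b in zip(samples, rebuilt))
print(codes)
print(round(error, 4))
```

The maximum reconstruction error is bounded by half a quantization step, which is exactly the information loss the notes warn about when too few bits are used.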
For two devices linked by a transmission medium to exchange data, a high degree of co-
operation is required. Typically data is transmitted one bit at a time. The timing (rate,
duration, spacing) of these bits must be the same for transmitter and receiver. There are two
options for the transmission of bits.
Transmission Techniques:
Bit Stuffing: Suppose our flag byte is 01111110 (six 1s). The transmitter will
always insert an extra 0 bit after each occurrence of five 1s (except in flags).
After detecting a starting flag, the receiver monitors the bit stream. If a pattern of
five 1s appears, the sixth bit is examined: if it is 0, it is deleted; if it is 1 and the
next bit is 0, the combination is accepted as a flag. Similarly, byte stuffing is used for
byte-oriented transmission. Here we use an escape sequence to prefix a byte
similar to the flag, and two escape sequences if the byte is itself an escape sequence.
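The stuffing and destuffing rules can be sketched directly (illustrative Python; bits are represented as characters in a string for readability):

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Transmitter: insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: delete the 0 that follows any run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1  # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

payload = "0111111101111100"  # contains runs that could mimic a flag
frame = FLAG + bit_stuff(payload) + FLAG
print(frame)
```

After stuffing, the payload can never contain six consecutive 1s, so the flag pattern is guaranteed to appear only at frame boundaries.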
LECTURE NOTE 13:
Multiplexing
When two communicating nodes are connected through a media, it generally happens
that bandwidth of media is several times greater than that of the communicating nodes.
Transfer of a single signal at a time is both slow and expensive. The whole capacity of the
link is not being utilized in this case. This link can be further exploited by sending several
signals combined into one. This combining of signals into one is called multiplexing.
1. Synchronous TDM: Time slots are preassigned and fixed. Each
source is given its time slot at every turn due to it. This turn may be once
per cycle, or several turns per cycle if it has a high data transfer rate, or
once in a number of cycles if it is slow. The slot is given even if the
source is not ready with data, in which case the slot is transmitted empty.
2. Asynchronous TDM: In this method, slots are not fixed. They are allotted
dynamically depending on the speed of the sources and whether they are ready
for transmission.
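The contrast between the two TDM schemes can be sketched as follows (illustrative Python; `None` marks a slot that goes out empty, and the tagged tuples in the asynchronous case stand in for the addressing a real statistical multiplexer needs):

```python
def synchronous_tdm(sources, cycles):
    """Each source owns one fixed slot per cycle; empty slots still go out."""
    frame, queues = [], [list(s) for s in sources]
    for _ in range(cycles):
        for q in queues:
            frame.append(q.pop(0) if q else None)  # slot sent even if empty
    return frame

def asynchronous_tdm(sources, cycles):
    """Slots are granted only to sources that have data, tagged with the
    source index so the demultiplexer can route them."""
    frame, queues = [], [list(s) for s in sources]
    for _ in range(cycles):
        for i, q in enumerate(queues):
            if q:
                frame.append((i, q.pop(0)))
    return frame

sources = ["AB", "C", "DEFG"]
print(synchronous_tdm(sources, 4))
print(asynchronous_tdm(sources, 4))
```

With three sources of unequal speed, the synchronous frame carries several empty slots while the asynchronous frame carries only real data, which is exactly the efficiency difference described above.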
LECTURE NOTE 14
Network Topologies
A network topology is the basic design of a computer network. It is very much like a road
map: it details how key network components such as nodes and links are
interconnected. A network's topology is comparable to the blueprints of a new home, in
which components such as the electrical system, heating and air conditioning system, and
plumbing are integrated into the overall design. Taken from the Greek word "topos",
meaning "place", topology, in relation to networking, describes the configuration of the
network, including the location of the workstations and wiring connections. Basically, it
provides a definition of the components of a Local Area Network (LAN). A topology,
which is a pattern of interconnections among nodes, influences a network's cost and
performance. There are three primary types of network topologies, which refer to the
physical and logical layout of the network cabling. They are:
1. Star Topology: All devices in a star setup communicate through a
central hub by cable segments. Signals are transmitted and received through the
hub. It is the simplest and the oldest topology, and all telephone switches are based on
it. In a star topology, each network device has a home run of cabling back to a
network hub, giving each device a separate connection to the network. So, there
can be multiple connections in parallel.
The purpose of the terminators at either end of a bus network is to stop the signal
being reflected back.
Advantages
o Broadcasting and multicasting is simple since you just need to send out
one message
o Less expensive since less cable footage is required
o It is guaranteed that each host will be able to transmit within a finite time
interval
o Very orderly network where every device has access to the token and the
opportunity to transmit
o Performs better than a star network under heavy network load
Disadvantages
o Failure of one node brings the whole network down
o Error detection and network administration become difficult
o Moves, adds and changes of devices can affect the network
o It is slower than a star topology under normal load
Generally, a BUS architecture is preferred over the other topologies. Of course, this is a
very subjective opinion, and the final design depends on the requirements of the network
more than anything else. Lately, most networks are shifting towards the STAR topology.
Ideally, we would like to design networks which physically resemble the STAR topology
but behave like a BUS or RING topology.
LECTURE NOTE 15
Data Link Layer
The data link layer can be divided into two sublayers: the MAC (Medium Access Control) sublayer and the LLC (Logical Link Control) sublayer.
Aloha Protocols
History
The Aloha protocol was designed as part of a project at the University of Hawaii. It
provided data transmission between computers on several of the Hawaiian Islands using
radio transmissions.
• Communication was typically between remote stations and a central site named
Menehune, or vice versa.
• All messages to the Menehune were sent using the same frequency.
• When it received a message intact, the Menehune would broadcast an ack on a
distinct outgoing frequency.
• The outgoing frequency was also used for messages from the central site to
remote computers.
• All stations listened for messages on this second frequency.
Pure Aloha
Pure Aloha is an unslotted, fully decentralized protocol. It is extremely simple and trivial
to implement. The ground rule is: "when you want to talk, just talk!". So, a node which
wants to transmit will go ahead and send the packet on its broadcast channel, with no
consideration whatsoever as to whether anybody else is transmitting or not.
One serious drawback here is that you don't know whether what you are sending has been
received properly or not (so to say, "whether you've been heard and understood"). To
resolve this, in Pure Aloha, when one node finishes speaking, it expects an
acknowledgement within a finite amount of time; otherwise it simply retransmits the data.
This scheme works well in small networks where the load is not high. But in large, load-
intensive networks where many nodes may want to transmit at the same time, this scheme
fails miserably. This led to the development of Slotted Aloha.
Slotted Aloha
This is quite similar to Pure Aloha, differing only in the way transmissions take place.
Instead of transmitting right at demand time, the sender waits for some time. This delay is
specified as follows: the timeline is divided into equal slots and transmission is required
to take place only at slot boundaries. To be more precise, slotted Aloha makes the
following assumptions:
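The gain from slotting can be illustrated with a toy Monte Carlo comparison (illustrative Python; the frame time, offered load and seed are arbitrary choices, and the model ignores retransmissions). Slotting halves the vulnerable period around each frame, so more frames get through at the same load:

```python
import random

def simulate(n_frames, frame_time, total_time, slotted, rng):
    """Fraction of frames that survive: a frame is lost if any other
    frame's transmission overlaps it. The vulnerable window is two frame
    times for pure Aloha, one slot for slotted Aloha."""
    starts = [rng.uniform(0, total_time) for _ in range(n_frames)]
    if slotted:
        # Defer each frame to the nearest slot boundary.
        starts = [frame_time * round(s / frame_time) for s in starts]
    ok = 0
    for i, s in enumerate(starts):
        if all(abs(s - t) >= frame_time
               for j, t in enumerate(starts) if j != i):
            ok += 1
    return ok / n_frames

rng = random.Random(1)
pure = simulate(2000, 1.0, 4000.0, slotted=False, rng=rng)
slot = simulate(2000, 1.0, 4000.0, slotted=True, rng=rng)
print(round(pure, 2), round(slot, 2))
```

At this load (0.5 frames per frame time) the slotted success rate comes out noticeably higher, consistent with the halved vulnerable period.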
In both slotted and pure ALOHA, a node's decision to transmit is made independently of
the activity of the other nodes attached to the broadcast channel. In particular, a node
neither pays attention to whether another node happens to be transmitting when it begins
to transmit, nor stops transmitting if another node begins to interfere with its
transmission. As humans, we have protocols that allow us not only to
behave with more civility, but also to decrease the amount of time spent "colliding" with
each other in conversation, consequently increasing the amount of data we exchange
in our conversations. Specifically, there are two important rules for polite human
conversation:
1. Listen before speaking: If someone else is speaking, wait until they are done. In
the networking world, this is termed carrier sensing - a node listens to the channel
before transmitting. If a frame from another node is currently being transmitted
into the channel, a node then waits ("backs off") a random amount of time and
then again senses the channel. If the channel is sensed to be idle, the node then
begins frame transmission. Otherwise, the node waits another random amount of
time and repeats this process.
2. If someone else begins talking at the same time, stop talking. In the
networking world, this is termed collision detection - a transmitting node listens
to the channel while it is transmitting. If it detects that another node is
transmitting an interfering frame, it stops transmitting and uses some protocol to
determine when it should next attempt to transmit.
It is evident that the end-to-end channel propagation delay of a broadcast channel, the
time it takes for a signal to propagate from one end of the channel to the other, will play a
crucial role in determining its performance. The longer this propagation delay, the larger
the chance that a carrier-sensing node is not yet able to sense a transmission that has
already begun at another node in the network.
So, we need an improvement over CSMA - this led to the development of CSMA/CD.
In the case of wireless network it is possible that A is sending a message to B, but C is out
of its range and hence while "listening" on the network it will find the network to be free
and might try to send packets to B at the same time as A. So, there will be a collision at
B. The problem can be looked upon as if A and C are hidden from each other. Hence it is
called the "hidden node problem".
Consider the figure above. Suppose A wants to send a packet to B. Then it will first send a
small packet to B called "Request to Send" (RTS). In response, B sends a small packet
to A called "Clear to Send" (CTS). Only after A receives a CTS, it transmits the actual
data. Now, any of the nodes which can hear either CTS or RTS assume the network to be
busy. Hence even if some other node which is out of range of both A and B sends an RTS
to C (which can hear at least one of the RTS or CTS between A and B), C would not send
a CTS to it and hence the communication would not be established between C and D.
One issue that needs to be addressed is how long the rest of the nodes should wait before
they can transmit data over the network. The answer is that the RTS and CTS would carry
some information about the size of the data that B intends to transfer. So, they can
calculate the time that would be required for the transmission to be over, and assume the
network to be free after that. Another interesting issue is what a node should do if it hears
RTS but not a corresponding CTS. One possibility is that it assumes the recipient node
has not responded and hence no transmission is going on, but there is a catch in this. It is
possible that the node hearing RTS is just on the boundary of the node sending CTS.
Hence, it does hear CTS but the signal is so deteriorated that it fails to recognize it as a
CTS. Hence to be on the safer side, a node will not start transmission if it hears either of
an RTS or a CTS.
The assumption made in this whole discussion is that if a node X can send packets to a
node Y, it can also receive a packet from Y, which is a fair enough assumption given the
fact that we are talking of a local network where standard instruments would be used. If
that is not the case additional complexities would get introduced in the system.
The problem of range is there in wired networks as well in the form of deterioration of
signals. Normally to counter this, we use repeaters, which can regenerate the original
signal from a deteriorated one. But does that mean that we can build networks as long as
we want with repeaters? The answer, unfortunately, is NO! The reason is that beyond a
certain length CSMA/CD will break down.
Let us try to parametrize the above problem. Suppose "t" is the time taken for node A
to transmit the packet on the cable and "T" is the time the packet takes to reach from A
to B. Suppose transmission at A starts at time t0. In the worst case the collision takes
place just when the packet is about to reach B, say at t0+T-e (e being very small).
Then the collision information will take T-e time to propagate back to A. So, at t0+2(T-e)
A should still be transmitting. Hence, for the correct detection of collision (ignoring e),
t > 2T
t increases with the number of bits to be transferred and decreases with the rate of transfer
(bits per second). T increases with the distance between the nodes and decreases with the
speed of the signal (usually about 2/3 c). We need to either keep t large enough or T small
enough. We do not want to live with a lower rate of bit transfer, and hence slow networks,
and we cannot do anything about the speed of the signal. So what we can control is the
minimum size of the packet and the distance between the two nodes. Therefore, we fix
some minimum packet size, and if a packet is smaller than that, we pad it with extra bits
to reach the minimum size. Accordingly we fix the maximum distance between
the nodes. Here too, there is a tradeoff to be made. We do not want the minimum packet
size to be too large, since that wastes resources on the cable. At the same time
we do not want the distance between the nodes to be too small. A typical minimum packet
size is 64 bytes and the corresponding distance is 2-5 kilometers.
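The t > 2T condition translates directly into a minimum frame size. A quick calculation (illustrative Python; the 10 Mbps bit rate is an assumed value, since the text fixes only the distance and the signal speed of about 2/3 c):

```python
def min_frame_bits(distance_m, bit_rate_bps, signal_speed_mps=2e8):
    """Smallest frame (in bits) satisfying t = L/R > 2T = 2*d/v,
    i.e. L > 2 * d * R / v. Signal speed ~2/3 c, as in the text."""
    return 2 * distance_m * bit_rate_bps / signal_speed_mps

# 10 Mbps over 5 km, the upper end of the distance quoted in the text:
bits = min_frame_bits(5000, 10e6)
print(bits, bits / 8)  # -> 500.0 62.5
```

62.5 bytes is close to the 64-byte minimum quoted above, showing how the minimum frame size and the maximum cable length are fixed together.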
LECTURE NOTE 18
Collision Free Protocols
Although collisions do not occur with CSMA/CD once a station has unambiguously seized
the channel, they can still occur during the contention period. These collisions adversely
affect the efficiency of transmission. Hence some protocols have been developed which
are contention-free.
Bit-Map Method
In this method, there are N contention slots. If node 0 has a frame to send, it transmits a 1
bit during the first slot. No other node is allowed to transmit during this slot. Next, node 1
gets a chance to transmit a 1 bit if it has something to send, regardless of what node 0
transmitted. This is done for all the nodes. In general, node j may declare that it
has a frame to send by inserting a 1 into slot j. Hence after all N slots have passed, each
node has complete knowledge of who wants to send a frame. Now they begin
transmitting in numerical order. Since everyone knows who is transmitting and when,
there can never be a collision.
The basic problem with this protocol is its inefficiency at low load. If a node has to
transmit and no other node needs to do so, it still has to wait for the bitmap to
finish. Hence the bitmap will be repeated over and over again when very few nodes want to
send, wasting valuable bandwidth.
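One contention round of the bit-map method can be sketched as follows (illustrative Python; the boolean list stands in for which nodes currently have a frame queued):

```python
def bitmap_round(wants):
    """One contention round: node j sets bit j in the reservation map,
    then frames are sent in numerical order of the map's 1 bits."""
    reservation = [1 if w else 0 for w in wants]
    order = [j for j, bit in enumerate(reservation) if bit]
    return reservation, order

reservation, order = bitmap_round([False, True, False, True, True])
print(reservation, order)  # -> [0, 1, 0, 1, 1] [1, 3, 4]
```

Note that all N reservation bits are spent even when only one node (or none) wants to send, which is the low-load inefficiency described above.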
Binary Countdown
In this protocol, a node which wants to signal that it has a frame to send does so by
writing its address into the header as a binary number. The arbitration is such that as soon
as a node sees that a higher bit position that is 0 in its address has been overwritten with a
1, it gives up. The final result is the address of the node which is allowed to send. After
the node has transmitted the whole process is repeated all over again. Given below is an
example situation.
Nodes Addresses
A 0010
B 0101
C 1010
D 1001
----
1010
Node C, having the highest address, gets to transmit. The problem with this protocol is
that the node with the higher address always wins. Hence this creates a fixed priority
which is highly unfair and hence undesirable.
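The arbitration can be sketched as a wired-OR walk over the address bits, under the assumption stated above that the channel ORs simultaneous bits. The function name is ours.

```python
def binary_countdown(addresses, width=4):
    """Bitwise arbitration: contenders broadcast their address MSB first;
    the channel ORs the bits, and a node drops out as soon as it sent a 0
    but observed a 1. The surviving address is the winner."""
    contenders = set(addresses)
    for bit in range(width - 1, -1, -1):
        channel = max((a >> bit) & 1 for a in contenders)       # wired-OR of this bit
        if channel == 1:
            contenders = {a for a in contenders if (a >> bit) & 1}  # zeros drop out
    return contenders.pop()

# A=0010, B=0101, C=1010, D=1001 -> C (1010) wins, as in the example above.
print(format(binary_countdown([0b0010, 0b0101, 0b1010, 0b1001]), '04b'))  # 1010
```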
Limited Contention Protocols
Both types of protocols described above - contention based and contention free - have
their own problems. Under conditions of light load, contention is preferable due to its low
delay. As the load increases, contention becomes increasingly less attractive, because the
overhead associated with channel arbitration becomes greater. Just the reverse is true for
contention-free protocols. At low load, they have high delay, but as the load increases,
the channel efficiency improves rather than getting worse as it does for contention
protocols.
Obviously it would be better if one could combine the best properties of the contention
and contention-free protocols, that is, a protocol which uses contention at low load to
provide low delay, but uses a contention-free technique at high load to provide good
channel efficiency. Such protocols do exist and are called limited contention protocols.
The following is the method of the adaptive tree walk protocol. Initially, all the nodes are
allowed to try to acquire the channel. If a single node acquires the channel, it sends its
frame. If there is a collision, the nodes are divided into two equal groups and only one of
these groups competes for slot 1. If one of its members acquires the channel, the next slot
is reserved for the other group. On the other hand, if there is a collision, that group is
again subdivided and the same process is followed. This can be better understood if the
nodes are thought of as being organised in a binary tree as shown in the following figure.
Many improvements could be made to the algorithm. For example, consider the
case where nodes G and H are the only ones wanting to transmit. At slot 1 a
collision will be detected, so group 2 will be tried and found to be idle.
Hence it is pointless to probe group 3; one should go directly to groups 6 and 7.
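The recursive probing above can be sketched as follows. This is a simplified simulation (our function name; real slots and collision detection are abstracted into list membership), without the skip-ahead optimisation just described.

```python
def tree_walk(ready, nodes):
    """Adaptive tree walk sketch: probe a group; 0 ready nodes -> idle
    slot, exactly 1 -> it transmits, >1 -> collision, so split the group
    in half and probe each half. Returns the transmission order."""
    contenders = [n for n in nodes if n in ready]
    if not contenders:
        return []                      # idle slot, nothing to send here
    if len(contenders) == 1:
        return contenders              # exactly one node seizes the channel
    mid = len(nodes) // 2              # collision: subdivide and recurse
    return tree_walk(ready, nodes[:mid]) + tree_walk(ready, nodes[mid:])

# Only G and H (of A..H) want to send:
print(tree_walk({'G', 'H'}, list('ABCDEFGH')))  # ['G', 'H']
```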
LECTURE NOTE 19
IEEE 802.3 and Ethernet
• Very popular LAN standard.
• Ethernet and IEEE 802.3 are distinct standards but as they are very similar to one
another these words are used interchangeably.
• A standard for a 1-persistent CSMA/CD LAN.
• It covers the physical layer and MAC sublayer protocol.
10Base5 means it operates at 10 Mbps, uses baseband signaling and can support
segments of up to 500 meters. The 10Base5 cabling is popularly called the Thick
Ethernet. Vampire taps are used for their connections where a pin is carefully forced
halfway into the co-axial cable's core as shown in the figure below. The 10Base2 or Thin
Ethernet bends easily and is connected using standard BNC connectors to form T
junctions (shown in the figure below). In the 10Base-T scheme a different kind of wiring
pattern is followed in which all stations have a twisted-pair cable running to a central hub
(see below). The difference between the different physical connections is shown below:
(a) 10Base5 (b)10Base2 (c)10Base-T
All 802.3 baseband systems use Manchester encoding, which is a way for receivers to
unambiguously determine the start, end and middle of each bit without reference to an
external clock. There is a restriction on the minimum node spacing (segment length
between two nodes) in 10Base5 and 10Base2, namely 2.5 meters and 0.5 meters
respectively. The reason is that if two nodes are closer than the specified limit, there
will be very high current, which may cause trouble in detecting the signal at the receiver
end. Connections from station to cable in 10Base5 (i.e. Thick Ethernet) are generally
made using vampire taps and to 10Base2 (i.e. Thin Ethernet) are made using industry
standard BNC connectors to form T junctions. To allow larger networks, multiple
segments can be connected by repeaters as shown. A repeater is a physical layer device. It
receives, amplifies and retransmits signals in either direction.
Note: To connect multiple segments, amplifier is not used because amplifier also
amplifies the noise in the signal, whereas repeater regenerates signal after removing the
noise.
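The Manchester encoding mentioned above can be sketched as follows, assuming the 802.3 convention that a '1' is a low-to-high transition at mid-bit and a '0' is high-to-low; each data bit becomes two half-bit signal levels, guaranteeing a mid-bit transition for clock recovery. The function name is ours.

```python
def manchester(bits):
    """Encode a bit string into half-bit signal levels (802.3 convention
    assumed: '1' -> low then high, '0' -> high then low)."""
    halves = {'1': (0, 1), '0': (1, 0)}
    out = []
    for b in bits:
        out.extend(halves[b])   # every bit cell carries a mid-bit transition
    return out

print(manchester('10'))  # [0, 1, 1, 0]
```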
• Preamble : Each frame starts with a preamble of 7 bytes, each byte containing the
bit pattern 10101010. Manchester encoding is employed here, and this enables the
receiver's clock to synchronize with the sender's and initialise itself.
• Start of Frame Delimiter : This field, containing the byte 10101011, denotes the
start of the frame itself.
• Dest. Address : The standard allows 2-byte and 6-byte addresses. Note that the 2-
byte addresses are always local addresses, while the 6-byte ones can be local or
global.
LECTURE NOTE 20
• Preamble : The Preamble and Start of Frame Delimiter are merged into one in the
Ethernet standard. However, the contents of the first 8 bytes remain the same in
both.
• Type : The length field of IEEE 802.3 is replaced by the Type field, which denotes
the type of packet being sent, viz. IP, ARP, RARP, etc. If the field holds a value of
1500 or less, it is the length field of an 802.3 frame; otherwise it is the type field of
an Ethernet packet.
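The 1500-byte threshold above can be applied per frame. A sketch (our function name; real dissectors also have to handle values in the undefined 1501-1535 gap):

```python
def classify_field(value):
    """Interpret the 2-byte field that follows the source address."""
    return "802.3 length" if value <= 1500 else "Ethernet type"

print(classify_field(64))      # 802.3 length
print(classify_field(0x0800))  # Ethernet type (0x0800 is the IP type code)
```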
In case of collision, the transmitting node backs off by a random number of slots, each
slot time being equal to the transmission time of 512 bits (64 bytes, the minimum size of
a packet), in the following fashion:
Collision   Backoff range (slots)
1st         0-1
2nd         0-3
3rd         0-7
...
10th        0-1023
11th        0-1023
12th        0-1023
...
16th        0-1023
In general, after i collisions a random number between 0 and 2^i - 1 is chosen, and that
number of slots is skipped. However, after 10 collisions the randomization interval is
frozen at a maximum of 1023 slots. After 16 collisions the controller reports failure back
to the computer.
5-4-3 Rule
Each version of 802.3 has a maximum cable length per segment because long
propagation time leads to difficulty in collision detection. To compensate for this
the transmission time has to be increased which can be achieved by slowing down
the transmission rate or increasing the packet size, neither of which is desirable.
Hence to allow for large networks, multiple cables are connected via repeaters.
Between any two nodes on an Ethernet network, there can be at most five
segments, four repeaters and three populated segments (non-populated segments
are those which do not have any machine connected between the two repeaters).
This is known as the 5-4-3 Rule.
LECTURE NOTE 21
If a node transmits the token and nobody wants to send data, the token comes back to
the sender. If the first bit of the token reaches the sender before the transmission of the
last bit, an error situation arises. So to avoid this we should have:
propagation delay + n-bit delay of the ring (1-bit delay at each node) > transmission
time of the token
A station may hold the token for the token-holding time, which is 10 ms unless the
installation sets a different value. If there is enough time left after the first frame has been
transmitted to send more frames, these frames may be sent as well. After all pending
frames have been transmitted, or when transmitting another frame would exceed the
token-holding time, the station regenerates the 3-byte token frame and puts it back on the
ring.
Modes of Operation
1. Listen Mode: In this mode the node listens to the data and transmits the data to
the next node. In this mode there is a one-bit delay associated with the
transmission.
2. Transmit Mode: In this mode the node discards any incoming data and puts its
own data onto the network.
3. By-pass Mode: This mode is reached when the node is down. Any data is simply
bypassed. There is no one-bit delay in this mode.
LECTURE NOTE 22
One problem with a ring network is that if the cable breaks somewhere, the ring dies.
This problem is elegantly addressed by using a ring concentrator. A Token Ring
concentrator simply changes the topology from a physical ring to a star wired ring. But
the network still remains a ring logically. Physically, each station is connected to the ring
concentrator (wire center) by a cable containing at least two twisted pairs, one for data to
the station and one for data from the station. The Token still circulates around the
network and is still controlled in the same manner, however, using a hub or a switch
greatly improves reliability because the hub can automatically bypass any ports that are
disconnected or have a cabling fault. This is done by having bypass relays inside the
concentrator that are energized by current from the stations. If the ring breaks or station
goes down, loss of the drive current will release the relay and bypass the station. The ring
can then continue operation with the bad segment bypassed.
1. The source itself removes the packet after one full round in the ring.
2. The destination removes it after accepting it: This has two potential problems.
Firstly, the solution won't work for broadcast or multicast, and secondly, there
would be no way to acknowledge the sender about the receipt of the packet.
3. Have a specialized node only to discard packets: This is a bad solution as the
specialized node would know that the packet has been received by the destination
only when it receives the packet the second time and by that time the packet may
have actually made about one and half (or almost two in the worst case) rounds in
the ring.
Thus the first solution is adopted, with the source itself removing the packet from the ring
after one full round. With this scheme, broadcasting and multicasting can be handled, and
the destination can acknowledge the source about the receipt of the packet (or tell the
source about some error).
Token Format
SD AC ED
J = Code Violation
K = Code Violation
T = Token bit: 0 for a token, 1 for a frame
When a station with a frame to transmit detects a token whose priority is equal to or
less than that of the frame, it may change the token to a start-of-frame sequence and
transmit the frame.
P = Priority
The priority bits indicate the token's priority and, therefore, which stations are allowed to
use it. A station can transmit if its priority is at least as high as that of the token.
M = Monitor
The monitor bit is used to prevent a token whose priority is greater than 0 or any frame
from continuously circulating on the ring. If an active monitor detects a frame or a high
priority token with the monitor bit equal to 1, the frame or token is aborted. This bit shall
be transmitted as 0 in all frame and tokens. The active monitor inspects and modifies this
bit. All other stations shall repeat this bit as received.
R = Reserved bits
The reserved bits allow stations with high-priority frames to request that the next token
be issued at the requested priority.
Ending Delimiter Format:
J K 1 J K 1 I E
J = Code Violation
K = Code Violation
I = Intermediate Frame Bit
E = Error Detected Bit
LECTURE NOTE 23
Frame Format:
MSB (Most Significant Bit) is always transmitted first - as opposed to Ethernet
SD AC FC DA SA DATA CRC ED FS
J = Code Violation
K = Code Violation
T=Token
When a station with a frame to transmit detects a token whose priority is equal to or
less than that of the frame, it may change the token to a start-of-frame sequence and
transmit the frame.
P = Priority
The priority bits indicate the token's priority and, therefore, which stations are allowed to
use it. A station can transmit if its priority is at least as high as that of the token.
M = Monitor
The monitor bit is used to prevent a token whose priority is greater than 0 or any frame
from continuously circulating on the ring. If an active monitor detects a frame or a high
priority token with the monitor bit equal to 1, the frame or token is aborted. This bit shall
be transmitted as 0 in all frame and tokens. The active monitor inspects and modifies this
bit. All other stations shall repeat this bit as received.
R = Reserved bits
The reserved bits allow stations with high-priority frames to request that the next token
be issued at the requested priority.
alternatively
I/G (1 BIT) L/U (1 BIT) RING ADDRESS (14 BITS) NODE ADDRESS (32 BITS)
I/G (1 BIT) RING NUMBER T/B (1 BIT) GROUP ADDRESS (14 BITS)
Data Format:
There is no upper limit on the amount of data as such, but it is limited by the token-holding time.
Checksum:
The source computes and sets this value. The destination also calculates this value. If the
two differ, it indicates an error; otherwise the data may be correct.
Frame Status:
This arrangement provides an automatic acknowledgement for each frame. The A and C
bits are present twice in the Frame Status to increase reliability in as much as they are not
covered by the checksum.
J = Code Violation
K = Code Violation
I = Intermediate Frame Bit
If this bit is set to 1, it indicates that this packet is an intermediate part of a bigger packet,
the last packet would have this bit set to 0.
E = Error Detected Bit
This bit is set if any interface detects an error.
LECTURE NOTE 24
What is Framing?
Since the physical layer merely accepts and transmits a stream of bits without any regard
to meaning or structure, it is up to the data link layer to create and recognize frame
boundaries. This can be accomplished by attaching special bit patterns to the beginning
and end of the frame. Since these bit patterns can accidentally occur in the data, special
care must be taken to ensure that they are not incorrectly interpreted as frame delimiters.
The four framing methods that are widely used are
• Character count
• Starting and ending characters, with character stuffing
• Starting and ending flags, with bit stuffing
• Physical layer coding violations
Character Count
This method uses a field in the header to specify the number of characters in the frame.
When the data link layer at the destination sees the character count, it knows how many
characters follow, and hence where the end of the frame is. The disadvantage is that if the
count is garbled by a transmission error, the destination will lose synchronization and will
be unable to locate the start of the next frame. So, this method is rarely used.
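The count-hopping behaviour, and its fragility, can be sketched as follows. The function name and the convention that the count byte includes itself are our own illustrative choices.

```python
def split_frames(stream):
    """Character-count framing sketch: each frame begins with a count
    byte that includes itself, so frames are recovered by hopping from
    count to count. One garbled count desynchronizes everything after it."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                    # header: total frame length
        frames.append(stream[i + 1:i + count])  # payload bytes
        i += count                           # hop to the next frame
    return frames

# Two frames: count 4 carrying 'abc', count 3 carrying 'de'
print(split_frames([4, 97, 98, 99, 3, 100, 101]))  # [[97, 98, 99], [100, 101]]
```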
Character stuffing
In the second method, each frame starts with the ASCII character sequence DLE STX
and ends with the sequence DLE ETX (where DLE is Data Link Escape, STX is Start of
TeXt and ETX is End of TeXt). This method overcomes the drawback of the character
count method: if the destination ever loses synchronization, it only has to look for DLE
STX and DLE ETX characters. If, however, binary data is being transmitted, there exists
a possibility of the sequences DLE STX and DLE ETX occurring in the data. Since this
can interfere with the framing, a technique called character stuffing is used. The sender's
data link layer inserts an ASCII DLE character just before each DLE character in the
data. The receiver's data link layer removes these stuffed DLEs before the data is given
to the network layer. However, character stuffing is closely tied to 8-bit characters, and
this is a major hurdle in transmitting characters of arbitrary size.
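The DLE-doubling idea can be sketched as follows (our helper names; the control codes are the standard ASCII values for DLE, STX and ETX):

```python
DLE, STX, ETX = 0x10, 0x02, 0x03

def char_stuff(payload):
    """Frame as DLE STX <data, every DLE doubled> DLE ETX."""
    body = []
    for byte in payload:
        if byte == DLE:
            body.append(DLE)   # stuff an extra DLE before a data DLE
        body.append(byte)
    return [DLE, STX] + body + [DLE, ETX]

def char_unstuff(frame):
    """Strip the delimiters and collapse doubled DLEs."""
    body, out, i = frame[2:-2], [], 0
    while i < len(body):
        if body[i] == DLE:
            i += 1             # skip the stuffed copy, keep the real DLE
        out.append(body[i])
        i += 1
    return out

data = [0x41, DLE, 0x42]
assert char_unstuff(char_stuff(data)) == data
```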
Bit stuffing
The third method allows data frames to contain an arbitrary number of bits and allows
character codes with an arbitrary number of bits per character. At the start and end of
each frame is a flag byte consisting of the special bit pattern 01111110. Whenever the
sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a
zero bit into the outgoing bit stream. This technique is called bit stuffing. When the
receiver sees five consecutive 1s in the incoming data stream, followed by a zero bit, it
automatically destuffs the 0 bit. The boundary between two frames can be determined by
locating the flag pattern.
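The stuffing and destuffing rules above can be sketched over bit strings (our function names; bits are represented as '0'/'1' characters for clarity):

```python
FLAG = '01111110'

def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # stuffed bit, never part of the data
            run = 0
    return ''.join(out)

def bit_unstuff(bits):
    """Remove the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == '1' else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return ''.join(out)

data = '0111111011111'
assert bit_unstuff(bit_stuff(data)) == data
```

Because the payload can now never contain six 1s in a row, the flag pattern 01111110 only ever appears at frame boundaries.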
Error Control
The bit stream transmitted by the physical layer is not guaranteed to be error free. The
data link layer is responsible for error detection and correction. The most common error
control method is to compute and append some form of a checksum to each outgoing
frame at the sender's data link layer and to recompute the checksum and verify it with the
received checksum at the receiver's side. If both of them match, then the frame is
correctly received; else it is erroneous. The checksums may be of two types:
• Error detecting : The receiver can only detect the error in the frame and inform the
sender about it.
• Error detecting and correcting : The receiver can not only detect the error but also
correct it.
Examples of Error Detecting methods:
• Parity bit:
A simple example of an error detection technique is the parity bit. The parity bit
is chosen so that the number of 1 bits in the code word is either even (for even
parity) or odd (for odd parity). For example, when 10110101 is transmitted, a 1
will be appended for even parity and a 0 for odd parity. This scheme can detect
only single-bit errors; if two or more bits are changed, the error cannot be
detected.
• Longitudinal Redundancy Checksum:
Longitudinal Redundancy Checksum is an error detecting scheme which
overcomes the problem of two erroneous bits. Here the concept of the parity bit
is used with slightly more intelligence: with each byte we send one parity bit,
and then send one additional byte which holds the parity of each bit position of
the sent bytes. So parity is set in both the horizontal and the vertical directions.
If one bit gets flipped, we can tell which row and which column contain the
error, find their intersection, and determine the erroneous bit. If two bits are in
error and they lie in different columns and rows, they can be detected; if the
errors are in the same column, the rows will differ, and vice versa. Parity can
detect only an odd number of errors per row or column, so if an even number of
errors is distributed suitably in all directions, LRC may not be able to find them.
• Cyclic Redundancy Checksum (CRC):
We have an n-bit message. The sender adds a k-bit Frame Check Sequence (FCS)
to this message before sending, such that the resulting (n+k)-bit message is
divisible by some (k+1)-bit number. The receiver divides the (n+k)-bit message
by the same (k+1)-bit number and, if there is no remainder, assumes that there
was no error. How do we choose this number?
For example, if k=12 then 1000000000000 (a 13-bit number) can be chosen, but
this is a pretty poor choice, because it will result in a zero remainder for all
(n+k)-bit messages whose last 12 bits are zero. Thus, any bit flips beyond the
last 12 bits go undetected. If k=12 and we instead take 1110001000110 as the
13-bit number (incidentally, 7238 in decimal), errors go undetected only if the
corrupt message and the original message differ by a multiple of 7238. The
probability of this is low, much lower than the probability that some bit beyond
the last 12 flips. In practice, this number is chosen after analyzing common
network transmission errors and then selecting a number which is likely to
detect them.
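The division used here is modulo-2 (XOR) long division. A sketch with our helper names; the message and divisor are the standard small textbook example (divisor 1011, so k = 3):

```python
def mod2_div(bits, divisor):
    """Modulo-2 long division over bit strings; returns the remainder."""
    reg = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if reg[i] == '1':                      # leading 1: subtract (XOR) divisor
            for j, d in enumerate(divisor):
                reg[i + j] = str(int(reg[i + j]) ^ int(d))
    return ''.join(reg[-(len(divisor) - 1):])  # last k bits are the remainder

def make_fcs(message, divisor):
    """Append k zero bits (k = len(divisor) - 1) and take the remainder."""
    return mod2_div(message + '0' * (len(divisor) - 1), divisor)

msg, gen = '11010011101100', '1011'
fcs = make_fcs(msg, gen)
print(fcs)                                 # '100'
assert mod2_div(msg + fcs, gen) == '000'   # receiver sees a zero remainder
```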
LECTURE NOTE 25
Flow Control
Consider a situation in which the sender transmits frames faster than the receiver can
accept them. If the sender keeps pumping out frames at high rate, at some point the
receiver will be completely swamped and will start losing some frames. This problem
may be solved by introducing flow control. Most flow control protocols contain a
feedback mechanism to inform the sender when it should transmit the next frame.
• Stop and Wait Protocol: This is the simplest flow control protocol, in which the
sender transmits a frame and then waits for an acknowledgement, either positive
or negative, from the receiver before proceeding. If a positive acknowledgement
is received, the sender transmits the next packet; else it retransmits the same
frame. However, this protocol has one major flaw in it. If a packet or an
acknowledgement is completely destroyed in transit due to a noise burst, a
deadlock will occur because the sender cannot proceed until it receives an
acknowledgement. This problem may be solved using timers on the sender's side.
When the frame is transmitted, the timer is set. If there is no response from the
receiver within a certain time interval, the timer goes off and the frame may be
retransmitted.
• Sliding Window Protocols: In spite of the use of timers, the stop and wait
protocol still suffers from a few drawbacks. Firstly, if the receiver had the
capacity to accept more than one frame, its resources are being underutilized.
Secondly, if the receiver was busy and did not wish to receive any more packets,
it may delay the acknowledgement. However, the timer on the sender's side may
go off and cause an unnecessary retransmission. These drawbacks are overcome
by the sliding window protocols.
In sliding window protocols the sender's data link layer maintains a 'sending
window' which consists of a set of sequence numbers corresponding to the frames
it is permitted to send. Similarly, the receiver maintains a 'receiving window'
corresponding to the set of frames it is permitted to accept. The window size is
dependent on the retransmission policy and it may differ in values for the
receiver's and the sender's window. The sequence numbers within the sender's
window represent the frames sent but as yet not acknowledged. Whenever a new
packet arrives from the network layer, the upper edge of the window is advanced
by one. When an acknowledgement arrives from the receiver the lower edge is
advanced by one. The receiver's window corresponds to the frames that the
receiver's data link layer may accept. When a frame with sequence number equal
to the lower edge of the window is received, it is passed to the network layer, an
acknowledgement is generated and the window is rotated by one. If however, a
frame falling outside the window is received, the receiver's data link layer has two
options. It may either discard this frame and all subsequent frames until the
desired frame is received or it may accept these frames and buffer them until the
appropriate frame is received and then pass the frames to the network layer in
sequence.
In this simple example, there is a 4-byte sliding window. Moving from left to
right, the window "slides" as bytes in the stream are sent and acknowledged.
Most sliding window protocols also employ ARQ ( Automatic Repeat reQuest )
mechanism. In ARQ, the sender waits for a positive acknowledgement before
proceeding to the next frame. If no acknowledgement is received within a certain
time interval it retransmits the frame. ARQ is of two types :
2. Selective Repeat: In this protocol, rather than discarding all the subsequent
frames following a damaged or lost frame, the receiver's data link layer
simply stores them in buffers. When the sender does not receive an
acknowledgement for the first frame, its timer goes off after a certain time
interval and it retransmits only the lost frame. Assuming error-free
transmission this time, the receiver's data link layer will have a sequence of
correct frames which it can hand over to the network layer. Thus there is
less retransmission overhead than in the Go Back n protocol.
In the case of the selective repeat protocol, the window size may be calculated
as follows. Assume that the size of both the sender's and the receiver's
window is w, so initially both of them contain the values 0 to (w-1).
Suppose the sender's data link layer transmits all w frames, and the
receiver's data link layer receives them correctly and sends
acknowledgements for each of them. However, all the acknowledgements
are lost and the sender does not advance its window, while the receiver's
window at this point contains the values w to (2w-1). To avoid overlap when
the sender's data link layer retransmits, we must have the sum of these two
windows no larger than the sequence number space. Hence, we get the
condition 2w <= 2^m, i.e. w <= 2^(m-1), where m is the number of bits in
the sequence number.
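The window-size constraint can be packaged as a small helper (our function name; these are the standard results for m-bit sequence numbers, including the analogous Go Back n bound for comparison):

```python
def max_window(seq_bits, protocol):
    """Largest safe window for m-bit sequence numbers: sender and
    receiver windows must never overlap after lost acknowledgements."""
    space = 2 ** seq_bits
    if protocol == "selective_repeat":
        return space // 2      # w + w <= space
    if protocol == "go_back_n":
        return space - 1       # w + 1 <= space
    raise ValueError(protocol)

print(max_window(3, "selective_repeat"))  # 4
print(max_window(3, "go_back_n"))         # 7
```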
In a token ring the source starts discarding all its previously transmitted bits as soon as
they circumnavigate the ring and reach the source. Hence, it is not desirable that, while a
token is being sent, some bits of the token which have already been sent become available
at the incoming end of the source. This behavior, though, is desirable in the case of data
packets which ought to be drained from the ring once they have gone around the ring. To
achieve the aforesaid behavior with respect to tokens, we would like the ring to hold at
least 24 bits at a time. How do we ensure this?
Each node in a ring introduces a 1 bit delay. So, one approach might be to set the
minimum limit on the number of nodes in a ring as 24. But, this is not a viable option.
The actual solution is as follows. We have one node in the ring designated as
"monitor". The monitor maintains a 24 bits buffer with help of which it introduces a 24
bit delay. The catch here is what if the clocks of nodes following the source are faster
than the source? In this case the 24 bit delay of the monitor would be less than the 24 bit
delay desired by the host. To avoid this situation the monitor maintains 3 extra bits to
compensate for the faster bits. The 3 extra bits suffice even if the bits are 10% faster. This
compensation is called Phase Jitter Compensation.
Each node or packet has a priority level. We don't concern ourselves with how this
priority is decided. The first 3 bits of the Access Control byte in the token are for priority
and the last 3 are for reservation.
P P P TM R R R
Initially the reservation bits are set to 000. When a node wants to transmit a priority n
frame, it must wait until it can capture a token whose priority is less than or equal to n.
Furthermore, when a data frame goes by, a station can try to reserve the next token by
writing the priority of the frame it wants to send into the frame's Reservation bits.
However, if a higher priority has already been reserved there, the station cannot make a
reservation. When the current frame is finished, the next token is generated at the priority
that has been reserved.
A slight problem with the above reservation procedure is that the reservation priority
keeps on increasing. To solve this problem, the station raising the priority remembers the
reservation priority that it replaces, and when it is done it reduces the priority back to its
previous value.
Note that in a token ring, low priority frames may starve.
Ring Maintenance
Each token ring has a monitor that oversees the ring. Among the monitor's
responsibilities are seeing that the token is not lost, taking action when the ring breaks,
cleaning the ring when garbled frames appear and watching out for orphan frames. An
orphan frame occurs when a station transmits a short frame in its entirety onto a long
ring and then crashes or is powered down before the frame can be removed. If nothing is
done, the frame circulates indefinitely.
• Detection of orphan frames: The monitor detects orphan frames by setting the
monitor bit in the Access Control byte whenever it passes through. If an incoming
frame has this bit set, something is wrong since the same frame has passed the
monitor twice. Evidently it was not removed by the source, so the monitor drains
it.
• Lost Tokens: The monitor has a timer that is set to the longest possible tokenless
interval : when each node transmits for the full token holding time. If this timer
goes off, the monitor drains the ring and issues a fresh token.
• Garbled frames: The monitor can detect such frames by their invalid format or
checksum, drain the ring and issue a fresh token.
Control field   Name                      Meaning
00000000        Duplicate address test    Test if two stations have the same address
00000010        Beacon                    Used to locate breaks in the ring
00000011        Claim token               Attempt to become monitor
00000100        Purge                     Reinitialize the ring
00000101        Active monitor present    Issued periodically by the monitor
00000110        Standby monitor present   Announces the presence of potential monitors
The monitor periodically issues a message "Active Monitor Present" informing all nodes
of its presence. When this message is not received for a specific time interval, the nodes
detect a monitor failure. Each node that believes it can function as a monitor broadcasts a
"Standby Monitor Present" message at regular intervals, indicating that it is ready to take
on the monitor's job. Any node that detects failure of a monitor issues a "Claim" token.
There are 3 possible outcomes :
1. If the issuing node gets back its own claim token, then it becomes the monitor.
2. If a packet different from a claim token is received, apparently a wrong guess
about monitor failure was made. In this case, on receipt of our own claim token,
we discard it. Note that our claim token may already have been removed by some
other node which detected this error.
3. If some other node has also issued a claim token, then the node with the larger
address becomes the monitor.
The problem with the token ring system is that large rings cause large delays. It must be
made possible for multiple packets to be in the ring simultaneously. The following ring
networks resolve this problem to some extent :-
Slotted Ring :
In this system, the ring is slotted into a number of fixed-size frames which are
continuously moving around the ring. This makes it necessary that there be enough
nodes (a large enough ring) to ensure that all the bits can stay on the ring at the
same time. The frame header contains information as to whether the slots are empty or
full. The usual disadvantages of overhead and wastage associated with fixed-size frames
are present.
Register Insertion Rings :
This is an improvement over the slotted ring architecture. The network interface consists
of two registers: a shift register and an output buffer. At startup, the input pointer points
to the rightmost bit position in the input shift register. When a bit arrives, it goes into the
rightmost empty position (the one indicated by the input pointer). After the node has
detected that the frame is not addressed to it, the bits are transmitted one at a time (by
shifting). As new bits come in, they are inserted at the position indicated by the pointer
and then the contents are shifted; thus the pointer is not moved. Once the shift register
has pushed out the last bit of a frame, it checks whether it has an output frame waiting. If
so, it checks that the number of empty slots in the shift register is at least equal to the
number of bits in the output frame. The output connection is then switched to this second
register and, after the register has emptied its contents, the output line is switched back
to the shift register. Thus, no single node can hog the bandwidth. In a loaded system, a
node can transmit a k-bit frame only if it has saved up k bits of inter-frame gap.
Two major disadvantages of this topology are complicated hardware and difficulty in the
detection of start/end of packets.
Contention Ring
Frame Structure
Ring Maintenance:
Mechanism:
When the first node on the token bus comes up, it sends a Claim_token packet to
initialize the ring. If more than one station sends this packet at the same time, there is a
collision. The collision is resolved by a contention mechanism, in which the contending
nodes send random data for 1, 2, 3 or 4 units of time depending on the first two bits of
their address. The node sending data for the longest time wins. If two nodes have the
same first two bits in their addresses, contention is repeated on the next two bits of their
address, and so on.
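The address-chunk contention above can be sketched as follows. This is a simplified simulation (our function name; the random-data bursts are abstracted away, since only the burst lengths derived from the address bits decide the winner):

```python
def resolve_contention(addresses, width=8):
    """Token-bus style contention sketch: each contender's burst length
    is its next 2-bit address chunk (larger chunk -> longer burst ->
    survives); ties recurse on the following two bits."""
    contenders = list(addresses)
    for shift in range(width - 2, -2, -2):     # walk 2-bit chunks, MSB first
        chunks = {a: (a >> shift) & 0b11 for a in contenders}
        longest = max(chunks.values())
        contenders = [a for a in contenders if chunks[a] == longest]
        if len(contenders) == 1:
            break                              # a unique winner has emerged
    return contenders[0]

print(format(resolve_contention([0b10110001, 0b10010111]), '08b'))  # 10110001
```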
After the ring is set up, new nodes which are powered up may wish to join the ring. For
this a node sends Solicit_successor_1 packets from time to time, inviting bids from new
nodes to join the ring. This packet contains the address of the current node and its current
successor, and asks for nodes in between these two addresses to reply. If more than one
node responds, there will be a collision. The node then sends a Resolve_contention packet,
and the contention is resolved using a similar mechanism as described previously. Thus at
a time only one node gets to enter the ring. The last node in the ring will send a
Solicit_successor_2 packet containing the addresses of it and its successor. This packet
asks nodes not having addresses in between these two addresses to respond.
How frequently should a node send a Solicit_successor packet? If it is sent too frequently, the overhead will be too high; if it is sent too rarely, new nodes will have to wait a long time before joining the ring. If the channel is not busy, a node sends a Solicit_successor packet after a fixed number of token rotations. This number can be configured by the network administrator. However, if there is heavy traffic in the network, a node defers the sending of bids for successors to join.
There may be problems in the logical ring due to sudden failure of a node. What happens
when a node goes down along with the token? After passing the token, a node, say node
A, listens to the channel to see if its successor either transmits the token or passes a
frame. If neither happens, it resends the token. If still nothing happens, A sends a
Who_follows packet, containing the address of the down node. The successor of the
down node, say node C, will now respond with a Set_successor packet, containing its
own address. This causes A to set its successor node to C, and the logical ring is restored.
However, if two successive nodes go down suddenly, the ring will be dead and will have
to be built afresh, starting from a Claim_token packet.
When a node wants to shutdown normally, it sends a Set_successor packet to its
predecessor, naming its own successor. The ring then continues unbroken, and the node
goes out of the ring.
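The failure-recovery exchange above can be modelled with a toy sketch. The node class and addresses are invented for illustration, and no real control frames are sent:

```python
class TokenBusNode:
    """Toy node in the logical ring (illustrative; no real frames)."""
    def __init__(self, address):
        self.address = address
        self.successor = None
        self.up = True


def repair_ring(node):
    """After passing the token, `node` watches its successor. If the
    successor is down, the Who_follows / Set_successor exchange from
    the note is modelled by splicing the ring over the dead node."""
    dead = node.successor
    if dead.up:
        return node.successor.address    # successor alive, nothing to do
    # Who_follows(dead): the successor of the dead node answers with
    # a Set_successor packet naming itself.
    node.successor = dead.successor
    return node.successor.address


# Build a three-node ring A -> B -> C -> A, then kill B:
a, b, c = TokenBusNode("A"), TokenBusNode("B"), TokenBusNode("C")
a.successor, b.successor, c.successor = b, c, a
b.up = False
print(repair_ring(a))   # prints C: the ring is now A -> C -> A
```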
The various control frames used for ring maintenance are shown below:
Priority Scheme:
0 is the lowest priority level and 6 the highest. The following times are defined by the
token bus:
• THT: Token Holding Time. A node holding the token can send priority 6 data for
a maximum of this amount of time.
• TRT_4: Token Rotation Time for class 4 data. This is the maximum time a token
can take to circulate and still allow transmission of class 4 data.
• TRT_2 and TRT_0: Similar to TRT_4.
When a node receives the token, it acts as follows:
• It transmits priority 6 data for at most THT time, or as long as it has data.
• Now, if the time the token took to come back to it is less than TRT_4, it may transmit priority 4 data for the time allowed by TRT_4. The maximum time for which it can send priority 4 data is therefore TRT_4 - (actual token rotation time).
• Similarly for priority 2 and priority 0 data.
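One plausible reading of these timer rules is sketched below. The exact accounting in the token-bus standard is more involved, and the THT and TRT values here are invented:

```python
def allowed_time(priority, rotation_time, tht, trt):
    """Sketch of the token-bus priority timers (illustrative only).

    `rotation_time` is how long the token took to come back to this
    node; `trt` maps a priority class to its Token Rotation Time
    threshold. Priority 6 may always transmit for up to THT; a lower
    class may transmit only if the token returned faster than that
    class's TRT, and then only for the unused portion of it."""
    if priority == 6:
        return tht
    budget = trt[priority] - rotation_time   # leftover rotation budget
    return max(0.0, budget)


timers = {4: 60.0, 2: 80.0, 0: 100.0}        # hypothetical TRT values (ms)
print(allowed_time(6, 45.0, 10.0, timers))   # class 6 always gets THT
print(allowed_time(4, 45.0, 10.0, timers))   # 60 - 45 = 15.0 ms for class 4
print(allowed_time(4, 70.0, 10.0, timers))   # token too slow: 0.0 ms
```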
LECTURE NOTE 26
Network Layer
What is Network Layer?
The network layer is concerned with getting packets from the source all the way to the
destination. The packets may need to make many hops at the intermediate routers while
reaching the destination. This is the lowest layer that deals with end to end transmission.
In order to achieve its goals, the network layer must know about the topology of the
communication network. It must also take care to choose routes to avoid overloading of
some of the communication lines while leaving others idle. The network layer-transport
layer interface frequently is the interface between the carrier and the customer, that is the
boundary of the subnet. The functions of this layer include :
1. Routing - The process of transferring packets received from the Data Link Layer of the source network to the Data Link Layer of the correct destination network is called routing. It involves decision making at each intermediate node on where to send the packet next so that it eventually reaches its destination. The node which makes this choice is called a router. Routing requires some mode of addressing which is recognized by the Network Layer; this addressing is different from the MAC-layer addressing.
2. Inter-networking - The network layer is the same across all physical networks
(such as Token-Ring and Ethernet). Thus, if two physically different networks
have to communicate, the packets that arrive at the Data Link Layer of the node
which connects these two physically different networks, would be stripped of
their headers and passed to the Network Layer. The network layer would then
pass this data to the Data Link Layer of the other physical network.
3. Congestion Control - If the incoming rate of packets arriving at a router is more than the outgoing rate, congestion is said to occur. Congestion may be caused by many factors. If, suddenly, packets begin arriving on many input lines and all need the same output line, a queue will build up. If there is insufficient memory to hold all of them, packets will be lost. But even if routers have an infinite amount of memory, congestion gets worse, because by the time packets reach the front of the queue, they have already timed out (repeatedly) and duplicates have been sent. All these packets are dutifully forwarded to the next router, increasing the load all the way to the destination. Another cause of congestion is slow processors: if a router's CPU is slow at performing the bookkeeping tasks required of it, queues can build up even though there is excess line capacity. Similarly, low-bandwidth lines can also cause congestion.
Addressing Scheme
IP addresses are of 4 bytes and consist of :
i) The network address, followed by
ii) The host address
The first part identifies a network on which the host resides, and the second part identifies the particular host on the given network. A node which has more than one interface to a network must be assigned a separate internet address for each interface. This multi-level addressing makes it easier to find and deliver data to the destination. A fixed size for each of these parts would lead to wastage or under-usage: either there would be too many network addresses with few hosts in each (which causes problems for routers that route based on the network address), or very few network addresses with lots of hosts in each (which would be a waste for small network requirements). Thus, we do away with any notion of fixed sizes for the network and host addresses.
We classify networks as follows:
1. Large Networks : 8-bit network address and 24-bit host address. There are
approximately 16 million hosts per network and a maximum of 126 ( 2^7 - 2 )
Class A networks can be defined. The calculation requires that 2 be subtracted
because 0.0.0.0 is reserved for use as the default route and 127.0.0.0 is reserved for the loopback function. Moreover, each Class A network can support a
maximum of 16,777,214 (2^24 - 2) hosts per network. The host calculation
requires that 2 be subtracted because all 0's are reserved to identify the network
itself and all 1s are reserved for broadcast addresses. The reserved numbers may
not be assigned to individual hosts.
2. Medium Networks : 16-bit network address and 16-bit host address. A maximum of 16,384 (2^14) Class B networks can be defined, with up to 65,534 (2^16 - 2) hosts per network.
3. Small networks : 24-bit network address and 8-bit host address. There can be at most 254 (2^8 - 2) hosts per network.
You might think that Large and Medium networks are sort of a waste as few
corporations/organizations are large enough to have 65000 different hosts. (By the way,
there are very few corporations in the world with even close to 65000 employees, and
even in these corporations it is highly unlikely that each employee has his/her own
computer connected to the network.) Well, if you think so, you're right. This decision
seems to have been a mistake.
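The classful rules above can be sketched as a small classifier. It ignores classes D and E and simply applies the first-octet ranges and the 2^n - 2 usable-host count:

```python
def address_class(ip):
    """Classify a dotted-quad IPv4 address into class A, B or C by its
    first octet, returning (class, network_bits, usable_hosts).
    A sketch of the classful rules; classes D and E are ignored."""
    first = int(ip.split(".")[0])
    if first < 128:
        bits = 8        # class A: 8-bit network, 24-bit host
    elif first < 192:
        bits = 16       # class B: 16-bit network, 16-bit host
    else:
        bits = 24       # class C: 24-bit network, 8-bit host
    cls = {8: "A", 16: "B", 24: "C"}[bits]
    host_bits = 32 - bits
    # all-0s (network itself) and all-1s (broadcast) are reserved
    return cls, bits, 2 ** host_bits - 2


print(address_class("10.1.2.3"))     # ('A', 8, 16777214)
print(address_class("150.29.5.9"))   # ('B', 16, 65534)
print(address_class("200.1.1.1"))    # ('C', 24, 254)
```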
Address Classes
Internet Protocol
LECTURE NOTE 27
Subnetting
Subnetting means organizing hierarchies within the network by dividing the host ID as per our needs. For example, consider the network ID : 150.29.x.y
We could organize the remaining 16 bits in any way, like :
4 bits - department
4 bits - LAN
8 bits - host
This gives some structure to the host IDs. This division is not visible to the outside world.
They still see just the network number, and host number (as a whole). The network will
have an internal routing table which stores information about which router to send an
address to. Now consider the case where we have : 8 bits - subnet number, and 8 bits - host number. Each router on the network must then know about all subnet numbers. For this, a subnet mask is used: we put the network-number and subnet-number bits as 1 and the host bits as 0. Therefore, in this example the subnet mask becomes : 255.255.255.0 . The hosts also need to know the subnet mask when they send a packet. To find out whether two addresses are on the same subnet, we can AND the source address with the subnet mask, AND the destination address with the subnet mask, and see if the two results are the same. The basic reason for subnetting was to avoid broadcasts. But if, at the lower level, our switches are smart enough to send directed messages, then we do not need subnetting. However, subnetting has some security-related advantages.
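The AND-and-compare test described above can be written directly with Python's standard ipaddress module; the host addresses below are made-up examples under the 255.255.255.0 mask:

```python
import ipaddress


def same_subnet(addr1, addr2, mask):
    """AND each address with the subnet mask and compare the results,
    exactly as described in the note."""
    m = int(ipaddress.IPv4Address(mask))
    a = int(ipaddress.IPv4Address(addr1))
    b = int(ipaddress.IPv4Address(addr2))
    return (a & m) == (b & m)


# 150.29.3.7 and 150.29.3.200 share subnet 3 under mask 255.255.255.0:
print(same_subnet("150.29.3.7", "150.29.3.200", "255.255.255.0"))  # True
print(same_subnet("150.29.3.7", "150.29.4.1", "255.255.255.0"))    # False
```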
Supernetting
This is a move towards classless addressing. We could describe the network number either as 21 bits (covering 8 Class C networks), or as a 24-bit (Class C) network number together with the 7 networks that follow it. For example : a.b.c.d / 21 . This means: look only at the first 21 bits as the network address.
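The a.b.c.d/21 idea can be illustrated with the standard ipaddress module; the addresses below are invented for the example:

```python
import ipaddress

# a.b.c.d/21: only the first 21 bits form the network address, so the
# eight class C networks 200.1.0.0 .. 200.1.7.0 collapse into one route.
supernet = ipaddress.ip_network("200.1.0.0/21")

print(ipaddress.ip_address("200.1.5.9") in supernet)   # True: inside /21
print(ipaddress.ip_address("200.1.8.1") in supernet)   # False: 22nd bit differs
```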
LECTURE NOTE 28
Packet Structure
1. Header Length : We can have multiple-sized headers, so we need this field. The header is always a multiple of 4 bytes; since the maximum value of this field is 15, the maximum size of the header is 60 bytes ( 20 bytes are mandatory ).
2. Type Of Service (ToS) : This helps the router in taking the right routing
decisions. The structure is :
First three bits : They specify the precedence, i.e. the priority of the packet.
Next three bits :
o D bit - D stands for delay. If the D bit is set to 1, then this means that the
application is delay sensitive, so we should try to route the packet with
minimum delay.
o T bit - T stands for throughput. This tells us that this particular operation is
throughput sensitive.
o R bit - R stands for reliability. This tells us that we should route this packet
through a more reliable network.
Last two bits: The last two bits are never used. Unfortunately, no router in this world looks at these bits, and so no application sets them nowadays.
The second word of the header is meant for handling fragmentation. If a link cannot transmit large packets, then we fragment the packet and put sufficient information in the header for reassembly at the destination.
3. ID Field : The source address and the ID field together identify the fragments of a unique packet, so all fragments of one packet carry the same ID.
4. Offset : It is a 13-bit field that indicates where in the packet the current fragment starts. Each unit represents 8 bytes of the packet, so the packet size can be at most 64 KB. Every fragment except the last one must have its size in bytes as a multiple of 8 in order to ensure compliance with this structure. The reason why the position of a fragment is given as an offset value instead of simply numbering each fragment is that refragmentation may occur somewhere on the path to the other node. Fragmentation, though supported by IPv4, is not encouraged. This is
because if even one fragment is lost the entire packet needs to be discarded. A
quantity M.T.U (Maximum Transmission Unit) is defined for each link in the
route. It is the size of the largest packet that can be handled by the link. The Path-
M.T.U is then defined as the size of the largest packet that can be handled by the
path. It is the smallest of all the MTUs along the path. Given information about
the path MTU we can send packets with sizes smaller than the path MTU and thus
prevent fragmentation. This will not completely prevent it because routing tables
may change leading to a change in the path.
5. Flags :It has three bits -
o M bit : If M is one, then there are more fragments on the way and if M is
0, then it is the last fragment
o DF bit : If this bit is set to 1, then we should not fragment such a packet.
o Reserved bit : This bit is not used.
Reassembly can be done only at the destination and not at any intermediate node. This is because we are considering a datagram service, so it is not guaranteed that all the fragments of the packet will pass through the node at which we wish to do reassembly.
6. Total Length : It includes the IP header and everything that comes after it.
7. Time To Live (TTL) : Using this field, we can bound the lifetime of a packet: it must be delivered within this limit or else be destroyed. It is strictly treated as a number of hops; the packet should reach the destination within this number of hops. Every router decrements the value as the packet passes through it, and if the value becomes zero at a particular router, the packet is destroyed.
8. Protocol : This specifies the module to which we should hand over the packet
( UDP or TCP ). It is the next encapsulated protocol.
Value Protocol
0 IPv6 Hop-by-Hop Option.
1 ICMP, Internet Control Message Protocol.
2 IGMP, Internet Group Management Protocol. RGMP,
Router-port Group Management Protocol.
3 GGP, Gateway to Gateway Protocol.
4 IP in IP encapsulation.
5 ST, Internet Stream Protocol.
6 TCP, Transmission Control Protocol.
7 UCL, CBT.
8 EGP, Exterior Gateway Protocol.
9 IGRP.
10 BBN RCC Monitoring.
11 NVP, Network Voice Protocol.
12 PUP.
13 ARGUS.
14 EMCON, Emission Control Protocol.
15 XNET, Cross Net Debugger.
16 Chaos.
17 UDP, User Datagram Protocol.
18 TMux, Transport Multiplexing Protocol.
19 DCN Measurement Subsystems.
... (intermediate values omitted)
255 Reserved.
9. Header Checksum : This is the usual checksum field, used to detect errors. Since the TTL field changes at every router, the header checksum ( covering the header up to the options field ) is checked and recalculated at every router.
10. Source : It is the IP address of the source node.
11. Destination : It is the IP address of the destination node.
12. IP Options : The options field was created in order to allow features to be added
into IP as time passes and requirements change. Currently 5 options are specified
although not all routers support them. They are:
o Security: It tells us how secret the information is. In theory a military router might use this field to specify not to route through certain routers. In practice no routers support this field.
o Source Routing: It is used when we want the source to dictate how the
packet traverses the network. It is of 2 types
-> Loose Source Record Routing (LSRR): It requires that the packet
traverse a list of specified routers, in the order specified but the packet
may pass through some other routers as well.
-> Strict Source Record Routing (SSRR): It requires that the packet
traverse only the set of specified routers and nothing else. If it is not
possible, the packet is dropped with an error message sent to the host.
The above is the format for SSRR. For LSRR the code is 131.
o Record Routing :
In this option the intermediate routers put their IP addresses in the header, so that the destination knows the entire path of the packet. Space for storing the IP addresses is specified by the source itself. The pointer field points to the position where the next IP address has to be written, and the length field gives the number of bytes reserved by the source for writing the IP addresses. If the space provided for storing the IP addresses of the routers visited falls short, then the subsequent routers do not write their IP addresses.
o Time Stamp Routing :
It is similar to the record route option, except that nodes also add their timestamps to the packet. The new fields in this option are
-> Flags: It can have the following values
-> Overflow: It stores the number of nodes that were unable to add their timestamps to the packet. The maximum value is 15.
For all options a length field is put in order that a router not familiar with
the option will know how many bytes to skip. Thus every option is of the
form
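The fragmentation fields described earlier (the ID, the offset in 8-byte units, and the M flag) can be sketched with a toy splitter. This is an illustrative model only; the 20-byte header and the MTU of 620 are made-up values, and no real IP packets are built:

```python
def fragment(payload_len, mtu, header=20):
    """Sketch of IPv4 fragmentation: split a payload so that each
    fragment's data fits in the MTU, is a multiple of 8 bytes (except
    possibly the last), and carries an offset counted in 8-byte units
    plus the M ('more fragments') flag."""
    max_data = ((mtu - header) // 8) * 8     # round down to an 8-byte multiple
    frags, offset = [], 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = offset + size < payload_len   # M = 1 unless this is the last one
        frags.append({"offset": offset // 8, "len": size, "M": int(more)})
        offset += size
    return frags


# 1000 data bytes over a link with MTU 620 and a 20-byte header:
for f in fragment(1000, 620):
    print(f)   # 600 bytes at offset 0 (M=1), then 400 bytes at offset 75 (M=0)
```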
LECTURE NOTE 28
Routing
LECTURE NOTE 29
Random Walk: In this method a packet is sent by the node to one of its neighbours
randomly. This algorithm is highly robust. When the network is highly interconnected,
this algorithm has the property of making excellent use of alternative routes. It is usually
implemented by sending the packet onto the least queued link.
Delta Routing
Delta routing is a hybrid of the centralized and isolated routing algorithms. Here each node computes the cost of each line (i.e. some function of the delay, queue length, utilization, bandwidth, etc.) and periodically sends a packet to the central node giving these values; the central node then computes the k best paths from node i to node j. Let Cij1 be the cost of the best i-j path, Cij2 the cost of the next best path, and so on. If Cijn - Cij1 < delta (Cijn being the cost of the n-th best i-j path and delta some constant), then path n is regarded as equivalent to the best i-j path, since their costs differ by so little. When delta -> 0 this algorithm becomes centralized routing, and when delta -> infinity all the paths become equivalent.
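The equivalence test can be illustrated with made-up path costs:

```python
def equivalent_paths(costs, delta):
    """Delta-routing sketch: among the k best i-j path costs, every
    path whose cost is within `delta` of the best is treated as
    equivalent to it. delta -> 0 degenerates to centralized routing;
    delta -> infinity makes every path equivalent."""
    best = min(costs)
    return [c for c in costs if c - best < delta]


paths = [10, 11, 13, 20]                       # invented i-j path costs
print(equivalent_paths(paths, 0.5))            # only the best path
print(equivalent_paths(paths, 4))              # 10, 11 and 13 are equivalent
print(equivalent_paths(paths, float("inf")))   # every path is equivalent
```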
Multipath Routing
In the above algorithms it has been assumed that there is a single best path between any
pair of nodes and that all traffic between them should use it. In many networks however
there are several paths between pairs of nodes that are almost equally good. Sometimes in
order to improve the performance multiple paths between single pair of nodes are used.
This technique is called multipath routing or bifurcated routing. In this each node
maintains a table with one row for each possible destination node. A row gives the best,
second best, third best, etc outgoing line for that destination, together with a relative
weight. Before forwarding a packet, the node generates a random number and then
chooses among the alternatives, using the weights as probabilities. The tables are worked
out manually and loaded into the nodes before the network is brought up and not changed
thereafter.
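The weighted random choice among alternative lines can be sketched as follows; the line names and weights are made up:

```python
import random


def choose_line(alternatives, rng=random):
    """Multipath (bifurcated) routing sketch: a routing-table row lists
    (outgoing_line, relative_weight) pairs; pick one line at random
    using the weights as probabilities, as the note describes."""
    lines, weights = zip(*alternatives)
    return rng.choices(lines, weights=weights, k=1)[0]


# Hypothetical table row for one destination: best, second best, third best.
row = [("line-A", 0.63), ("line-B", 0.21), ("line-C", 0.16)]
picks = [choose_line(row) for _ in range(10000)]
print(picks.count("line-A") > picks.count("line-C"))  # heavier line chosen more
```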
Hierarchical Routing
In this method of routing the nodes are divided into regions based on hierarchy. A particular node can communicate with nodes at the same hierarchical level or with the nodes at a lower level directly under it. Here, the path from any source to a destination is fixed, and there is exactly one such path if the hierarchy is a tree.
LECTURE NOTE 30
Routing Algorithms
Non-Hierarchical Routing
In this type of routing, interconnected networks are viewed as a single network, where
bridges, routers and gateways are just additional nodes.
• Every node keeps information about every other node in the network
• In case of adaptive routing, the routing calculations are done and updated for all
the nodes.
The above two are also the disadvantages of non-hierarchical routing, since the table
sizes and the routing calculations become too large as the networks get bigger. So this
type of routing is feasible only for small networks.
Hierarchical Routing
This is essentially a 'Divide and Conquer' strategy. The network is divided into different
regions and a router for a particular region knows only about its own domain and other
routers. Thus, the network is viewed at two levels:
1. The Sub-network level, where each node in a region has information about its
peers in the same region and about the region's interface with other regions.
Different regions may have different 'local' routing algorithms. Each local
algorithm handles the traffic between nodes of the same region and also directs
the outgoing packets to the appropriate interface.
2. The Network Level, where each region is considered as a single node connected
to its interface nodes. The routing algorithms at this level handle the routing of
packets between two interface nodes, and is isolated from intra-regional transfer.
Networks can be organized in hierarchies of many levels; e.g. local networks of a city at
one level, the cities of a country at a level above it, and finally the network of all nations.
In hierarchical routing, an interface node needs to keep information about:
• All nodes in its region which are at one level below it.
• Its peer interfaces.
• At least one interface at a level above it, for outgoing packages.
Disadvantage :
• Once the hierarchy is imposed on the network, it is followed and possibility of
direct paths is ignored. This may lead to sub optimal routing.
Source Routing
Advantages:
• Bridges do not need to lookup their routing tables since the path is already
specified in the packet itself.
• The throughput of the bridges is higher, and this may lead to better utilization of
bandwidth, once a route is established.
Disadvantages:
• Establishing the route at first needs an expensive search method like flooding.
• To cope with dynamic relocation of nodes in a network, frequent updates of the tables are required; otherwise all packets would be sent in the wrong direction. This too is expensive.
Policy Based Routing
In this type of routing, certain restrictions are put on the type of packets accepted and sent. For example, the IIT-K router may decide to handle traffic pertaining to its departments only, and reject packets from other routes. This kind of routing is used for links with very low capacity or for security purposes.
LECTURE NOTE 31
Shortest Path Routing
Here, the central question dealt with is 'How to determine the optimal path for routing ?'
Various algorithms are used to determine the optimal routes with respect to some
predetermined criteria. A network is represented as a graph, with its terminals as nodes
and the links as edges. A 'length' is associated with each edge, which represents the cost
of using the link for transmission. The lower the cost, the more suitable is the link. The cost is
determined depending upon the criteria to be optimized. Some of the important ways of
determining the cost are:
• Minimum number of hops: If each link is given a unit cost, the shortest path is
the one with minimum number of hops. Such a route is easily obtained by a
breadth first search method. This is easy to implement but ignores load, link
capacity etc.
• Transmission and Propagation Delays: If the cost is fixed as a function of
transmission and propagation delays, it will reflect the link capacities and the
geographical distances. However these costs are essentially static and do not
consider the varying load conditions.
• Queuing Delays: If the cost of a link is determined through its queuing delays, it
takes care of the varying load conditions, but not of the propagation delays.
Ideally, the cost parameter should consider all the above mentioned factors, and it should
be updated periodically to reflect the changes in the loading conditions. However, if the
routes are changed according to the load, the load changes again. This feedback effect
between routing and load can lead to undesirable oscillations and sudden swings.
Routing Algorithms
As mentioned above, the shortest paths are calculated using suitable algorithms on the
graph representations of the networks. Let the network be represented by graph G ( V,
E ) and let the number of nodes be 'N'. For all the algorithms discussed below, the costs
associated with the links are assumed to be positive. A node has zero cost w.r.t itself.
Further, all the links are assumed to be symmetric, i.e. if di,j = cost of link from node i
to node j, then d i,j = d j,i . The graph is assumed to be complete. If there exists no edge
between two nodes, then a link of infinite cost is assumed. The algorithms given below
find costs of the paths from all nodes to a particular node; the problem is equivalent to
finding the cost of paths from a source to all destinations.
LECTURE NOTE 31
Bellman-Ford Algorithm
This algorithm iterates on the number of edges in a path to obtain the shortest path. Since
the number of hops possible is limited (cycles are implicitly not allowed), the algorithm
terminates giving the shortest path.
Notation:
d i,j = Length of path between nodes i and j, indicating the cost of the link.
h = Number of hops.
D[ i,h] = Shortest path length from node i to node 1, with up to 'h' hops.
D[ 1,h] = 0 for all h .
Algorithm :
Principle:
For zero hops, the minimum length path has length of infinity, for every node. For one
hop the shortest-path length associated with a node is equal to the length of the edge
between that node and node 1. Hereafter, we increment the number of hops allowed,
(from h to h+1 ) and find out whether a shorter path exists through each of the other
nodes. If it exists, say through node 'j', then its length must be the sum of the lengths between these two nodes (i.e. di,j ) and the shortest path between j and 1 obtainable in up to h hops. If no such path exists, the path length remains the same. The algorithm is guaranteed to terminate, since there are at most N nodes, and so at most N-1 hops in a path. It has a time complexity of O( N^3 ).
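Under the note's assumptions (positive symmetric costs, infinite cost for missing links, node 1 as the common destination), the hop-by-hop iteration can be sketched as follows; the four-node cost matrix is an invented example:

```python
INF = float("inf")


def bellman_ford(d, source=0):
    """Bellman-Ford as in the note: iterate on the number of hops h.
    `d` is a symmetric cost matrix with INF for missing links and 0 on
    the diagonal; returns the shortest-path cost from every node to
    `source` (node 1 in the note's numbering)."""
    n = len(d)
    D = [INF] * n
    D[source] = 0                       # D[1, h] = 0 for all h
    for _ in range(n - 1):              # at most N-1 hops on a shortest path
        D = [min(D[i], min(d[i][j] + D[j] for j in range(n)))
             for i in range(n)]
    return D


# Links: 0-1 cost 1, 1-2 cost 2, 0-3 cost 10, 2-3 cost 3.
d = [[0, 1, INF, 10],
     [1, 0, 2, INF],
     [INF, 2, 0, 3],
     [10, INF, 3, 0]]
print(bellman_ford(d))   # [0, 1, 3, 6]: node 3 reaches node 0 via 2 and 1
```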
Dijkstra's Algorithm
Notation:
Di = Length of shortest path from node 'i' to node 1.
di,j = Length of path between nodes i and j .
Algorithm
Each node j is labeled with Dj, which is an estimate of cost of path from node j to node
1. Initially, let the estimates be infinity, indicating that nothing is known about the paths.
We now iterate on the length of paths, each time revising our estimate to lower values, as
we obtain them. Actually, we divide the nodes into two groups ; the first one, called set P
contains the nodes whose shortest distances have been found, and the other Q containing
all the remaining nodes. Initially P contains only the node 1. At each step, we select the
node that has minimum cost path to node 1. This node is transferred to set P. At the first
step, this corresponds to shifting the node closest to 1 in P. Its minimum cost to node 1 is
now known. At the next step, select the next closest node from set Q and update the
labels corresponding to each node using :
Dj = min [ Dj , Di + dj,i ]
Finally, after N-1 iterations, the shortest paths for all nodes are known, and the algorithm
terminates.
Principle
Let the closest node to 1 at some step be i. Then i is shifted to P. Now, for each node j ,
the closest path to 1 either passes through i or it doesn't. In the first case Dj remains the
same. In the second case, the revised estimate of Dj is the sum Di + di,j . So we take the
minimum of these two cases and update Dj accordingly. As each of the nodes get
transferred to set P, the estimates get closer to the lowest possible value. When a node is
transferred, its shortest path length is known. So finally all the nodes are in P and the Dj 's
represent the minimum costs. The algorithm is guaranteed to terminate in N-1 iterations
and its complexity is O( N^2 ).
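A minimal sketch of this procedure, on an invented four-node cost matrix:

```python
INF = float("inf")


def dijkstra(d, source=0):
    """Dijkstra's algorithm following the note: P holds nodes whose
    shortest distance to `source` is final, Q the rest. At each step
    move the closest node of Q into P and relax the labels with
    Dj = min(Dj, Di + d[j][i])."""
    n = len(d)
    D = [INF] * n
    D[source] = 0
    in_P = [False] * n
    for _ in range(n):
        # select the node of Q with minimum cost to the source
        i = min((x for x in range(n) if not in_P[x]), key=lambda x: D[x])
        in_P[i] = True
        for j in range(n):
            if not in_P[j]:
                D[j] = min(D[j], D[i] + d[j][i])
    return D


# Links: 0-1 cost 1, 1-2 cost 2, 0-3 cost 10, 2-3 cost 3.
d = [[0, 1, INF, 10],
     [1, 0, 2, INF],
     [INF, 2, 0, 3],
     [10, INF, 3, 0]]
print(dijkstra(d))   # [0, 1, 3, 6]
```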
LECTURE NOTE 32
Floyd-Warshall Algorithm
This algorithm iterates on the set of nodes that can be used as intermediate nodes on
paths. This set grows from a single node ( say node 1 ) at start to finally all the nodes of
the graph. At each iteration, we find the shortest path using given set of nodes as
intermediate nodes, so that finally all the shortest paths are obtained.
Notation
Di,j [n] = Length of shortest path between the nodes i and j using only the nodes
1,2,....n as intermediate nodes.
Initial Condition
Di,j[0] = di,j for all nodes i,j .
Algorithm
Initially, n = 0. At each iteration, add next node to n. i.e. For n = 1,2, .....N-1 ,
Principle
Suppose the shortest path between i and j using nodes 1,2,...n is known. Now, if node n+1
is allowed to be an intermediate node, then the shortest path under new conditions either
passes through node n+1 or it doesn't. If it does not pass through the node n+1, then
Di,j[n+1] is same as Di,j[n] . Else, we find the cost of the new route, which is obtained
from the sum, Di,n+1[n] + Dn+1,j[n]. So we take the minimum of these two cases at each
step. After adding all the nodes to the set of intermediate nodes, we obtain the shortest
paths between all pairs of nodes together. The complexity of Floyd-Warshall algorithm
is O ( N3 ).
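A sketch of this iteration (the standard Floyd-Warshall triple loop), again on an invented four-node cost matrix:

```python
INF = float("inf")


def floyd_warshall(d):
    """All-pairs shortest paths: D after step k allows nodes 0..k as
    intermediates. At each step either the old path stands, or a
    shorter one passes through the newly admitted node. O(N^3) overall."""
    n = len(d)
    D = [row[:] for row in d]           # D[0] = d, copied so d is untouched
    for k in range(n):                  # admit node k as an intermediate
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D


# Links: 0-1 cost 1, 1-2 cost 2, 0-3 cost 10, 2-3 cost 3.
d = [[0, 1, INF, 10],
     [1, 0, 2, INF],
     [INF, 2, 0, 3],
     [10, INF, 3, 0]]
print(floyd_warshall(d)[0])   # shortest paths from node 0: [0, 1, 3, 6]
```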
It is observed that all the three algorithms mentioned above give comparable
performance, depending upon the exact topology of the network.
LECTURE NOTE 33
ARP,RARP,ICMP Protocols
Address Resolution Protocol
If a machine wants to talk to another machine on the same network, it needs that machine's physical (MAC) address. But since the application supplies only the destination's IP address, some mechanism is required to bind the IP address to its MAC address. This is done through the Address Resolution Protocol (ARP): the IP address of the destination node is broadcast, and the destination node informs the source of its MAC address.
This means that every time machine A wants to send packets to machine B, A would have to send an ARP packet to resolve the MAC address of B, which would increase the traffic load too much. To reduce the communication cost, computers that use ARP maintain a cache of recently acquired IP-to-MAC address bindings, so they don't have to use ARP repeatedly.
ARP Refinements: Several refinements of ARP are possible:
• When machine A wants to send packets to machine B, it is possible that machine B is going to send packets to machine A in the near future. So, to save B an ARP exchange, A should put its own IP-to-MAC address binding in the packet requesting the MAC address of B.
• Since A broadcasts its initial request for the MAC address of B, every machine on the network can extract and store in its cache the IP-to-MAC address binding of A.
• When a new machine appears on the network (e.g. when an operating system reboots), it can broadcast its IP-to-MAC address binding so that all other machines can store it in their caches. This eliminates a lot of ARP packets from all other machines when they later want to communicate with this new machine.
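The caching behaviour described above can be modelled with a toy sketch. The addresses, the callback standing in for the network broadcast, and the MAC value are all invented for illustration:

```python
class ArpCache:
    """Toy ARP cache: bindings seen in any ARP packet are stored, so a
    repeated resolve for the same IP causes no further broadcasts.
    Addresses are plain strings; no real packets are sent."""

    def __init__(self):
        self.table = {}            # ip -> mac

    def learn(self, ip, mac):
        self.table[ip] = mac       # cache a binding seen on the wire

    def resolve(self, ip, broadcast):
        if ip in self.table:       # cache hit: no ARP traffic needed
            return self.table[ip]
        mac = broadcast(ip)        # "who is <ip>?" on the local network
        self.learn(ip, mac)
        return mac


cache = ArpCache()
wire = lambda ip: "aa:bb:cc:00:00:01"      # hypothetical reply from B
print(cache.resolve("10.0.0.2", wire))     # first call broadcasts, then caches
print(cache.resolve("10.0.0.2", wire))     # second call is served from cache
```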
Consider a scenario where a computer tries to contact some remote machine using the ping program, assuming that there has been no exchange of IP datagrams previously between the two machines; an ARP packet must therefore be sent to identify the MAC address of the remote machine.
The ARP request message ("who is A.A.A.A? tell B.B.B.B", where the two are IP addresses) is broadcast on the local area network with an Ethernet protocol type of 0x0806. The packet is discarded by all the machines except the target machine, which responds with an ARP reply message ("A.A.A.A is hh:hh:hh:hh:hh:hh", where hh:hh:hh:hh:hh:hh is the Ethernet source address). This packet is unicast to the machine with IP address B.B.B.B. Since the ARP request message included the hardware address (Ethernet source address) of the requesting computer, the target machine doesn't require another ARP message to figure it out.
Reverse Address Resolution Protocol
RARP is a protocol by which a physical machine in a local area network can request to
learn its IP address from a gateway server's Address Resolution Protocol table or cache.
This is needed since the machine may not have a permanently attached disk where it can store its IP address. A network administrator creates a table in a local area
network's gateway router that maps the physical machine (or Medium Access Control -
MAC) addresses to corresponding Internet Protocol addresses. When a new machine is
set up, its RARP client program requests from the RARP server on the router to be sent
its IP address. Assuming that an entry has been set up in the router table, the RARP server
will return the IP address to the machine which can store it for future use.
Detailed Mechanism
Both the machine that issues the request and the server that responds use physical
network addresses during their brief communication. Usually, the requester does not
know its own physical address being looked up by others, and certainly not its IP
address, so the request is broadcast to all the machines on the network. The requester
must then identify itself uniquely to the server. Either the CPU serial number or the
machine's physical network address can be used for this, but using the physical
address as a unique id has two advantages.
• These addresses are always available and do not have to be bound into bootstrap
code.
• Because the identifying information depends on the network and not on the CPU
vendor, all machines on a given network will supply unique identifiers.
Request:
Like an ARP message, a RARP message is sent from one machine to another
encapsulated in the data portion of a network frame. An Ethernet frame carrying a RARP
request has the usual preamble, Ethernet source and destination addresses, and packet
type fields in front of the frame. The type field contains the value 8035 (base 16) to
identify the contents of the frame as a RARP message. The data portion of the frame
contains the 28-octet RARP message. The sender broadcasts a RARP request that
specifies itself as both the sender and target machine, and supplies its physical
network address in the target hardware address field. All machines on the network
receive the request, but only those authorised to supply the RARP service process the
request and send a reply; such machines are known informally as RARP servers. For
RARP to succeed, the network must contain at least one RARP server.
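The 28-octet request described above can be sketched in a few lines. This Python fragment is illustrative only; it reuses the ARP message layout (the two protocols share a format) with the RARP request operation code, and shows the defining quirk that the requester names itself as both sender and target:

```python
import struct

RARP_ETHERTYPE = 0x8035   # frame type field value identifying a RARP message
OP_RARP_REQUEST = 3       # operations: 3 = RARP request, 4 = RARP reply

def build_rarp_request(my_mac):
    """Sketch: the 28-octet RARP request a diskless machine broadcasts."""
    msg = struct.pack("!HHBBH",
                      1,                # hardware type: Ethernet
                      0x0800,           # protocol type: IPv4
                      6, 4,             # address lengths (MAC, IP)
                      OP_RARP_REQUEST)
    msg += my_mac + b"\x00" * 4         # sender hardware address; IP unknown
    msg += my_mac + b"\x00" * 4         # target hardware address = requester itself
    return msg
```

A RARP server answering this request only needs to fill in the target protocol address field and flip the operation code to reply, exactly as the Reply paragraph below describes.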
Reply:
A server answers a request by filling in the target protocol address field, changing the
message type from request to reply, and sending the reply back directly to the machine
making the request.
Drawbacks of RARP
• Since it operates at a low level, it requires direct access to the network hardware,
which makes it difficult for an application programmer to build a server.
• It does not fully utilize the capability of a network like Ethernet, which enforces a
minimum packet size, since the reply from the server contains only one small piece
of information: the 32-bit Internet address.
LECTURE NOTE 35
ICMP
This protocol provides a mechanism that gateways and hosts use to communicate control
or error information. The Internet Protocol provides an unreliable, connectionless
datagram service, and a datagram travels from gateway to gateway until it reaches one
that can deliver it directly to its final destination. If a gateway cannot route or
deliver a datagram, or if the gateway detects an unusual condition, like network
congestion, that affects its ability to forward the datagram, it needs to instruct the
original source to take action to avoid or correct the problem. The Internet Control
Message Protocol allows gateways to send error or control messages to other gateways
or hosts; ICMP provides communication between the Internet Protocol software on one
machine and the Internet Protocol software on another. It is a special-purpose message
mechanism added by the designers to the TCP/IP protocols to allow gateways in an
internet to report errors or provide information about unexpected circumstances. The
IP protocol itself contains nothing to help the sender test connectivity or learn
about failures.
ICMP message types (value of the TYPE field):
0 ECHO REPLY
3 DESTINATION UNREACHABLE
4 SOURCE QUENCH
5 REDIRECT (CHANGE A ROUTE)
8 ECHO REQUEST
11 TIME EXCEEDED FOR A DATAGRAM
12 PARAMETER PROBLEM ON A DATAGRAM
13 TIMESTAMP REQUEST
14 TIMESTAMP REPLY
15 INFORMATION REQUEST (OBSOLETE)
16 INFORMATION REPLY (OBSOLETE)
17 ADDRESS MASK REQUEST
18 ADDRESS MASK REPLY
For a DESTINATION UNREACHABLE message, the CODE field describes the problem:
0 NETWORK UNREACHABLE
1 HOST UNREACHABLE
2 PROTOCOL UNREACHABLE
3 PORT UNREACHABLE
4 FRAGMENTATION NEEDED AND DF SET
5 SOURCE ROUTE FAILED
6 DESTINATION NETWORK UNKNOWN
7 DESTINATION HOST UNKNOWN
8 SOURCE HOST ISOLATED
9 COMMUNICATION WITH DESTINATION NETWORK ADMINISTRATIVELY PROHIBITED
10 COMMUNICATION WITH DESTINATION HOST ADMINISTRATIVELY PROHIBITED
11 NETWORK UNREACHABLE FOR TYPE OF SERVICE
12 HOST UNREACHABLE FOR TYPE OF SERVICE
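The two tables above can be turned into a small lookup helper. This Python sketch (the dictionaries simply transcribe the tables; the function and names are our own, for illustration) maps the TYPE and CODE octets at the front of an ICMP message to a readable description:

```python
# TYPE field values, transcribed from the table above (obsolete types omitted)
ICMP_TYPES = {
    0: "ECHO REPLY", 3: "DESTINATION UNREACHABLE", 4: "SOURCE QUENCH",
    5: "REDIRECT", 8: "ECHO REQUEST", 11: "TIME EXCEEDED FOR A DATAGRAM",
    12: "PARAMETER PROBLEM ON A DATAGRAM", 13: "TIMESTAMP REQUEST",
    14: "TIMESTAMP REPLY", 17: "ADDRESS MASK REQUEST", 18: "ADDRESS MASK REPLY",
}

# CODE field values that qualify a DESTINATION UNREACHABLE message (first few)
UNREACHABLE_CODES = {
    0: "NETWORK UNREACHABLE", 1: "HOST UNREACHABLE",
    2: "PROTOCOL UNREACHABLE", 3: "PORT UNREACHABLE",
    4: "FRAGMENTATION NEEDED AND DF SET", 5: "SOURCE ROUTE FAILED",
}

def describe_icmp(icmp_bytes):
    """Map the first two octets (TYPE, CODE) of an ICMP message to text."""
    msg_type, code = icmp_bytes[0], icmp_bytes[1]
    name = ICMP_TYPES.get(msg_type, "UNKNOWN TYPE %d" % msg_type)
    if msg_type == 3:  # only DESTINATION UNREACHABLE carries these codes
        name += ": " + UNREACHABLE_CODES.get(code, "CODE %d" % code)
    return name
```

For example, a TYPE 3 / CODE 3 message decodes as "DESTINATION UNREACHABLE: PORT UNREACHABLE", which is what a host returns when no process is listening on the destination port.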
Congestion can arise in at least two ways:
1. A high-speed computer may be able to generate traffic faster than a network can
transfer it.
2. If many computers simultaneously need to send datagrams through a single
gateway, the gateway can experience congestion, even though no single source
causes the problem.
When datagrams arrive too quickly for a host or a gateway to process, it enqueues them
in memory temporarily. If the traffic continues, the host or gateway eventually exhausts
memory and must discard additional datagrams that arrive. A machine uses ICMP source
quench messages to relieve congestion. A source quench message is a request for the
source to reduce its current rate of datagram transmission.
There is no ICMP message to reverse the effect of a source quench.
Source Quench :
Source quench messages have a field that contains a datagram prefix in addition to the
usual ICMP TYPE, CODE, and CHECKSUM fields. Congested gateways send one source
quench message each time they discard a datagram; the datagram prefix identifies the
datagram that was dropped.
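The message layout described above can be sketched in code. The following Python fragment is an illustrative sketch, assuming the common ICMP error-message layout (TYPE, CODE, CHECKSUM, a 4-octet unused field, then a prefix consisting of the dropped datagram's IP header plus its first 64 bits of data); it also shows the standard Internet checksum that all ICMP messages carry.

```python
import struct

def icmp_checksum(data):
    """Internet checksum: one's-complement of the one's-complement
    sum of the data taken as 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back in
        total = (total & 0xffff) + (total >> 16)
    return ~total & 0xffff

def build_source_quench(dropped_datagram):
    """Sketch: TYPE 4 (source quench), CODE 0, checksum, 4 unused octets,
    then the datagram prefix identifying what was dropped."""
    # prefix: IP header (20 octets, no options assumed) + first 8 data octets
    prefix = dropped_datagram[:28]
    msg = struct.pack("!BBHI", 4, 0, 0, 0) + prefix   # checksum field zeroed
    csum = icmp_checksum(msg)
    return msg[:2] + struct.pack("!H", csum) + msg[4:]
```

A useful property of this checksum is that recomputing it over the finished message yields zero, which is how a receiver verifies the message arrived intact.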