
Chapter 18

Introduction
to
Network
Layer
Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
18-1 NETWORK-LAYER SERVICES

Before discussing the network layer in the Internet today, let’s briefly discuss the network-layer services that, in general, are expected from a network-layer protocol.

18.2
Figure 18.1: Communication at the network layer

18.3
Figure 18.1: Communication at the network layer

In this scenario, a scientist working at a research company, Sky Research, needs to order a book related to her research from an online bookseller, Scientific Books.

We can think of five different levels of communication between Alice, the computer on which our scientist is working, and Bob, the computer that provides the online service. Communication at the application, transport, network, or data-link layer is logical; communication at the physical layer is physical. For simplicity, we have shown only host-to-router, router-to-router, and router-to-host communication, but the switches are also involved in the physical communication.

18.4
Figure 18.1: Communication at the network layer

The figure shows that the Internet is made of many networks (or links) connected
through the connecting devices. In other words, the Internet is an internetwork, a
combination of LANs and WANs. To better understand the role of the network
layer (or the internetwork layer), we need to think about the connecting devices
(routers or switches) that connect the LANs and WANs.

As the figure shows, the network layer is involved at the source host, destination
host, and all routers in the path (R2, R4, R5, and R7). At the source host (Alice),
the network layer accepts a packet from a transport layer, encapsulates the packet
in a datagram, and delivers the packet to the data-link layer. At the destination
host (Bob), the datagram is decapsulated, and the packet is extracted and
delivered to the corresponding transport layer. Although the source and
destination hosts are involved in all five layers of the TCP/IP suite, the routers
use three layers if they are routing packets only; however, they may need the
transport and application layers for control purposes. A router in the path is
normally shown with two data-link layers and two physical layers, because it
receives a packet from one network and delivers it to another network.

18.5
18.1.1 Packetizing

The first duty of the network layer is definitely packetizing: encapsulating the payload (data received from the upper layer) in a network-layer packet at the source and decapsulating the payload from the network-layer packet at the destination. In other words, one duty of the network layer is to carry a payload from the source to the destination without changing it or using it.
The network layer is doing the service of a carrier such as the postal office,
which is responsible for delivery of packages from a sender to a receiver
without changing or using the contents.

The source host receives the payload from an upper-layer protocol, adds a
header that contains the source and destination addresses and some other
information that is required by the network-layer protocol (as discussed
later) and delivers the packet to the data-link layer. The source is not
allowed to change the content of the payload unless it is too large for
delivery and needs to be fragmented.

18.6
18.1.1 Packetizing

The destination host receives the network-layer packet from its data-link
layer, decapsulates the packet, and delivers the payload to the
corresponding upper-layer protocol. If the packet is fragmented at the
source or at routers along the path, the network layer is responsible for
waiting until all fragments arrive, reassembling them, and delivering them
to the upper-layer protocol.

The routers in the path are not allowed to decapsulate the packets they receive unless the packets need to be fragmented. The routers are not
allowed to change source and destination addresses either. They just
inspect the addresses for the purpose of forwarding the packet to the next
network on the path. However, if a packet is fragmented, the header needs
to be copied to all fragments and some changes are needed, as we discuss
in detail later.

18.7
18.1.2 Routing and Forwarding

Other duties of the network layer, which are as important as the first, are routing and forwarding, which are directly related to each other.

18.8
18.1.2 Routing and Forwarding

Routing
The network layer is responsible for routing the packet from its source to
the destination. A physical network is a combination of networks (LANs
and WANs) and routers that connect them. This means that there is more
than one route from the source to the destination. The network layer is
responsible for finding the best one among these possible routes. The
network layer needs to have some specific strategies for defining the
best route. In the Internet today, this is done by running some routing
protocols to help the routers coordinate their knowledge about the
neighborhood and to come up with consistent tables to be used when a
packet arrives. The routing protocols, which we discuss in Chapters 20 and
21, should be run before any communication occurs.

18.9
18.1.2 Routing and Forwarding

Forwarding
If routing is applying strategies and running some routing protocols to
create the decision-making tables for each router, forwarding can be
defined as the action applied by each router when a packet arrives at one of
its interfaces. The decision-making table a router normally uses for
applying this action is sometimes called the forwarding table and
sometimes the routing table. When a router receives a packet from one of
its attached networks, it needs to forward the packet to another attached
network (in
unicast routing) or to some attached networks (in multicast routing). To
make this decision, the router uses a piece of information in the packet
header, which can be the destination address or a label, to find the
corresponding output interface number in the forwarding table. Figure 18.2
shows the idea of the forwarding process in a router.
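
As a minimal sketch of this decision (the table entries and interface numbers below are hypothetical, chosen only for illustration), the forwarding table can be pictured as a mapping from the piece of information in the packet header to an output interface:

# Minimal Python sketch of the forwarding decision described above.
# The entries and interface numbers are hypothetical.
forwarding_table = {
    "A": 1,   # packets whose header carries A leave through interface 1
    "B": 2,
    "C": 3,
}

def forward(header_value: str) -> int:
    # header_value is either a destination address or a label
    return forwarding_table[header_value]

print(forward("B"))   # 2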

18.10
Figure 18.2: Forwarding process

18.11
18-2 PACKET SWITCHING

From the discussion of routing and forwarding in the previous section, we infer that a kind of switching occurs at the network layer. A router, in fact, is a switch that creates a connection between an input port and an output port (or a set of output ports).

18.12
Switching techniques
In data communications, switching techniques are divided into two broad categories: circuit switching and packet switching.

Circuit switching is mostly used at the physical layer.

Only packet switching is used at the network layer (the unit of data at this
layer is a packet).

At the network layer, a message from the upper layer is divided into
manageable packets and each packet is sent through the network. The
source of the message sends the packets one by one; the destination of the
message receives the packets one by one. The destination waits for all
packets belonging to the same message to arrive before delivering the
message to the upper layer. The connecting devices in a packet-switched
network still need to decide how to route the packets to the final
destination. Today, a packet-switched network can use two different
approaches to route the packets: the datagram approach and the virtual
circuit approach.

18.13
18.2.1 Datagram Approach

When the Internet started, to make it simple, the network layer was designed to provide a connectionless service in which the network-layer protocol treats each packet independently, with each packet having no relationship to any other packet. The idea was that the network layer is only responsible for delivery of packets from the source to the destination. In this approach, the packets in a message may or may not travel the same path to their destination. Figure 18.3 shows the idea.

18.14
Figure 18.3: A connectionless packet-switched network

18.15
Datagram Approach

When the network layer provides a connectionless service, each packet traveling in the Internet is an independent entity; there is no relationship between packets belonging to the same message. The switches in this type of network are called routers. A packet belonging to a message may be followed by a packet belonging to the same message or to a different message. A packet may be followed by a packet coming from the same or from a different source.

Each packet is routed based on the information contained in its header: source and destination addresses. The destination address defines where it should go; the source address defines where it comes from. The router in this case routes the packet based only on the destination address. The source address may be used to send an error message to the source if the packet is discarded. Figure 18.4 shows the forwarding process in a router in this case. We have used symbolic addresses such as A and B.

18.16
Figure 18.4: Forwarding process in a router when used in a
connectionless network

SA DA Data SA DA Data

18.17
Datagram Approach

The datagram networks are sometimes referred to as connectionless networks. The term connectionless here means that the switch (packet switch) does not keep information about the connection state. There are no setup or teardown phases. Each packet is treated the same by a switch regardless of its source or destination.

In other words, there is no resource allocation for a packet. This means that
there is no reserved bandwidth on the links, and there is no scheduled
processing time for each packet. Resources are allocated on demand. The
allocation is done on a first-come, first-served basis. When a switch receives a packet, no matter what the source or destination is, the packet must wait if there are other packets being processed.
in our daily life, this lack of reservation may create delay. For example, if
we do not have a reservation at a restaurant, we might have to wait.

18.18
18.2.2 Virtual-Circuit Approach

In a connection-oriented service (also called the virtual-circuit approach), there is a relationship between all packets belonging to a message. Before all datagrams in a message can be sent, a virtual connection should be set up to define the path for the datagrams. After connection setup, the datagrams can all follow the same path. In this type of service, not only must the packet contain the source and destination addresses, it must also contain a flow label, a virtual circuit identifier that defines the virtual path the packet should follow. Shortly, we will show how this flow label is determined, but for the moment, we assume that the packet carries this label.

18.19
Figure 18.5 shows the concept of connection-oriented service.

Four datagrams (1, 2, 3, 4) belonging to the same message follow the same path (Sender → R1 → R3 → R4 → Receiver).

18.20
Virtual-Circuit Approach
Each packet is forwarded based on the label in the packet. To
follow the idea of connection-oriented design to be used in the
Internet, we assume that the packet has a label when it reaches the
router. Figure 18.6 shows the idea. In this case, the forwarding
decision is based on the value of the label, or virtual circuit
identifier, as it is sometimes called.

18.21
Virtual-Circuit Approach

To create a connection-oriented service, a three-phase process is used: setup, data transfer, and teardown.

In the setup phase, the source and destination addresses of the sender and receiver are used to make table entries for the connection-oriented service.

In the teardown phase, the source and destination inform the router
to delete the corresponding entries.

Data transfer occurs between these two phases.

18.22
Virtual-Circuit Approach

Setup phase
In the setup phase, a router creates an entry for a virtual circuit.
For example, suppose source A needs to create a virtual circuit
to destination B. Two auxiliary packets need to be exchanged
between the sender and the receiver: the request packet and the
acknowledgment packet.

18.23
Virtual-Circuit Approach

Setup phase—Request packet


A request packet is sent from the source to the destination. This auxiliary packet
carries the source and destination addresses. Figure 18.7 shows the process.

18.24
Virtual-Circuit Approach

Setup phase—Request packet

18.25
Virtual-Circuit Approach

Setup phase—Acknowledge packet


A special packet, called the acknowledgment packet, completes the entries in the
switching tables. Figure 18.8 shows the process.

18.26
Virtual-Circuit Approach

Setup phase—Acknowledge packet

18.27
Virtual-Circuit Approach

Data-Transfer phase
The second phase is called the data-transfer phase. After all routers have created their forwarding tables for a specific virtual circuit, the network-layer packets belonging to one message can be sent one after another. In Figure 18.9, we show the flow of a single packet, but the process is the same for 1, 2, or 100 packets.

18.28
Virtual-Circuit Approach
Data-Transfer phase
The source computer uses the label 14, which it has received from router R1
in the setup phase. Router R1 forwards the packet to router R3, but changes
the label to 66. Router R3 forwards the packet to router R4, but changes the
label to 22. Finally, router R4 delivers the packet to its final destination with
the label 77. All the packets in the message follow the same sequence of
labels, and the packets arrive in order at the destination.
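
As a sketch of this label swapping (the labels 14, 66, 22, and 77 are the ones in the example; the tables themselves are illustrative, standing in for the entries built during the setup phase):

# Python sketch of per-hop label swapping along the virtual circuit.
# Each router maps an incoming label to (outgoing label, next hop).
label_tables = {
    "R1": {14: (66, "R3")},
    "R3": {66: (22, "R4")},
    "R4": {22: (77, "destination")},
}

def send_along_circuit(first_label):
    label, hop = first_label, "R1"
    while hop in label_tables:
        label, hop = label_tables[hop][label]
        print(f"forwarded with label {label} toward {hop}")

send_along_circuit(14)
# forwarded with label 66 toward R3
# forwarded with label 22 toward R4
# forwarded with label 77 toward destination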

18.29
Virtual-Circuit Approach

Teardown phase
In the teardown phase, source A, after sending all packets to B, sends a
special packet called a teardown packet. Destination B responds with a
confirmation packet. All routers delete the corresponding entries from their
tables.

18.30
18-3 NETWORK-LAYER PERFORMANCE

The upper-layer protocols that use the service of the network layer expect to receive an ideal service, but the network layer is not perfect. The performance of a network can be measured in terms of delay, throughput, and packet loss. Congestion control is an issue that can improve the performance.

18.31
18.3.1 Delay

All of us expect an instantaneous response from a network, but a packet, from its source to its destination, encounters delays. The delays in a network can be divided into four types: transmission delay, propagation delay, processing delay, and queuing delay. Let us first discuss each of these delay types and then show how to calculate a packet's delay from the source to the destination.

18.32
18.3.1 Delay

Transmission delay

18.33
18.3.1 Delay

Propagation delay

18.34
18.3.1 Delay

Processing delay

18.35
18.3.1 Delay

Queuing delay

18.36
18.3.1 Delay

Total delay
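
As a sketch of the standard relationships between these four components (assuming a path with n routers between the source and the destination; processing and queuing delays have no simple closed form and are simply measured or estimated per node):

\[
\mathrm{Delay_{tr}} = \frac{\text{packet length}}{\text{transmission rate}}, \qquad
\mathrm{Delay_{pg}} = \frac{\text{distance}}{\text{propagation speed}}
\]
\[
\text{Total delay} = (n+1)\left(\mathrm{Delay_{tr}} + \mathrm{Delay_{pg}} + \mathrm{Delay_{pr}}\right) + n\,\mathrm{Delay_{qu}}
\]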

18.37
18.3.2 Throughput

Throughput at any point in a network is defined as the number of bits passing through the point in a second, which is actually the transmission rate of data at that point. In a path from source to destination, a packet may pass through several links (networks), each with a different transmission rate. How, then, can we determine the throughput of the whole path? To see the situation, assume that we have three links, each with a different transmission rate, as shown in Figure 18.10.

18.38
Throughput

To see the situation, assume that we have three links, each with
a different transmission rate, as shown in Figure 18.10.

18.39
Throughput

In this figure, the data can flow at the rate of 200 kbps in Link1. However,
when the data arrives at router R1, it cannot pass at this rate. Data needs to
be queued at the router and sent at 100 kbps. When data arrives at router
R2, it could be sent at the rate of 150 kbps, but there is not enough data to
be sent. In other words, the average rate of the data flow in Link3 is also
100 kbps. We can conclude that the average data rate for this path is 100
kbps, the minimum of the three different data rates.

18.40
Throughput

The figure also shows that we can simulate the behavior of each link with pipes of different sizes; the average throughput is determined by the bottleneck, the pipe with the smallest diameter. In general, in a path with n links in series, we have

Throughput = minimum (TR1, TR2, . . . , TRn)

18.41
Throughput
Although the situation in Figure 18.10 shows how to calculate the
throughput when the data is passed through several links, the actual
situation in the Internet is that the data normally passes through two access
networks and the Internet backbone, as shown in Figure 18.11.

The Internet backbone has a very high transmission rate, in the range of
gigabits per second. This means that the throughput is normally defined as
the minimum transmission rate of the two access links that connect the
source and destination to the backbone. Figure 18.11 shows this situation,
in which the throughput is the minimum of TR1 and TR2. For example, if a
server connects to the Internet via a Fast Ethernet LAN with the data rate of
100 Mbps, but a user who wants to download a file connects to the Internet
via a dial-up telephone line with the data rate of 40 kbps, the throughput is
40 kbps. The bottleneck is definitely the dial-up line.
18.42
Throughput
We need to mention another situation in which we think about the
throughput. The link between two routers is not always dedicated to one
flow. A router may collect the flow from several sources or distribute the
flow between several sources. In this case the transmission rate of the link
between the two routers is actually shared between the flows and this
should be considered when we calculate the throughput. For example, in
Figure 18.12 the transmission rate of the main link in the calculation of the
throughput is only 200 kbps because the link is shared between three paths.

18.43
18.3.3 Packet Loss

Another issue that severely affects the performance of communication is the number of packets lost during transmission. When a router receives a packet while processing another packet, the received packet needs to be stored in the input buffer waiting for its turn. A router, however, has an input buffer with a limited size. A time may come when the buffer is full and the next packet needs to be dropped. The effect of packet loss on the Internet network layer is that the packet needs to be resent, which in turn may create overflow and cause more packet loss. A lot of theoretical studies have been done in queuing theory to prevent the overflow of queues and prevent packet loss.

18.44
18.3.4 Congestion Control

Congestion control is a mechanism for improving performance. In Chapter 23, we will discuss congestion at the transport layer. Although congestion at the network layer is not explicitly addressed in the Internet model, the study of congestion at this layer may help us to better understand the cause of congestion at the transport layer and find possible remedies to be used at the network layer. Congestion at the network layer is related to two issues, throughput and delay, which we discussed in the previous section.

18.45
Congestion Control

Figure 18.13 shows throughput and delay as functions of load.

18.46
Congestion Control

When the load is much less than the capacity of the network, the delay is at
a minimum. This minimum delay is composed of propagation delay and
processing delay, both of which are negligible. However, when the load
reaches the network capacity, the delay increases sharply because we now
need to add the queuing delay to the total delay. Note that the delay
becomes infinite when the load is greater than the capacity.

18.47
Congestion Control

When the load is below the capacity of the network, the throughput
increases proportionally with the load. We expect the throughput to remain
constant after the load reaches the capacity, but instead the throughput
declines sharply. The reason is the discarding of packets by the routers.
When the load exceeds the capacity, the queues become full and the routers
have to discard some packets. Discarding packets does not reduce the
number of packets in the network because the sources retransmit the
packets, using time-out mechanisms, when the packets do not reach the
destinations.

18.48
Congestion Control

Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).

18.49
Congestion Control

Open-loop congestion control


In open-loop congestion control, policies are applied to prevent congestion
before it happens. In these mechanisms, congestion control is handled by
either the source or the destination. We give a brief list of policies that can
prevent congestion.

Retransmission Policy. Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. Retransmission in general may increase congestion in the network. However, a good retransmission policy can prevent congestion. The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion.

18.50
Congestion Control

Open-loop congestion control


Window Policy The type of window at the sender may also affect
congestion. The Selective Repeat window is better than the Go-Back-N
window for congestion control (Discussed later).

Acknowledgment Policy The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion. Several approaches are used in this case. A receiver may send an acknowledgment only if it has a packet to be sent or a special timer expires. A receiver may decide to acknowledge only N packets at a time. We need to know that the acknowledgments are also part of the load in a network. Sending fewer acknowledgments means imposing less load on the network.

18.51
Congestion Control

Open-loop congestion control


Discarding Policy A good discarding policy by the routers may prevent
congestion and at the same time may not harm the integrity of the
transmission. For example, in audio transmission, if the policy is to discard
less sensitive packets when congestion is likely to happen, the quality of
sound is still preserved and congestion is prevented or alleviated.

Admission Policy An admission policy, which is a quality-of-service mechanism (discussed in Chapter 30), can also prevent congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before admitting it to the network. A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.

18.52
Congestion Control

Closed-loop congestion control


Closed-loop congestion control mechanisms try to alleviate congestion
after it happens. Several mechanisms have been used by different
protocols. We describe a few of them here.

Backpressure The technique of backpressure refers to a congestion control mechanism in which a congested node stops receiving data from the immediate upstream node or nodes. This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream node or nodes, and so on. Backpressure is a node-to-node congestion control that starts with a node and propagates, in the opposite direction of data flow, to the source. The backpressure technique can be applied only to virtual-circuit networks, in which each node knows the upstream node from which a flow of data is coming.

18.53
Congestion Control

Closed-loop congestion control--Backpressure


Figure 18.14 shows the idea of backpressure.

Node III in the figure has more input data than it can handle. It drops some
packets in its input buffer and informs node II to slow down. Node II, in
turn, may be congested because it is slowing down the output flow of data.
If node II is congested, it informs node I to slow down, which in turn may
create congestion. If so, node I informs the source of data to slow down.
This, in time, alleviates the congestion. Note that the pressure on node III is
moved backward to the source to remove the congestion.

18.54
Congestion Control

Closed-loop congestion control--Backpressure

It is important to stress that this type of congestion control can only be implemented in virtual-circuit networks. The technique cannot be implemented in a datagram network, in which a node (router) does not have the slightest knowledge of the upstream router.

18.55
Congestion Control

Closed-loop congestion control


Choke Packet A choke packet is a packet sent by a node to the source to
inform it of congestion. Note the difference between the backpressure and
choke-packet methods. In backpressure, the warning is from one node to its
upstream node, although the warning may eventually reach the source
station. In the choke-packet method, the warning is from the router, which
has encountered congestion, directly to the source station. The intermediate
nodes through which the packet has traveled are not warned. We will see
an example of this type of control in ICMP (discussed in Chapter 19).
When a router in the Internet is overwhelmed with IP datagrams, it may
discard some of them, but it informs the source host, using a source quench
ICMP message. The warning message goes directly to the source station;
the intermediate routers do not take any action. Figure 18.15 shows the idea
of a choke packet.

18.56
Congestion Control

Closed-loop congestion control


Choke Packet
Figure 18.15 shows the idea of a choke packet.

18.57
Congestion Control

Closed-loop congestion control


Implicit Signaling In implicit signaling, there is no communication
between the congested node or nodes and the source. The source guesses
that there is congestion somewhere in the network from other symptoms.
For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is
congested. The delay in receiving an acknowledgment is interpreted as
congestion in the network; the source should slow down. We will see this type of signaling when we discuss TCP congestion control in Chapter 24.

Explicit Signaling The node that experiences congestion can explicitly send a signal to the source or destination. The explicit-signaling method, however, is different from the choke-packet method. In the choke-packet method, a separate packet is used for this purpose; in the explicit-signaling method, the signal is included in the packets that carry data.

18.58
18-4 IPv4 ADDRESSES

As we discussed in Chapter 2, communication at the network layer is host-to-host (computer-to-computer); a computer somewhere in the world needs to communicate with another computer somewhere else in the world. Usually, computers communicate through the Internet. The packet transmitted by the sending computer may pass through several LANs or WANs before reaching the destination computer.

For this level of communication, we need a global addressing scheme; we called this logical addressing in Chapter 2. Today, we use the term IP address to mean a logical address in the network layer of the TCP/IP protocol suite.

18.59
IPv4 address

An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a host or a router to the Internet. The IP address is the address of the connection, not the host or the router, because if the device is moved to another network, the IP address may be changed.

IPv4 addresses are unique in the sense that each address defines one, and only one, connection to the Internet. If a device has two connections to the Internet, via two networks, it has two IPv4 addresses. IPv4 addresses are universal in the sense that the addressing system must be accepted by any host that wants to be connected to the Internet.

18.60
18.4.1 Address Space

A protocol like IPv4 that defines addresses has an address space. An address space is the total number of addresses used by the protocol. If a protocol uses b bits to define an address, the address space is 2^b because each bit can have two different values (0 or 1). IPv4 uses 32-bit addresses, which means that the address space is 2^32, or 4,294,967,296 (more than four billion). If there were no restrictions, more than 4 billion devices could be connected to the Internet.

18.61
18.4.1 Address Space
Notations
There are three common notations to show an IPv4 address: binary notation
(base 2), dotted-decimal notation (base 256), and hexadecimal notation (base
16).

In binary notation, an IPv4 address is displayed as 32 bits. To make the address more readable, one or more spaces are usually inserted between each octet (8 bits). Each octet is often referred to as a byte.

To make the IPv4 address more compact and easier to read, it is usually
written in decimal form with a decimal point (dot) separating the bytes. This
format is referred to as dotted-decimal notation. Note that because each byte
(octet) is only 8 bits, each number in the dotted-decimal notation is between 0
and 255.

We sometimes see an IPv4 address in hexadecimal notation. Each hexadecimal digit is equivalent to four bits. This means that a 32-bit address has 8 hexadecimal digits. This notation is often used in network programming.

18.62
Notations
Figure 18.16 shows an IP address in the three discussed notations.

18.63
18.4.1 Address Space
Hierarchy in Addressing
A 32-bit IPv4 address is also hierarchical, but divided only into two parts. The
first part of the address, called the prefix, defines the network; the second part
of the address, called the suffix, defines the node (connection of a device to the
Internet). Figure 18.17 shows the prefix and suffix of a 32-bit IPv4 address.
The prefix length is n bits and the suffix length is (32 - n) bits.

18.64
18.4.1 Address Space
Hierarchy in Addressing
A prefix can be fixed length or variable length. The network identifier in the
IPv4 was first designed as a fixed-length prefix. This scheme, which is now
obsolete, is referred to as classful addressing. The new scheme, which is
referred to as classless addressing, uses a variable-length network prefix. First,
we briefly discuss classful addressing; then we concentrate on classless
addressing.

18.65
18.4.2 Classful Addressing

When the Internet started, an IPv4 address was designed with a fixed-
length prefix, but to accommodate both small and large networks, three
fixed-length prefixes were designed instead of one (n = 8, n = 16, and n =
24). The whole address space was divided into five classes (class A, B, C,
D, and E), as shown in Figure 18.18. This scheme is referred to as classful
addressing. Although classful addressing belongs to the past, it helps us to
understand classless addressing, discussed later.

18.66
Classful Addressing

18.67
18.4.3 Classless Addressing

With the growth of the Internet, it was clear that a larger address space was needed as a long-term solution. The larger address space, however, requires that the length of IP addresses also be increased, which means the format of the IP packets needs to be changed. Although the long-range solution has already been devised and is called IPv6, a short-term solution was also devised to use the same address space but to change the distribution of addresses to provide a fair share to each organization. The short-term solution still uses IPv4 addresses, but it is called classless addressing.
18.68
Classless Addressing
In classless addressing, the whole address space is divided into variable-length blocks. The prefix in an address defines the block (network); the suffix defines the node (device). Theoretically, we can have a block of 2^0, 2^1, 2^2, . . . , 2^32 addresses. One of the restrictions, as we discuss later, is that the number of addresses in a block needs to be a power of 2. An organization can be granted one block of addresses.

Figure 18.19 shows the division of the whole address space into
nonoverlapping blocks.

18.69
Classless Addressing
Unlike classful addressing, the prefix length in classless
addressing is variable. We can have a prefix length that ranges
from 0 to 32. The size of the network is inversely
proportional to the length of the prefix. A small prefix means a
larger network; a large prefix means a smaller network.

18.70
Classless Addressing
Prefix length: Slash Notation
The first question that we need to answer in classless addressing is how to
find the prefix length if an address is given. Since the prefix length is not
inherent in the address, we need to separately give the length of the prefix.
In this case, the prefix length, n, is added to the address, separated by a
slash. The notation is informally referred to as slash notation and formally
as classless interdomain routing or CIDR (pronounced cider) strategy. An
address in classless addressing can then be represented as shown in Figure
18.20.

18.71
Classless Addressing
Extracting information from an address
Given any address in the block, we normally like to know three pieces of
information about the block to which the address belongs: the number of
addresses, the first address in the block, and the last address. Since the
value of prefix length, n, is given, we can easily find these three pieces of
information, as shown in Figure 18.21.

18.72
Example 18.1
A classless address is given as 167.199.170.82/27. We can find the above three pieces of information as follows. The number of addresses in the network is 2^(32−n) = 2^5 = 32 addresses. The first address can be found by keeping the first 27 bits and changing the rest of the bits to 0s.

The last address can be found by keeping the first 27 bits


and changing the rest of the bits to 1s.
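
As a small illustration with Python's standard ipaddress module (not part of the example itself), the same three pieces of information can be computed directly:

import ipaddress

# Example 18.1: the block that contains 167.199.170.82/27.
block = ipaddress.ip_network("167.199.170.82/27", strict=False)

print(block.num_addresses)       # 32 addresses, i.e. 2^(32 - 27)
print(block.network_address)     # first address: 167.199.170.64
print(block.broadcast_address)   # last address:  167.199.170.95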

18.73
Classless Addressing
Address Mask

18.74
Example 18.2
We repeat Example 18.1 using the mask. The mask in dotted-decimal notation is 255.255.255.224. The AND, OR, and NOT operations can be applied to individual bytes using calculators and applets at the book website.
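
The same result can be sketched with bitwise operations on the 32-bit address and the /27 mask (an illustration, not taken from the slides):

import ipaddress

addr = int(ipaddress.ip_address("167.199.170.82"))
mask = int(ipaddress.ip_address("255.255.255.224"))    # the /27 mask

first = addr & mask                  # AND with the mask gives the first address
last  = addr | (~mask & 0xFFFFFFFF)  # OR with the complement gives the last address

print(ipaddress.ip_address(first))   # 167.199.170.64
print(ipaddress.ip_address(last))    # 167.199.170.95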

18.75
Example 18.3
In classless addressing, an address cannot per se define the
block the address belongs to. For example, the address
230.8.24.56 can belong to many blocks. Some of them are
shown below with the value of the prefix associated with
that block.

18.76
Classless Addressing
Network address
The above examples show that, given any address, we can find all
information about the block. The first address, the network address, is
particularly important because it is used in routing a packet to its
destination network. For the moment, let us assume that an internet is made
of m networks and a router with m interfaces. When a packet arrives at the
router from any source host, the router needs to know to which network
the packet should be sent: from which interface the packet should be sent
out. When the packet arrives at the network, it reaches its destination host
using another strategy that we discuss later.

18.77
Classless Addressing
Network address
Figure 18.22 shows the idea. After the network address has been found, the
router consults its forwarding table to find the corresponding interface from
which the packet should be sent out. The network address is actually the
identifier of the network; each network is identified by its network address.

18.78
Classless Addressing
Block allocation
The next issue in classless addressing is block allocation. How are the
blocks allocated? The ultimate responsibility of block allocation is given to
a global authority called the Internet Corporation for Assigned Names and
Numbers (ICANN). However, ICANN does not normally allocate
addresses to individual Internet users. It assigns a large block of addresses
to an ISP (or a larger organization that is considered an ISP in this case).
For the proper operation of the CIDR, two restrictions need to be applied to the allocated block.

18.79
Example 18.4
An ISP has requested a block of 1000 addresses. Since 1000
is not a power of 2, 1024 addresses are granted. The prefix
length is calculated as n = 32 − log2(1024) = 22. An available
block, 18.14.12.0/22, is granted to the ISP. It can be seen
that the first address in decimal is 302,910,464, which is
divisible by 1024.
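
A quick sketch checking the arithmetic in this example (my own illustration):

import ipaddress, math

n = 32 - int(math.log2(1024))                     # prefix length: 22
first = int(ipaddress.ip_address("18.14.12.0"))   # the first address as an integer

print(n)                  # 22
print(first)              # 302910464
print(first % 1024 == 0)  # True: the block starts on a 1024-address boundary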

18.80
Classless Addressing
Subnetting
More levels of hierarchy can be created using subnetting. An organization
(or an ISP) that is granted a range of addresses may divide the range into
several subranges and assign each subrange to a subnetwork (or subnet).
Note that nothing stops the organization from creating more levels. A
subnetwork can be divided into several sub-subnetworks. A sub-
subnetwork can be divided into several sub-sub-subnetworks, and so on.

18.81
Classless Addressing
Designing subnets

18.82
Example 18.5
An organization is granted a block of addresses with the
beginning address 14.24.74.0/24. The organization needs to
have 3 subblocks of addresses to use in its three subnets:
one subblock of 10 addresses, one subblock of 60 addresses,
and one subblock of 120 addresses. Design the subblocks.

Solution
There are 2^(32−24) = 256 addresses in this block. The first
address is 14.24.74.0/24; the last address is 14.24.74.255/24.
To satisfy the third requirement, we assign addresses to
subblocks, starting with the largest and ending with the
smallest one.

18.83
Example 18.5 (continued)
a. The number of addresses in the largest subblock, which
requires 120 addresses, is not a power of 2. We allocate 128
addresses. The subnet mask for this subnet can be found as
n1 = 32 − log2 128 = 25. The first address in this block is
14.24.74.0/25; the last address is 14.24.74.127/25.

b. The number of addresses in the second largest subblock,


which requires 60 addresses, is not a power of 2 either. We
allocate 64 addresses. The subnet mask for this subnet can
be found as n2 = 32 − log2 64 = 26. The first address in this
block is 14.24.74.128/26; the last address is
14.24.74.191/26.

18.84
Example 18.5 (continued)
c. The number of addresses in the smallest subblock, which requires 10 addresses, is not a power of 2. We allocate 16 addresses. The subnet mask for this subnet can be found as n3 = 32 − log2 16 = 28. The first address in this block is 14.24.74.192/28; the last address is 14.24.74.207/28.

If we add all addresses in the previous subblocks, the result


is 208 addresses, which means 48 addresses are left in
reserve. The first address in this range is 14.24.74.208. The
last address is 14.24.74.255. We don’t know about the
prefix length yet. Figure 18.23 shows the configuration of
blocks. We have shown the first address in each block.
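
A sketch of this allocation with the ipaddress module (an illustration; the requested sizes 120, 60, and 10 are the ones in the example):

import ipaddress, math

block = ipaddress.ip_network("14.24.74.0/24")
needs = [120, 60, 10]                        # requested sizes, largest first
start = int(block.network_address)

for need in needs:
    size = 2 ** math.ceil(math.log2(need))   # round up to a power of 2
    prefix = 32 - int(math.log2(size))       # 25, 26, 28
    subnet = ipaddress.ip_network((start, prefix))
    print(subnet, "->", subnet.num_addresses, "addresses")
    start += size

print("left in reserve:", int(block.broadcast_address) - start + 1)   # 48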

18.85
Figure 18.23: Solution to Example 18.5

18.86
Classless Addressing
Address aggregation
One of the advantages of the CIDR strategy is address
aggregation (sometimes called address summarization or
route summarization). When blocks of addresses are combined
to create a larger block, routing can be done based on the
prefix of the larger block. ICANN assigns a large block of
addresses to an ISP. Each ISP in turn divides its assigned block
into smaller subblocks and grants the subblocks to its
customers.

18.87
Example 18.6
Figure 18.24 shows how four small blocks of addresses are
assigned to four organizations by an ISP. The ISP combines
these four blocks into one single block and advertises the
larger block to the rest of the world. Any packet destined for
this larger block should be sent to this ISP. It is the
responsibility of the ISP to forward the packet to the
appropriate organization. This is similar to routing we can
find in a postal network. All packages coming from outside
a country are sent first to the capital and then distributed to
the corresponding destination.

18.88
Figure 18.24: Example of address aggregation

18.89
Classless Addressing
Special Addresses
Before finishing the topic of addresses in IPv4, we need to
mention five special addresses that are used for special
purposes: this-host address, limited-broadcast address,
loopback address, private addresses, and multicast addresses.

18.90
Classless Addressing
Special Addresses

18.91
18.4.4 DHCP
We have seen that a large organization or an ISP can receive a block of
addresses directly from ICANN and a small organization can receive a
block of addresses from an ISP. After a block of addresses is assigned to an organization, the network administrator can manually assign addresses
to the individual hosts or routers. However, address assignment in an
organization can be done automatically using the Dynamic Host
Configuration Protocol (DHCP). DHCP is an application-layer program,
using the client-server paradigm, that actually helps TCP/IP at the network
layer.

DHCP can be used in many situations. A network manager can configure DHCP to assign permanent IP addresses to the hosts and routers. DHCP can also be configured to provide temporary, on-demand IP addresses to hosts. The second capability can provide a temporary IP address to a traveler who wants to connect her laptop to the Internet while she is staying in a hotel. It also allows an ISP with 1000 granted addresses to provide services to 4000 households, assuming no more than one-fourth of the customers use the Internet at the same time.

18.92
DHCP

In addition to its IP address, a computer also needs to know the network prefix (or address mask). Most computers also need two other pieces of information: the address of a default router, to be able to communicate with other networks, and the address of a name server, to be able to use names instead of addresses, as we will see in Chapter 26. In other words, four pieces of information are normally needed: the computer address, the prefix, the address of a router, and the IP address of a name server. DHCP can be used to provide these pieces of information to the host.

18.93
DHCP

DHCP message format


DHCP is a client-server protocol in which the client sends a request
message and the server returns a response message. Before we discuss the
operation of DHCP, let us show the general format of the DHCP message
in Figure 18.25. Most of the fields are explained in the figure, but we need
to discuss the option field, which plays a very important role in DHCP.

18.94
DHCP
DHCP message format
The 64-byte option field has a dual purpose. It can carry either additional
information or some specific vendor information. The server uses a
number, called a magic cookie, in the format of an IP address with the
value of 99.130.83.99. When the client finishes reading the message, it
looks for this magic cookie. If present, the next 60 bytes are options. An
option is composed of three fields: a 1-byte tag field, a 1-byte length field,
and a variable-length value field. There are several tag fields that are
mostly used by vendors. If the tag field is 53, the value field defines one of
the 8 message types shown in Figure 18.26. We show how these message
types are used by DHCP.
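
As a sketch of how a client might scan this option field (an illustration; the end-of-options tag 255 and the DHCPDISCOVER value 1 come from the DHCP standard, not from the slide):

MAGIC_COOKIE = bytes([99, 130, 83, 99])    # 99.130.83.99 as four bytes

def parse_options(options: bytes) -> dict:
    # After the magic cookie, options are (1-byte tag, 1-byte length, value).
    if not options.startswith(MAGIC_COOKIE):
        return {}                          # no cookie, nothing to read
    result, i = {}, len(MAGIC_COOKIE)
    while i < len(options) and options[i] != 255:   # 255 marks the end
        tag, length = options[i], options[i + 1]
        result[tag] = options[i + 2 : i + 2 + length]
        i += 2 + length
    return result

# Tag 53 with value 1 would be a DHCPDISCOVER message.
opts = parse_options(MAGIC_COOKIE + bytes([53, 1, 1, 255]))
print(opts[53][0])   # 1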

18.95
Figure 18.27 shows a simple scenario

18.96
DHCP
DHCP operation

18.97
DHCP
DHCP operation

18.98
DHCP
Two well-known ports
We said that the DHCP uses two well-known ports (68 and 67) instead of
one well-known and one ephemeral. The reason for choosing the well-
known port 68 instead of an ephemeral port for the client is that the
response from the server to the client is broadcast. Remember that an IP
datagram with the limited broadcast message is delivered to every host on
the network. Now assume that a DHCP client and a DAYTIME client, for
example, are both waiting to receive a response from their corresponding
server and both have accidentally used the same temporary port number
(56017, for example). Both hosts receive the response message from the
DHCP server and deliver the message to their clients. The DHCP client
processes the message; the DAYTIME client is totally confused
with a strange message received. Using a well-known port number prevents
this problem from happening. The response message from the DHCP
server is not delivered to the DAYTIME client, which is running on the
port number 56017, not 68. The temporary port numbers are selected from
a different range than the well-known port numbers.

18.99
DHCP
Two well-known ports
You may ask what happens if two DHCP clients are running at the
same time. This can happen after a power failure and power restoration. In
this case the messages can be distinguished by the value of the transaction
ID, which separates each response from the other.

18.100
DHCP
Using FTP
The server does not send all of the information that a client may need for
joining the network. In the DHCPACK message, the server defines the
pathname of a file in which the client can find complete information such
as the address of the DNS server. The client can then use a file transfer
protocol to obtain the rest of the needed information.

18.101
DHCP
Error control
DHCP uses the service of UDP, which is not reliable. To provide error
control, DHCP uses two strategies. First, DHCP requires that UDP use the
checksum. As we will see in Chapter 24, the use of the checksum in UDP is
optional. Second, the DHCP client uses timers and a retransmission policy
if it does not receive the DHCP reply to a request. However, to prevent a
traffic jam when several hosts need to retransmit a request (for example,
after a power failure), DHCP forces the client to use a random number to
set its timers.

18.102
DHCP
Transition states
The previous scenarios we discussed for the operation of the DHCP were
very simple. To provide dynamic address allocation, the DHCP client acts
as a state machine that performs transitions from one state to another
depending on the messages it receives or sends. Figure 18.28 shows the
transition diagram with the main states.

18.103
DHCP
Transition states
When the DHCP client first starts, it is in the INIT state (initializing state).
The client broadcasts a discover message.

When it receives an offer, the client goes to the SELECTING state. While
it is there, it may receive more offers.

After it selects an offer, it sends a request message and goes to the REQUESTING state.

If an ACK arrives while the client is in this state, it goes to the BOUND
state and uses the IP address.

18.104
DHCP
Transition states
When the lease is 50 percent expired, the client tries to renew it by moving
to the RENEWING state. If the server renews the lease, the client moves to
the BOUND state again.

If the lease is not renewed and the lease time is 75 percent expired, the
client moves to the REBINDING state. If the server agrees with the lease
(ACK message arrives), the client moves to the BOUND state and
continues using the IP address; otherwise, the client moves to the INIT
state and requests another IP address.

Note that the client can use the IP address only when it is in the BOUND,
RENEWING, or REBINDING state.

The above procedure requires that the client uses three timers: renewal
timer (set to 50 percent of the lease time), rebinding timer (set to 75 percent
of the lease time), and expiration timer (set to the lease time).
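
A compact sketch of the timer thresholds that drive these transitions (my illustration; the 50 percent and 75 percent figures are the ones given above):

def next_state(state: str, fraction_of_lease_elapsed: float) -> str:
    # Only decides the transitions driven by the three timers.
    if state not in ("BOUND", "RENEWING", "REBINDING"):
        return state                  # the address is not usable in other states
    if fraction_of_lease_elapsed >= 1.0:
        return "INIT"                 # expiration timer fired
    if fraction_of_lease_elapsed >= 0.75:
        return "REBINDING"            # rebinding timer fired
    if fraction_of_lease_elapsed >= 0.50:
        return "RENEWING"             # renewal timer fired
    return "BOUND"

print(next_state("BOUND", 0.6))   # RENEWING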

18.105
18.4.5 NAT

In most situations, however, only a portion of the computers in a small network need access to the Internet simultaneously. This means that the number of allocated addresses does not have to match the number of computers in the network.

For example, assume that in a small business with 20 computers the maximum number of computers that access the Internet simultaneously is only 4. Most of the computers are either doing some task that does not need Internet access or communicating with each other. This small business can use the TCP/IP protocol for both internal and universal communication. The business can use 20 (or 25) addresses from the private block addresses (discussed before) for internal communication; five addresses for universal communication can be assigned by the ISP.

18.106
NAT
A technology that can provide the mapping between the private and
universal addresses, and at the same time support virtual private networks,
is Network Address Translation (NAT). The technology allows a site to
use a set of private addresses for internal communication and a set of global
Internet addresses (at least one) for communication with the rest of the
world. The site must have only one connection to the global Internet
through a NAT-capable router that runs NAT software. Figure 18.29 shows
a simple implementation of NAT.

18.107
NAT

As the figure shows, the private network uses private addresses. The router that connects the network to the global Internet uses one private address and one global address. The private network is invisible to the rest of the Internet; the rest of the Internet sees only the NAT router with the address 200.24.5.8.

18.108
NAT
Address translation
All of the outgoing packets go through the NAT router, which replaces the
source address in the packet with the global NAT address. All incoming
packets also pass through the NAT router, which replaces the destination
address in the packet (the NAT router global address) with the appropriate
private address. Figure 18.30 shows an example of address translation.

Translating the source addresses for an outgoing packet is straightforward.


But how does the NAT router know the destination address for a packet
coming from the Internet? There may be tens or hundreds of private IP
addresses, each belonging to one specific host. The problem is solved if the
NAT router has a translation table.
18.109
NAT
Using one IP address
In its simplest form, a translation table has only two columns: the private
address and the external address (destination address of the packet). When
the router translates the source address of the outgoing packet, it also
makes note of the destination address— where the packet is going. When
the response comes back from the destination, the router uses the source
address of the packet (as the external address) to find the private address of
the packet. Figure 18.31 shows the idea.
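
A minimal sketch of this two-column table (an illustration; the private and external addresses are in the style of the figures in this section):

NAT_GLOBAL = "200.24.5.8"
translation_table = {}            # external address -> private address

def outgoing(packet):
    # Replace the private source with the global NAT address and remember
    # which external host this private host contacted.
    translation_table[packet["dst"]] = packet["src"]
    packet["src"] = NAT_GLOBAL
    return packet

def incoming(packet):
    # Use the source of the reply (the external address) to recover
    # the private destination.
    packet["dst"] = translation_table[packet["src"]]
    return packet

outgoing({"src": "172.18.3.1", "dst": "25.8.3.2"})
print(incoming({"src": "25.8.3.2", "dst": NAT_GLOBAL}))   # dst is 172.18.3.1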

18.110
NAT
Using a pool of IP addresses
The use of only one global address by the NAT router allows only one
private-network host to access a given external host. To remove this
restriction, the NAT router can use a pool of global addresses.

For example, instead of using only one global address (200.24.5.8), the
NAT router can use four addresses (200.24.5.8, 200.24.5.9, 200.24.5.10,
and 200.24.5.11). In this case, four private-network hosts can communicate
with the same external host at the same time because each pair of addresses
defines a separate connection.

However, there are still some drawbacks. For example, two private-network hosts cannot access the same external server program (e.g., HTTP or TELNET) at the same time. The remedy is to use both IP addresses and port addresses, as discussed next.

18.111
NAT
Using both IP addresses and port addresses
To address the previous drawback, we need more information in the
translation table. For example, suppose two hosts inside a private network
with addresses 172.18.3.1 and 172.18.3.2 need to access the HTTP server
on external host 25.8.3.2. If the translation table has five columns, instead
of two, that include the source and destination port addresses and the
transport-layer protocol, the ambiguity is eliminated. Table 18.1 shows an
example of such a table.

18.112
NAT
Using both IP addresses and port addresses
Note that when the response from HTTP comes back, the combination of
source address (25.8.3.2) and destination port address (1401) defines the
private network host to which the response should be directed. Note also
that for this translation to work, the ephemeral port addresses (1400 and
1401) must be unique.
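
A brief sketch of such a five-column table and the lookup described above (the addresses and port numbers are the ones in the text; port 80 for HTTP and the row layout are my assumptions):

# (private address, private port, external address, external port, protocol)
nat_rows = [
    ("172.18.3.1", 1400, "25.8.3.2", 80, "TCP"),
    ("172.18.3.2", 1401, "25.8.3.2", 80, "TCP"),
]

def private_host_for(reply_source, reply_destination_port):
    # The reply's source address plus its destination port pick a unique row.
    for priv_addr, priv_port, ext_addr, ext_port, proto in nat_rows:
        if ext_addr == reply_source and priv_port == reply_destination_port:
            return priv_addr

print(private_host_for("25.8.3.2", 1401))   # 172.18.3.2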

18.113
18-5 FORWARDING OF IP PACKETS

We discussed the concept of forwarding at the network layer earlier in this chapter. In this section, we extend the concept to include the role of IP addresses in forwarding. As we discussed before, forwarding means to place the packet in its route to its destination. Since the Internet today is made of a combination of links (networks), forwarding means to deliver the packet to the next hop (which can be the final destination or the intermediate connecting device).

When IP is used as a connectionless protocol, forwarding is based on the destination address of the IP datagram; when the IP is used as a connection-oriented protocol, forwarding is based on the label attached to an IP datagram.
18.114
18.5.1 Destination Address Forwarding

We first discuss forwarding based on the destination address. This is a traditional approach, which is prevalent today. In this case, forwarding requires a host or a router to have a forwarding table. When a host has a packet to send or when a router has received a packet to be forwarded, it looks at this table to find the next hop to deliver the packet to.

18.115
Destination Address Forwarding

A classless forwarding table needs to include four pieces of information: the mask, the network address, the interface number, and the IP address of the next router (needed to find the link-layer address of the next hop, as we will learn in Chapter 9). However, we often see in the literature that the first two pieces are combined. For example, if n is 26 and the network address is 180.70.65.192, then one can combine the two as one piece of information: 180.70.65.192/26. Figure 18.32 shows a simple forwarding module and forwarding table for a router with only three interfaces.

18.116
Destination Address Forwarding

The job of the forwarding module is to search the table, row by row.

In each row, the n leftmost bits of the destination address (prefix) are kept
and the rest of the bits (suffix) are set to 0s.

If the resulting address (which we call the network address) matches the address in the first column, the information in the next two columns is extracted; otherwise the search continues.

Normally, the last row has a default value in the first column (not shown in
the figure), which indicates all destination addresses that did not match the
previous rows.
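
A sketch of this row-by-row search (the rows below are illustrative, not the full table of Figure 18.32; they are kept sorted from the longest prefix to the shortest, as discussed under longest mask matching later in this section):

import ipaddress

table = [
    (ipaddress.ip_network("180.70.65.192/26"), "m2"),
    (ipaddress.ip_network("180.70.65.128/25"), "m0"),
    (ipaddress.ip_network("0.0.0.0/0"),        "m1"),   # default row
]

def forward(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    for network, interface in table:
        # Membership testing applies the row's mask to the destination and
        # compares the result with the row's network address.
        if dest in network:
            return interface

print(forward("180.70.65.140"))   # m0: the /26 row fails, the /25 row matches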

18.117
Destination Address Forwarding

Example 18.7: Make a forwarding table for router R1 using the configuration in Figure 18.33.

18.119
Destination Address Forwarding

Example 18.7--solution

18.120
Destination Address Forwarding

Example 18.9: Show the forwarding process if a packet arrives at R1 in Figure 18.33 with the destination address 180.70.65.140.

Solution
The router performs the following steps:
1. The first mask (/26) is applied to the destination address. The result is
180.70.65.128, which does not match the corresponding network address.
2. The second mask (/25) is applied to the destination address. The result is
180.70.65.128, which matches the corresponding network address. The next-hop
address and the interface number m0 are extracted for forwarding the packet.
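
The two mask applications in this solution can be reproduced directly (a small illustration):

import ipaddress

dest = "180.70.65.140"
# Step 1: the /26 mask gives 180.70.65.128, which does not equal that row's
#         network address, so the row does not match.
print(ipaddress.ip_network(dest + "/26", strict=False).network_address)
# Step 2: the /25 mask gives 180.70.65.128, which matches the /25 row,
#         so the packet is sent out from interface m0.
print(ipaddress.ip_network(dest + "/25", strict=False).network_address)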

18.121
Destination Address Forwarding
Address aggregation
Using address aggregation, the number of forwarding table entries can be
reduced.

For example, R1 is connected to networks of four organizations that each use 64 addresses. R2 is somewhere far from R1.

18.122
Destination Address Forwarding
Address aggregation
R1 has a longer forwarding table because each packet must be
correctly routed to the appropriate organization. R2, on the other hand, can
have a very small forwarding table. For R2, any packet with destination
140.24.7.0 to 140.24.7.255 is sent out from interface m0 regardless of the
organization number. This is called address aggregation because the blocks
of addresses for four organizations are aggregated into one larger block. R2
would have a longer forwarding table if each organization had addresses
that could not be aggregated into one block.

18.123
Destination Address Forwarding
Longest mask matching
What happens if one of the organizations in the previous figure is not
geographically close to the other three? For example, if organization 4
cannot be connected to router R1 for some reason, can we still use the idea
of address aggregation and still assign block 140.24.7.192/26 to
organization 4? The answer is yes, because routing in classless addressing
uses another principle, longest mask matching. This principle states
that the forwarding table is sorted from the longest mask to the shortest
mask. In other words, if there are three masks, /27, /26, and /24, the mask /27 must be the first entry and /24 must be the last. Let us see if this
principle solves the situation in which organization 4 is separated from the
other three organizations.

Figure 18.35 shows the situation.

18.124
Destination Address Forwarding
Longest mask matching
Suppose a packet arrives at router R2 for organization 4 with destination
address 140.24.7.200. The first mask at router R2 is applied, which gives
the network address 140.24.7.192. The packet is routed correctly from
interface m1 and reaches organization 4. If, however, the forwarding table
was not stored with the longest prefix first, applying the /24 mask would
result in the incorrect routing of the packet to router R1.

18.125
Destination Address Forwarding
Hierarchical routing
To solve the problem of gigantic forwarding tables, we can create a sense
of hierarchy in the forwarding tables.

Let us take the case of a local ISP. A local ISP can be assigned a single, but
large, block of addresses with a certain prefix length. The local ISP can
divide this block into smaller blocks of different sizes, and assign these to
individual users and organizations, both large and small. If the block
assigned to the local ISP starts with a.b.c.d/n, the ISP can create
blocks starting with e.f.g.h/m, where m may vary for each customer and is
greater than n.

18.126
Destination Address Forwarding
Hierarchical routing
How does this reduce the size of the forwarding table? The rest of the
Internet does not have to be aware of this division. All customers of the
local ISP are defined as a.b.c.d/n to the rest of the Internet. Every packet
destined for one of the addresses in this large block is routed to the local
ISP. There is only one entry in every router in the world for all of these
customers. They all belong to the same group. Of course, inside the local
ISP, the router must recognize the subblocks and route the packet to the
destined customer. If one of the customers is a large organization, it also
can create another level of hierarchy by subnetting and dividing its
subblock into smaller subblocks (or sub-subblocks). In classless routing,
the levels of hierarchy are unlimited as long as we follow the rules of
classless addressing.

18.127
Destination Address Forwarding
Hierarchical routing
Example 18.10: As an example of hierarchical routing, let us consider
Figure 18.36. A regional ISP is granted 16,384 addresses starting from
120.14.64.0. The regional ISP has decided to divide this block into four
subblocks, each with 4096 addresses. Three of these subblocks are assigned
to three local ISPs; the second subblock is reserved for future use. Note that
the mask for each block is /20 because the original block with mask /18 is
divided into 4 blocks.

18.128
Destination Address Forwarding
Example 18.10: The first local ISP has divided its assigned subblock into 8
smaller blocks and assigned each to a small ISP. Each small ISP provides
services to 128 households (H001 to H128), each using four addresses.
Note that the mask for each small ISP is now /23 because the block is
further divided into 8 blocks. Each household has a mask of /30, because a household has only 4 addresses (2^(32−30) = 4). The second local ISP has
divided its block into 4 blocks and has assigned the addresses to 4 large
organizations (LOrg01 to LOrg04). Note that each large organization has
1024 addresses and the mask is /22.

18.129
Destination Address Forwarding
Example 18.10: The third local ISP has divided its block into 16 blocks
and assigned each block to a small organization (SOrg01 to SOrg16). Each
small organization has 256 addresses and the mask is /24. There is a sense
of hierarchy in this configuration. All routers in the Internet send a packet
with destination address 120.14.64.0 to 120.14.127.255 to the regional ISP.
The regional ISP sends every packet with destination address 120.14.64.0
to 120.14.79.255 to Local ISP1. Local ISP1 sends every packet with
destination address 120.14.64.0 to 120.14.64.3 to H001.
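
A quick sketch of the prefix-length arithmetic used throughout this example (my illustration):

import math

def prefix_for(block_size: int) -> int:
    # Prefix length n such that 2^(32 - n) equals the block size.
    return 32 - int(math.log2(block_size))

print(prefix_for(16384))   # /18  regional ISP
print(prefix_for(4096))    # /20  each local ISP
print(prefix_for(512))     # /23  each small ISP (4096 / 8)
print(prefix_for(4))       # /30  each household
print(prefix_for(1024))    # /22  each large organization
print(prefix_for(256))     # /24  each small organization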

18.130
