Transport Layer


Unit 3 Transport Layer

Working of Transport Layer

The transport layer takes services from the Network layer and provides services
to the Application layer.

At the sender’s side: The transport layer receives data (a message) from the
Application layer, performs segmentation by dividing the message into segments,
adds the source and destination port numbers to the header of each segment, and
passes the segments to the Network layer.

At the receiver’s side: The transport layer receives the segments from the Network
layer, reassembles the segmented data, reads the header, identifies the port number,
and forwards the message to the appropriate process in the Application layer.

Responsibilities of a Transport Layer


 The Process to Process Delivery
 End-to-End Connection between Hosts
 Multiplexing and Demultiplexing
 Congestion Control
 Data integrity and Error correction
 Flow control

1. The Process to Process Delivery

While Data Link Layer requires the MAC address (48 bits address contained inside
the Network Interface Card of every host machine) of source-destination hosts to
correctly deliver a frame and the Network layer requires the IP address for
appropriate routing of packets, in a similar way Transport Layer requires a Port
number to correctly deliver the segments of data to the correct process amongst
the multiple processes running on a particular host. A port number is a 16-bit
address used to identify any client-server program uniquely.
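As a rough illustration (not from the original notes), the Python sketch below shows a process claiming a port number so that segments addressed to that port are delivered to it; the port value 5000 and the buffer size are arbitrary assumptions.

import socket

# A process becomes reachable by binding a socket to a 16-bit port number.
# Port 5000 is an arbitrary example value, not one mandated by the notes.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("0.0.0.0", 5000))          # this process now owns port 5000

data, addr = receiver.recvfrom(2048)      # addr is (sender IP, sender port)
print("Received", data, "from", addr)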
2. End-to-end Connection between Hosts

The transport layer is also responsible for creating the end-to-end connection
between hosts, for which it mainly uses TCP and UDP. TCP is a reliable, connection-
oriented protocol that uses a handshake procedure to establish a robust connection
between two end hosts. TCP ensures the reliable delivery of messages and is used
in a wide range of applications. UDP, on the other hand, is a stateless and unreliable
protocol that provides best-effort delivery. It is suitable for applications that have
little need for flow or error control and that send bulk data, such as video
conferencing. It is often used in multicasting protocols.

3. Multiplexing and Demultiplexing

Multiplexing (many to one) occurs when data is acquired from several processes at
the sender, wrapped with headers, and sent down to the network layer over the same
link. Multiplexing allows the different processes running on a host to use the
network simultaneously. The processes are differentiated by their port numbers.
Similarly, demultiplexing (one to many) is required at the receiver’s side, where
the received data is distributed to the different processes. The transport layer
receives the segments of data from the network layer and delivers each segment to
the appropriate process running on the receiver’s machine.

4. Congestion Control

Congestion is a situation in which too many sources over a network attempt to send
data and the router buffers start overflowing, due to which loss of packets occurs.
As a result, the retransmission of packets from the sources increases the congestion
further. In this situation, the transport layer provides congestion control in
different ways: it uses open-loop congestion control to prevent congestion and
closed-loop congestion control to remove congestion from the network once it has
occurred. TCP uses AIMD (additive increase, multiplicative decrease), and traffic-
shaping techniques such as the leaky bucket are also used for congestion control.
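To make the AIMD idea concrete, here is a small, purely illustrative Python simulation (not part of the original text): the congestion window grows additively each round trip and is halved when a loss is assumed to occur. The loss rounds are invented values.

# Illustrative AIMD (additive increase, multiplicative decrease) simulation.
cwnd = 1.0                      # congestion window, in segments
loss_rounds = {6, 12}           # rounds in which a packet loss is assumed

for rtt in range(1, 16):
    if rtt in loss_rounds:
        cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on loss
    else:
        cwnd += 1.0                 # additive increase per round trip
    print(f"RTT {rtt:2d}: cwnd = {cwnd:.1f} segments")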
5. Data integrity and Error Correction

The transport layer checks for errors in the messages coming from the application
layer by using error-detection codes. By computing checksums, it checks whether the
received data is corrupted, and it uses the ACK and NACK services to inform the
sender whether or not the data has arrived intact, thereby checking the integrity
of the data.
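The 16-bit Internet checksum used for this purpose can be sketched as follows (an illustrative implementation, not taken from the notes): the data is summed in 16-bit words with end-around carry, and the one's complement of the sum is the checksum.

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as used by TCP/UDP/IP."""
    if len(data) % 2:
        data += b"\x00"                          # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF

# A receiver recomputes the checksum over the received bytes; a mismatch
# indicates corruption, so an ACK is withheld or a NACK is generated.
print(hex(internet_checksum(b"hello transport layer")))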

6. Flow Control
The transport layer provides a flow control mechanism that works end to end, between
the sending and receiving hosts, rather than across a single link. TCP prevents data
loss due to a fast sender and a slow receiver by imposing flow control techniques. It
uses the sliding window protocol, in which the receiver sends a window back to the
sender informing it of the amount of data it can receive.
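A minimal sketch of the receiver-advertised window idea follows; the window size, segment size, and message length are invented purely for illustration.

# Illustrative sliding-window flow control: the sender may only have
# "window" unacknowledged bytes outstanding at any time.
window = 4096            # bytes advertised by the receiver (assumed value)
next_seq = 0             # next byte to send
last_ack = 0             # highest byte acknowledged so far
message = b"x" * 10000   # application data to transmit

while last_ack < len(message):
    # send while the amount in flight stays within the advertised window
    while next_seq < len(message) and next_seq - last_ack < window:
        next_seq = min(len(message), next_seq + 1024)   # send one 1 KB segment
    # pretend an ACK arrives acknowledging everything sent so far,
    # possibly carrying a new window size from the receiver
    last_ack = next_seq
    window = 4096
print("all", len(message), "bytes acknowledged")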

Multiplexing-

Multiplexing is the process of collecting the data from multiple application processes
of the sender, enveloping that data with headers and sending them as a whole to the
intended receiver.

 In multiplexing at the Transport Layer, the data collected from the various
application processes is encapsulated into segments. These segments contain the
source port number, destination port number, header fields, and data.
 These segments are passed to the Network Layer, which adds the source and
destination IP addresses to form the datagram.

Demultiplexing-

Delivering the received segments at the receiver side to the correct app layer
processes is called demultiplexing.

 The destination host receives the IP datagrams; each datagram has a source
IP address and a destination IP address.
 Each datagram carries 1 transport layer segment.
 Each segment has the source and destination port number.
 The destination host uses the IP addresses and port numbers to direct the
segment to the appropriate socket.
Multiplexing and demultiplexing are the concepts that describe how data generated
by different applications is transmitted simultaneously. When the data arrives at
the Transport layer of the destination machine, each data segment is independently
processed and sent to its appropriate application.

The main objective of multiplexing and demultiplexing is to allow us to use a
multitude of applications simultaneously.

 The above figure shows that the source computer is using Google, Outlook,
and Chat applications at the same time.
 All the data is forwarded to a destination computer.
 Each application has a segment put on a wire to be transmitted. It signifies
that all applications are running simultaneously.
 Without multiplexing/demultiplexing, a user could use only one application at a
time, because only the segments of that application would be put on the wire and
transmitted. For clarification, see the figure below −
In the above figure, the Application layer has generated data, and then passed it
down to the Transport layer to be segmented.

 After segmenting the data, port numbers are given to each segment to be
ready for transmission.
 Then the segments are put on a wire to travel across the network to the
destination. This process is called "multiplexing".
 When the transmitted segments reach the Transport layer of the destination,
they are automatically sent up to their appropriate applications. This process
is called "demultiplexing".
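The following toy Python sketch (not from the original text) mimics demultiplexing: each incoming segment carries a destination port, and the transport layer hands it to the delivery queue of the application bound to that port. The port numbers, application names, and payloads are invented for illustration.

from collections import defaultdict

# Hypothetical bindings: application name -> port it listens on.
bindings = {"web_browser": 51000, "mail_client": 51001, "chat_app": 51002}
inboxes = defaultdict(list)                 # per-application delivery queues
port_to_app = {port: app for app, port in bindings.items()}

# Segments arriving from the network layer: (destination port, payload).
arriving_segments = [(51001, b"new mail"), (51000, b"<html>..."), (51002, b"hi!")]

for dest_port, payload in arriving_segments:
    app = port_to_app.get(dest_port)
    if app is None:
        continue                            # no process bound: segment dropped
    inboxes[app].append(payload)            # demultiplex to the right process

print(dict(inboxes))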

TCP

o TCP stands for Transmission Control Protocol.


o It provides full transport layer services to applications.
o It is a connection-oriented protocol, which means a connection is established
between both ends before the transmission begins. For creating the connection,
TCP generates a virtual circuit between sender and receiver for the duration of
a transmission.

Features Of TCP protocol


o Stream data transfer: The TCP protocol transfers data as a contiguous stream
of bytes. TCP itself groups the bytes into TCP segments and then passes them to
the IP layer for transmission to the destination.
o Reliability: TCP assigns a sequence number to each byte transmitted and
expects a positive acknowledgement from the receiving TCP. If ACK is not
received within a timeout interval, then the data is retransmitted to the
destination.
The receiving TCP uses the sequence number to reassemble the segments if
they arrive out of order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender
indicating the number of bytes it can receive without overflowing its internal
buffer. This number is sent in the ACK as the highest sequence number that it
can accept without any problem. This mechanism is also referred to as the
window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different
applications and forwarding it to the corresponding applications on other
computers. At the receiving end, the data is forwarded to the correct
application; this process is known as demultiplexing. TCP delivers the data
to the correct application by using logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and
window sizes, is called a logical connection. Each connection is identified by
the pair of sockets used by sending and receiving processes.
o Full Duplex: TCP provides a full-duplex service, i.e., data can flow in both
directions at the same time. To achieve this, each TCP endpoint has sending and
receiving buffers so that segments can flow in both directions. TCP is a
connection-oriented protocol. Suppose process A wants to send data to and
receive data from process B. The following steps occur:
o Establish a connection between two TCPs.
o Data is exchanged in both the directions.
o The Connection is terminated.
TCP Segment Format

The TCP segment header contains the following fields:

o Source port address: It is used to define the address of the application
program in the source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application
program in a destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments.
The 32-bit sequence number field gives the position of the segment’s data in
the original data stream.
o Acknowledgement number: The 32-bit acknowledgement number field acknowledges
the data received from the other communicating device. If the ACK flag is set
to 1, then this field specifies the sequence number that the receiver is
expecting to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words.
The minimum size of the header is 5 words, and the maximum size of the
header is 15 words. Therefore, the maximum size of the TCP header is 60 bytes,
and the minimum size of the TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and
independently. A control bit defines the use of a segment or serves as a validity
check for other fields.

There are a total of six flags in the control field:

o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH (push) field asks the receiving TCP to deliver the data to the
receiving application as soon as possible rather than buffering it.
o RST: The reset bit is used to reset (abort) the TCP connection when confusion
occurs in the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types
of segments: connection request, connection confirmation ( with the ACK bit
set ), and confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender
has finished sending data. It is used in connection termination in three types
of segments: termination request, termination confirmation, and
acknowledgement of termination confirmation.
o Window Size: The window is a 16-bit field that defines the size of the receive
window, i.e., the number of bytes the receiver is currently willing to accept.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset
from the sequence number indicating the last urgent data byte.
o Options and padding: It defines the optional fields that convey the
additional information to the receiver.
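As an illustrative exercise (not part of the original notes), the fixed 20-byte part of a TCP header can be built and unpacked with Python's struct module; the port numbers, sequence number, and flag values below are hand-made examples, not captured traffic.

import struct

# Fixed 20-byte TCP header: src port, dst port, seq, ack,
# data offset / reserved / flags, window, checksum, urgent pointer.
header = struct.pack("!HHIIHHHH",
                     12345, 80,            # source / destination ports
                     1000, 0,              # sequence / acknowledgement numbers
                     (5 << 12) | 0x002,    # HLEN = 5 words, SYN flag set
                     65535, 0, 0)          # window, checksum, urgent pointer

src, dst, seq, ack, off_flags, window, checksum, urg = struct.unpack("!HHIIHHHH", header)
hlen_words = off_flags >> 12               # header length in 32-bit words
flags = off_flags & 0x3F                   # URG, ACK, PSH, RST, SYN, FIN bits
print(src, dst, seq, ack, hlen_words * 4, "byte header, flags =", bin(flags))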

Working of TCP

In TCP, the connection is established by using three-way handshaking. The client
sends a segment with its initial sequence number. The server, in return, sends its
own segment with its own sequence number as well as an acknowledgement number,
which is one more than the client’s sequence number. When the client receives this
reply, it sends an acknowledgement back to the server. In this way, the connection
is established between the client and the server.
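The sequence-number bookkeeping of the three-way handshake can be sketched as follows (illustrative values only; real initial sequence numbers are chosen randomly by each side).

# Illustrative three-way handshake bookkeeping (values are made up).
client_isn = 4000                       # client's initial sequence number
server_isn = 9000                       # server's initial sequence number

# 1. SYN:      client -> server, seq = client_isn
syn = {"flags": "SYN", "seq": client_isn}

# 2. SYN+ACK:  server -> client, seq = server_isn, ack = client_isn + 1
syn_ack = {"flags": "SYN+ACK", "seq": server_isn, "ack": syn["seq"] + 1}

# 3. ACK:      client -> server, seq = client_isn + 1, ack = server_isn + 1
ack = {"flags": "ACK", "seq": syn_ack["ack"], "ack": syn_ack["seq"] + 1}

for segment in (syn, syn_ack, ack):
    print(segment)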
Advantages of TCP

o It provides a connection-oriented, reliable service, which means that it
guarantees the delivery of data packets. If a data packet is lost across the
network, TCP will resend the lost packets.
o It provides a flow control mechanism using the sliding window protocol.
o It provides error detection by using a checksum and error control by using
Go-Back-N ARQ (Automatic Repeat reQuest).
o It controls congestion by using network congestion-avoidance algorithms that
include schemes such as additive increase/multiplicative decrease (AIMD), slow
start, and the congestion window.

Disadvantage of TCP

It adds a large amount of overhead, since each segment carries its own TCP header,
and fragmentation by routers increases this overhead further.
Differences b/w TCP & UDP

Basis for Comparison | TCP | UDP

Definition | TCP establishes a virtual circuit before transmitting the data. | UDP transmits the data directly to the destination computer without verifying whether the receiver is ready to receive or not.

Connection Type | Connection-oriented protocol | Connectionless protocol

Speed | Slow | High

Reliability | Reliable protocol | Unreliable protocol

Header size | 20 bytes | 8 bytes

Acknowledgement | Waits for the acknowledgement of data and can resend lost packets. | Neither takes an acknowledgement nor retransmits a lost or damaged datagram.

User Datagram Protocol

o Connectionless

The UDP is a connectionless protocol as it does not create a virtual path to transfer
the data. Because there is no fixed path, packets may travel along different paths
between the sender and the receiver, which can lead to packets being lost or
received out of order.

Ordered delivery of data is not guaranteed.

In the case of UDP, there is no guarantee that datagrams sent in some order will be
received in the same order, as the datagrams are not numbered.

o Ports

The UDP protocol uses port numbers so that the data can be sent to the correct
destination. Port numbers range from 0 to 65535; the well-known ports are defined
between 0 and 1023.

o Faster transmission

UDP enables faster transmission as it is a connectionless protocol, i.e., no virtual
path is required to transfer the data. But there is a chance that an individual
packet is lost, which affects the transmission quality. On the other hand, if a
packet is lost in a TCP connection, that packet will be resent, so TCP guarantees
the delivery of the data packets.

o Acknowledgment mechanism

UDP does not have any acknowledgment mechanism, i.e., there is no handshaking
between the UDP sender and the UDP receiver. If the message is sent with TCP, the
receiver first acknowledges that it is ready, and only then does the sender send
the data. In the case of TCP, handshaking occurs between the sender and the
receiver, whereas in UDP there is no handshaking between the sender and the receiver.

o Segments are handled independently.

Each UDP segment is handled independently of the others, as each segment may take a
different path to reach the destination. UDP segments can be lost or delivered out
of order, since there is no connection setup between the sender and the receiver.

o Stateless
It is a stateless protocol, which means that the sender does not get an
acknowledgement for the packets it has sent.
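A minimal sender-side sketch of this connectionless, no-handshake behaviour follows (illustrative only; the address 127.0.0.1 and port 5000 are assumed values matching the earlier receiver sketch).

import socket

# No connection setup, no handshake, no acknowledgement: the datagram is
# simply handed to the network, addressed to (IP, port).
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"best-effort datagram", ("127.0.0.1", 5000))
sender.close()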

Why do we require the UDP protocol?

As we know, UDP is an unreliable protocol, but we still require the UDP protocol in
some cases. UDP is deployed where acknowledging every packet would consume a large
amount of bandwidth in addition to the actual data. For example, in video streaming,
acknowledging thousands of packets is troublesome and wastes a lot of bandwidth. In
video streaming, the loss of some packets does not create a problem and can simply
be ignored.

UDP format:

The user datagram has an 8-byte header, which is described below:

The UDP header contains four fields:

o Source port number: It is a 16-bit field that identifies which port is going
to send the packet.
o Destination port number: It identifies which port is going to accept the
information. It is a 16-bit field used to identify the application-level
service on the destination machine.
o Length: It is a 16-bit field that specifies the entire length of the UDP packet,
including the header. The minimum value is 8 bytes, as that is the size of the
header alone.
o Checksum: It is a 16-bit, optional field. The checksum field checks whether the
information is accurate or not, as there is the possibility that the information
was corrupted during transmission. Because it is optional, it depends upon the
application whether it wants to compute the checksum or not; if it does not, all
16 bits are set to zero, otherwise the computed checksum is written. In UDP, the
checksum is computed over the entire packet, i.e., the header as well as the data
part, whereas in IP the checksum covers only the header.
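For illustration (not from the original notes), the 8-byte UDP header can be built and unpacked with Python's struct module; the payload and port numbers below are invented, and a checksum of zero means "not used".

import struct

payload = b"hello"
# 8-byte UDP header: source port, destination port, length, checksum.
length = 8 + len(payload)                             # header plus data
header = struct.pack("!HHHH", 40000, 53, length, 0)   # checksum 0 = not used

src, dst, length, checksum = struct.unpack("!HHHH", header)
print(src, dst, length, checksum)                     # 40000 53 13 0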

Disadvantages of UDP protocol

o UDP provides only the basic functions needed for the end-to-end delivery of a
transmission.
o It does not provide any sequencing or reordering functions and does not
identify which packet was damaged when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which
packet has been lost as it does not contain an ID or sequencing number of a
particular data segment.

Traffic Shaping:

It is a mechanism to control the amount and the rate of the traffic sent to the
network. Two techniques can shape traffic: the leaky bucket and the token bucket.

Leaky bucket algorithm:

 Each host is connected to the network by an interface containing a leaky bucket,
which is a finite internal queue.

 If a packet arrives at the queue when it is full, the packet is discarded.


 In fact, it is nothing other than a single server queuing system with constant
service time.
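A small, purely illustrative leaky-bucket sketch in Python follows; the queue capacity, drain rate, and arrival pattern are assumed values.

from collections import deque

CAPACITY = 5            # finite internal queue size (assumed)
DRAIN_PER_TICK = 1      # constant service rate: packets sent per tick (assumed)

bucket = deque()
arrivals = [3, 0, 4, 2, 0, 1]             # packets arriving in each tick (made up)

for tick, count in enumerate(arrivals):
    for _ in range(count):
        if len(bucket) < CAPACITY:
            bucket.append("pkt")          # queue the packet
        # else: bucket full, the arriving packet is discarded
    sent = min(DRAIN_PER_TICK, len(bucket))
    for _ in range(sent):
        bucket.popleft()                  # constant-rate output to the network
    print(f"tick {tick}: sent {sent}, queued {len(bucket)}")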

Token bucket algorithm:

 The token bucket holds tokens, generated by a clock at the rate of one token
every T seconds.

 For a packet to be transmitted, it must capture and destroy one token.

 The token bucket algorithm allows idle hosts to save up permission, up to the
maximum bucket size n, to send burst traffic later.
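A matching token-bucket sketch (the token rate, bucket size n, and arrival pattern are assumed purely for illustration): tokens saved during idle ticks allow a burst to be sent later.

# Illustrative token bucket: one token is generated per tick, up to a
# maximum of n tokens; each transmitted packet consumes one token.
n = 4                                   # maximum bucket size (assumed)
tokens = 0
packets_waiting = [0, 0, 0, 6, 0, 0]    # packets arriving per tick (made up)
backlog = 0

for tick, arriving in enumerate(packets_waiting):
    tokens = min(n, tokens + 1)         # clock adds one token per tick
    backlog += arriving
    sent = min(backlog, tokens)         # saved tokens allow a burst here
    tokens -= sent
    backlog -= sent
    print(f"tick {tick}: sent {sent}, tokens left {tokens}, backlog {backlog}")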
Queuing Techniques for Scheduling:
QoS traffic scheduling is a methodology for scheduling network traffic based upon
QoS (Quality of Service).

Packets from different flows arrive at a switch or router for processing. A good
scheduling technique treats the different flows in a fair and appropriate manner.
Several scheduling techniques are designed to improve the quality of service. The
major scheduling techniques are FIFO Queuing, Priority Queuing and Weighted Fair
Queuing.

FIFO Queuing:

In first-in first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them. If the average arrival rate is higher
than the average processing rate, the queue will fill up and new packets will be
discarded. A FIFO queue is familiar to anyone who has had to wait for a bus at a
bus stop.
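A toy drop-tail FIFO queue, with an assumed buffer size and arrival pattern, just to illustrate the behaviour described above (packets arrive faster than the node processes them, so the tail of the queue overflows).

from collections import deque

BUFFER = 3                                  # queue capacity (assumed)
queue = deque()
dropped = 0

for packet in range(10):                    # arrivals faster than processing
    if len(queue) < BUFFER:
        queue.append(packet)
    else:
        dropped += 1                        # queue full: new packet discarded
    if packet % 2 == 0 and queue:           # node serves one packet every
        queue.popleft()                     # other arrival (slower than arrivals)

print("still queued:", list(queue), "dropped:", dropped)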
Introduction to Ports and Sockets: (Port Addressing/Socket Addressing)
The IP address and the physical address are necessary for data to travel from a
source host to the destination host. However, arrival at the destination host is not
the final objective of data communications on the Internet. A system that merely
moves data from one computer to another is not complete.
Today, computers are devices that can run multiple processes at the same time.
The end objective of Internet communication is a process communicating with
another process.
For example, computer A can communicate with computer C by using TELNET. At
the same time, computer A communicates with computer B by using the File
Transfer Protocol (FTP). For these processes to receive data simultaneously, we
need a method to label the different processes.
In other words, they need addresses. In the TCP/IP architecture, the label assigned
to a process is called a port address. A port address in TCP/IP is 16 bits in length.
Source and destination addresses are found in the IP packet, belonging to the
network layer. A transport layer datagram or segment that uses port numbers is
wrapped into an IP packet and transported by it.
The network layer uses the IP packet information to transport the packet across
the network (routing). Arriving at the destination host, the host's IP stack uses the
transport layer information (port number) to pass the information to the
application.

Port Number Assignment

21 - File Transfer Protocol (FTP)

23 - TELNET (remote login)

25 - Simple Mail Transfer Protocol (SMTP)

53 - Domain Name System (DNS)

80 - Hypertext Transfer Protocol (HTTP)

The IP address and the port number combined are called a socket address, which
identifies the host along with the networking application running on that host.
A socket is one endpoint of a two-way communication link between two programs
running on the network. A socket is bound to a port number so that the TCP layer
can identify the application to which the data is destined to be sent. An endpoint
is a combination of an IP address and a port number.

Socket Programming:

A typical network application consists of a pair of programs, a client program and
a server program, residing in two different end systems. When these two programs
are executed, a client process and a server process are created, and these processes
communicate with each other by reading from, and writing to, sockets. When creating
a network application, the developer’s main task is therefore to write the code for
both the client and server programs; this is called socket programming.
There are two types of network applications. One type is an implementation whose
operation is specified in a protocol standard, such as an RFC or some other
standards document; such an application is sometimes referred to as “open,” since
the rules specifying its operation are known to all. For such an implementation, the
client and server programs must conform to the rules dictated by the RFC.

The other type of network application is a proprietary network application. In this
case the client and server programs employ an application-layer protocol that has
not been openly published in an RFC or elsewhere. A single developer (or
development team) creates both the client and server programs, and the developer
has complete control over what goes in the code. But because the code does not
implement an open protocol, other independent developers will not be able to
develop code that interoperates with the application.

During the development phase, one of the first decisions the developer must make
is whether the application is to run over TCP or over UDP.
When a web page is opened, a socket program is automatically initialized to send
and receive data for that process. The socket program at the source communicates
with the socket program at the destination machine using the associated source and
destination port numbers. When the web page is closed, the socket programs are
automatically terminated.
The following sequence of events occurs in the client-server application, using
socket programming, for both TCP and UDP:
1. The client reads a line of characters (data) from its keyboard and sends the
data to the server.
2. The server receives the data and converts the characters to uppercase.
3. The server sends the modified data to the client.
4. The client receives the modified data and displays the line on its screen.
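A minimal Python sketch of this uppercase application over TCP follows (illustrative only; the host 127.0.0.1 and port 12000 are assumed, and the client and server would normally run as two separate programs).

import socket

HOST, PORT = "127.0.0.1", 12000    # assumed address for the example

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)                          # wait for a connection request
        conn, addr = srv.accept()              # three-way handshake completes here
        with conn:
            data = conn.recv(1024)             # 1. receive the line from the client
            conn.sendall(data.upper())         # 2-3. convert to uppercase and reply

def client(line: str):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))              # establish the TCP connection
        cli.sendall(line.encode())             # send the typed line to the server
        print(cli.recv(1024).decode())         # 4. display the modified line

Running server() in one process and client("hello") in another exercises the four steps above; a UDP version would replace SOCK_STREAM with SOCK_DGRAM and use sendto/recvfrom instead of connect/accept, with no handshake.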
The BSD Socket API is the well-known socket programming API that defines a set of
standard function calls made available at the application level.
BSD: Berkeley Software Distribution; API: Application Programming Interface.
These functions allow programmers to include Internet communication capabilities in
their products. BSD sockets generally rely upon a client/server architecture. For
TCP communications, one host listens for incoming connection requests. When a
request arrives, the server host accepts it, at which point data can be transferred
between the hosts. UDP is also allowed to establish a connection, though this is not
required; data can simply be sent to or received from a host. The Sockets API makes
use of two mechanisms to deliver data to the application level: ports and sockets.
