Transport Layer


FUNCTIONS OF TRANSPORT LAYER:

Service Point Addressing: The transport layer header includes a service-point
address, i.e. a port address. This layer delivers the message to the correct
process on the computer, unlike the network layer, which delivers each packet
to the correct computer.
Segmentation and Reassembly: A message is divided into segments; each
segment carries a sequence number, which enables this layer to reassemble
the message correctly upon arrival at the destination and to replace
packets lost in transmission.
Connection Control: There are two types:
Connectionless Transport Layer : Each segment is considered as an independent
packet and delivered to the transport layer at the destination machine.
Connection Oriented Transport Layer : Before delivering packets, connection is
made with transport layer at the destination machine.
Flow Control : In this layer, flow control is performed end to end.
Error Control : Error Control is performed end to end in this layer to
ensure that the complete message arrives at the receiving transport layer
without any error. Error Correction is done through retransmission.
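The segmentation-and-reassembly idea above can be sketched in a few lines of Python (a toy illustration, not a real protocol; the segment size and function names are invented for the example):

```python
def segment(message: bytes, size: int):
    """Divide a message into (sequence_number, chunk) pairs."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(segments):
    """Sort by sequence number and concatenate the chunks."""
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"transport layer demo"
segs = segment(msg, 4)
segs.reverse()                  # simulate out-of-order arrival
print(reassemble(segs) == msg)  # True
```

The sequence numbers are what let the receiver restore the original order even when the network delivers the segments out of order.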
• THE TRANSPORT SERVICE:
Services provided to the upper layers:
The ultimate goal of the transport layer is to
provide efficient, reliable, and cost-effective data
transmission service to its users, normally
processes in the application layer. To achieve this,
the transport layer makes use of software and/or
hardware called the transport entity.
The transport entity can be located in the
operating system kernel or in a library package
bound into network applications.
The transport layer offers two services to the upper layers:
connection-oriented transport service and
connectionless transport service.
The connection-oriented transport service is similar to the
connection-oriented network service in many ways.
In both cases, connections have three phases: establishment, data
transfer, and release. Addressing and flow control are also
similar in both layers
The connectionless transport service is also very similar to the
connectionless network service. However, note that it can be
difficult to provide a connectionless transport service on top of a
connection-oriented network service, since it is inefficient to set
up a connection to send a single packet and then tear it down
immediately afterwards.
• To allow users to access the transport service, the transport layer must provide
some operations to application programs, that is, a transport service interface.
 
• The transport service is similar to the network service, but there are also
some important differences.
• The main difference is that the network service is intended to model the
service offered by real networks. Real networks can lose packets, so the
network service is generally unreliable.
•  The connection-oriented transport service, in contrast, is reliable. Of course,
real networks are not error-free, but that is precisely the purpose of the
transport layer—to provide a reliable service on top of an unreliable network.
• A second difference between the network service and transport service is
whom the services are intended for. The network service is used only by the
transport entities. Few users write their own transport entities, and thus few
users or programs ever see the bare network service.
 
• Berkeley Sockets: these socket primitives are
used for TCP.
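As a concrete illustration, the classic Berkeley primitives map directly onto Python's socket module (a minimal local echo sketch; the echo behavior and the use of an OS-assigned port are choices made for the example, not part of any standard):

```python
import socket
import threading

# SOCKET, BIND, LISTEN on the server side; port 0 lets the OS pick a free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def server():
    conn, _ = srv.accept()         # ACCEPT blocks until a client connects
    conn.sendall(conn.recv(1024))  # RECEIVE, then SEND the data back
    conn.close()                   # CLOSE

t = threading.Thread(target=server)
t.start()

# CONNECT, SEND, RECEIVE, CLOSE on the client side.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello")
echoed = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(echoed)   # b'hello'
```

Binding to port 0 before starting the server thread avoids both a hard-coded port and the race between LISTEN and CONNECT.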
ELEMENTS OF TRANSPORT PROTOCOLS:
There are six elements:
1. Addressing
2. Connection establishment
3. Connection release
4. Flow control and buffering
5. Multiplexing
6. Crash recovery

Addressing: When an application (e.g., a user) process wishes to set up a connection to a
remote application process, it must specify the address of the remote process.

• In the Internet, these endpoints are called ports. We will use the generic term TSAP
(Transport Service Access Point) to mean a specific endpoint in the transport layer. The
analogous endpoints in the network layer (i.e., network layer addresses) are naturally
called NSAPs (Network Service Access Points). IP addresses are examples of NSAPs.
Figure 4.4: TSAPs, NSAPs, and Transport connections
• A possible scenario for a transport connection is as follows:
 
 A mail server process attaches itself to TSAP 1522 on host 2 to wait for an
incoming call. A call such as our LISTEN might be used, for example.

 An application process on host 1 wants to send an email message, so it
attaches itself to TSAP 1208 and issues a CONNECT request.
 
The request specifies TSAP 1208 on host 1 as the source and TSAP 1522 on host 2 as
the destination. This action ultimately results in a transport connection being
established between the application process and the server.
 
 The application process sends over the mail message.
 
 The mail server responds to say that it will deliver the message.
 
 The transport connection is released.
Connection Establishment:
• Establishing a connection sounds easy, but it is actually surprisingly
tricky. At first glance, it would seem sufficient for one transport entity to
just send a CONNECTION REQUEST segment to the destination and wait
for a CONNECTION ACCEPTED reply. The problem occurs when the
network can lose, delay, corrupt, and duplicate packets. This behavior
causes serious complications.
 
• Imagine a network that is so congested that acknowledgements hardly
ever get back in time and each packet times out and is retransmitted
two or three times. Suppose that the network uses datagrams inside
and that every packet follows a different route.
 
• Some of the packets might get stuck in a traffic jam inside the network
and take a long time to arrive. That is, they may be delayed in the
network and pop out much later, when the sender thought that they
had been lost.
 If the sender does not receive an acknowledgement before the timeout,
it assumes the packet was lost and retransmits it. But the packets from
the first transmission may be delivered at the destination later, as
duplicates.
• The crux of the problem is that
the delayed duplicates are thought to be new
packets. We cannot prevent packets from
being duplicated and delayed. But if and
when this happens, the packets must be
rejected as duplicates and not processed as
fresh packets.
• One way to avoid this problem is to use throwaway
transport addresses. In this approach, each
time a transport address is needed, a new
one is generated. When a connection is
released, the address is discarded and never
used again. Delayed duplicate packets then
never find their way to a transport process
and can do no damage.
Another possibility is to give each connection a
unique identifier (i.e., a sequence number
incremented for each connection established).
After each connection is released, each
transport entity can update a table listing
obsolete connections.
Whenever a connection request comes in, it can
be checked against the table to see if it
belongs to a previously released connection.
Unfortunately, this scheme has a basic flaw: it requires
each transport entity to maintain a certain amount of
history information indefinitely. But if a machine crashes
and loses its memory, it will no longer know which
connection identifiers have already been used by its
peers.
The solution is to devise a mechanism to kill off aged
packets that are still hobbling about in the network.
Packet lifetime can be restricted to a known maximum
using one (or more) of the following techniques:
• Restricted network design.
• Putting a hop counter in each packet.
• Time stamping each packet.
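The hop-counter technique can be sketched as follows (a toy model; the dictionary packet representation and the initial hop count are invented for the example):

```python
def forward(packet):
    """Return the packet as seen by the next hop, or None if its lifetime expired."""
    if packet["hops"] <= 1:
        return None                              # counter exhausted: drop
    return dict(packet, hops=packet["hops"] - 1)  # decrement at each router

pkt = {"data": b"x", "hops": 3}
pkt = forward(pkt)          # first router:  hops 3 -> 2
pkt = forward(pkt)          # second router: hops 2 -> 1
print(forward(pkt))         # None: the third router discards the packet
```

Initializing the counter slightly above the true path length bounds how long a delayed duplicate can survive in the network.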
Connection Release:
• There are two styles of terminating a
connection: asymmetric release and
symmetric release.
• Asymmetric release is the way the telephone
system works: when one party hangs up, the
connection is broken.
• Symmetric release treats the connection as
two separate unidirectional connections and
requires each one to be released separately.
Asymmetric release is unexpected and may
result in data loss.
Symmetric release works as follows:
One can envision a protocol in which host 1 says
‘‘I am done. Are you done too?’’ If host 2
responds ‘‘I am done too. Goodbye,’’ the
connection can be safely released.
Figure 4.7: Four protocol scenarios for releasing a connection. (a) normal case of three-way
handshake. (b) final ACK lost. (c) response lost.
(d) response lost and subsequent DRs lost.
CRASH RECOVERY:
• Hosts and routers are subject to crashes, and
recovery from these crashes becomes an issue.
• If a router crashes, recovery is straightforward.
• But a more troublesome problem is how to
recover from host crashes. In particular, it may
be desirable for clients to be able to continue
working when servers crash and quickly
reboot.
CONGESTION CONTROL:
• Controlling congestion is the combined responsibility
of the network and transport layers.
• Congestion occurs at routers, so it is detected at the network
layer.
• However, congestion is ultimately caused by traffic sent into
the network by the transport layer.
• The only effective way to control congestion is for the transport
protocols to send packets into the network more slowly.
In the transport layer, congestion control is performed using:
 Desirable Bandwidth Allocation:
 Regulating the sending rate:
 Wireless issues:
DESIRABLE BANDWIDTH ALLOCATION:
• The goal is more than to simply avoid
congestion.
• It is to find a good allocation of bandwidth to
the transport entities that are using the
network.
• A good allocation will deliver good
performance because it uses all the available
bandwidth but avoids congestion
Efficiency and Power:
• An efficient allocation of bandwidth across
transport entities will use all of the network
capacity that is available.
• However, it is not quite right to think that if
there is a 100-Mbps link, five transport
entities should get 20 Mbps each. They should
usually get less than 20 Mbps for good
performance.
Max-Min Fairness:
It addresses how to divide bandwidth between different
transport senders.
One approach is to give all the senders an equal fraction of the
bandwidth.
Another approach considers network usage: for example, one flow
may cross three links while the other flows cross only one link
each. The three-link flow consumes more network resources, so it
might be fairer in some sense to give it less bandwidth than the
one-link flows.
An allocation is max-min fair if the bandwidth of one flow cannot
be increased without decreasing the bandwidth of a flow with an
equal or smaller allocation.
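One standard way to compute a max-min fair allocation is progressive filling: raise all flow rates together and freeze the flows crossing each link as it saturates. A sketch in Python (the link and flow names are invented for the example):

```python
def max_min_fair(capacity, flows):
    """Progressive filling. capacity: {link: Mbps}; flows: {flow: set of links}."""
    rate = {f: 0.0 for f in flows}
    frozen = set()
    while len(frozen) < len(flows):
        active = [f for f in flows if f not in frozen]
        # Smallest per-flow increment any link crossed by an active flow allows.
        inc = min(
            (capacity[l] - sum(rate[f] for f in flows if l in flows[f]))
            / len([f for f in active if l in flows[f]])
            for l in capacity if any(l in flows[f] for f in active)
        )
        for f in active:
            rate[f] += inc
        # Freeze every flow crossing a link that is now saturated.
        for l in capacity:
            if sum(rate[f] for f in flows if l in flows[f]) >= capacity[l] - 1e-9:
                frozen.update(f for f in flows if l in flows[f])
    return rate

# FA crosses both links; the spare capacity on L2 goes to FC.
alloc = max_min_fair({"L1": 1.0, "L2": 2.0},
                     {"FA": {"L1", "L2"}, "FB": {"L1"}, "FC": {"L2"}})
print(alloc)   # {'FA': 0.5, 'FB': 0.5, 'FC': 1.5}
```

Note that FC ends up with more bandwidth than FA and FB: raising FA or FB further would require taking bandwidth from a flow with an equal allocation, which is exactly the max-min condition.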
Convergence
A final criterion is that the congestion control
algorithm converge quickly to a fair and efficient
allocation of bandwidth.
A good congestion control algorithm should rapidly
converge to the ideal operating point, and it
should track that point as it changes over time.
If the convergence is too slow, the algorithm will
never be close to the changing operating point.
If the algorithm is not stable, it may fail to
converge to the right point.
Regulating the sending rate:
• The sending rate may be limited by two
factors.
The first is flow control, in the case that there is
insufficient buffering at the receiver.
The second is congestion, in the case that there
is insufficient capacity in the network.
(a) A fast network feeding a low-capacity receiver.
(b) A slow network feeding a high-capacity receiver.
Wireless issues:
Wireless networks lose packets all the time due
to transmission errors. To function well, the
only packet losses that the congestion control
algorithm should observe are losses due to
insufficient bandwidth, not losses due to
transmission errors. One solution to this
problem is to mask the wireless losses by
using retransmissions over the wireless link
THE INTERNET TRANSPORT PROTOCOLS:
UDP:
• UDP is a connectionless transport protocol.
• UDP transmits segments consisting of an 8-
byte header followed by the payload.
• The two ports serve to identify the endpoints
within the source and destination machines.
• When a UDP packet arrives, its payload is
handed to the process attached to the
destination port.
The UDP header:
The source port is primarily needed when a
reply must be sent back to the source.
The destination port identifies the
destination process.
The UDP length field includes the length of the
header and the data.
An optional checksum is also provided for extra
reliability. It checksums the header, the data,
and a conceptual IP pseudoheader.
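Since the UDP header is just four 16-bit big-endian fields, it can be unpacked in a couple of lines (a sketch; the port numbers in the example are arbitrary):

```python
import struct

def parse_udp_header(segment: bytes):
    """Unpack source port, destination port, length, and checksum."""
    src, dst, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src, "dst_port": dst,
            "length": length, "checksum": checksum}

# 8-byte header + 5-byte payload; the length field covers both (13 bytes).
seg = struct.pack("!HHHH", 5353, 53, 13, 0) + b"hello"
print(parse_udp_header(seg))
# {'src_port': 5353, 'dst_port': 53, 'length': 13, 'checksum': 0}
```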
Remote procedure call:
• In the simplest form, to call a remote procedure, the client program
must be bound with a small library procedure, called the client stub,
that represents the server procedure in the client’s address space.
• Similarly, the server is bound with a procedure called the server
stub.
 Step 1 is the client calling the client stub. This call is a local
procedure call, with the parameters pushed onto the stack in the
normal way.
 Step 2 is the client stub packing the parameters into a message and
making a system call to send the message. Packing the parameters is
called marshaling.  
 Step 3 is the operating system sending the message from the client
machine to the server machine
• Step 4 is the operating system passing the incoming
packet to the server stub.
• Finally, step 5 is the server stub calling the server
procedure with the unmarshaled parameters.
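The five steps can be mimicked in miniature, with a local function call standing in for the network (all names here, `add`, `client_stub`, `server_stub`, are invented for the example, and JSON stands in for a real marshaling format):

```python
import json

def add(a, b):                      # the "remote" procedure on the server
    return a + b

def server_stub(message: bytes):    # steps 4-5: unmarshal, call the procedure
    params = json.loads(message)
    return json.dumps(add(*params)).encode()

def client_stub(a, b):              # steps 1-3: marshal and "transmit"
    message = json.dumps([a, b]).encode()   # marshaling the parameters
    reply = server_stub(message)            # stands in for the OS sending it
    return json.loads(reply)

print(client_stub(2, 3))   # 5
```

The caller of `client_stub` never sees the marshaling or the message exchange, which is the whole point of RPC: a remote call looks like a local one.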
Real-Time Transport Protocols
Real-time applications like Internet radio, Internet
telephony, music-on-demand, video conferencing,
video-on-demand, and other multimedia
applications need the Real-Time Transport Protocol (RTP).
Because RTP just uses normal UDP, there are no special
guarantees about delivery; packets may be lost,
delayed, corrupted, etc.
The RTP format contains several features to help
receivers work with multimedia information.
The RTP header is illustrated in Fig. 4.13. It consists of
three 32-bit words and potentially some extensions.
• The first word contains the Version field; the current version is 2.
 
• The P bit indicates that the packet has been padded to a
multiple of 4 bytes.
 
• The X bit indicates that an extension header is present.
 
• The CC field tells how many contributing sources are present,
from 0 to 15.
 
• The M bit is an application-specific marker bit. It can be used to
mark the start of a video frame, the start of a word in an audio
channel, or something else that the application understands.
 
• The Payload type field tells which encoding algorithm has been used
(e.g., uncompressed 8-bit audio, MP3, etc.).
 
• The Sequence number is just a counter that is incremented on each
RTP packet sent. It is used to detect lost packets.
 
• The Timestamp is produced by the stream’s source to note when the
first sample in the packet was made.
 
• The Synchronization source identifier tells which stream the packet
belongs to. It is the method used to multiplex and demultiplex
multiple data streams onto a single stream of UDP packets.
 
• Finally, the Contributing source identifiers, if any, are used when
mixers are present.
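The fixed RTP header layout described above can be unpacked with straightforward bit operations (a sketch; the sample field values, e.g. payload type 96, are arbitrary):

```python
import struct

def parse_rtp(header: bytes):
    """Unpack the three 32-bit words of the fixed RTP header."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", header[:12])
    return {
        "version":      b0 >> 6,
        "padding":      (b0 >> 5) & 1,   # P bit
        "extension":    (b0 >> 4) & 1,   # X bit
        "cc":           b0 & 0x0F,       # contributing source count
        "marker":       b1 >> 7,         # M bit
        "payload_type": b1 & 0x7F,
        "sequence": seq, "timestamp": ts, "ssrc": ssrc,
    }

# Version 2, no padding/extension/CSRCs, payload type 96, sequence number 1.
hdr = struct.pack("!BBHII", 0x80, 96, 1, 0, 0xDEADBEEF)
info = parse_rtp(hdr)
print(info["version"], info["payload_type"])   # 2 96
```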
TCP(Transmission Control Protocol):
TCP is used by most applications that require reliable, sequenced delivery.
Features
• TCP is a reliable protocol. The receiver always sends either a positive or a
negative acknowledgement for each data packet, so the sender always knows
whether the packet reached the destination or needs to be resent.
• TCP ensures that the data reaches the intended destination in the same order
it was sent.
• TCP is connection oriented. TCP requires that connection between two remote
points be established before sending actual data.
• TCP provides error-checking and recovery mechanism.
• TCP provides end-to-end communication.
• TCP provides flow control and quality of service.
• TCP operates in Client/Server point-to-point mode.
• TCP provides full-duplex service, i.e. each endpoint can act as both sender
and receiver.
TCP Header:
• The TCP header is a minimum of 20 bytes and a
maximum of 60 bytes long.
• Source Port (16-bits)  - It identifies source port of the
application process on the sending device.
• Destination Port (16-bits) - It identifies destination port
of the application process on the receiving device.
• Sequence Number (32-bits) - Sequence number of data
bytes of a segment in a session.
• Acknowledgement Number (32-bits)  - When ACK flag is
set, this number contains the next sequence number of
the data byte expected and works as acknowledgement
of the previous data received.
• The TCP header length tells how many 32-bit words are
contained in the TCP header. This information is needed
because the Options field is of variable length.
Flags (1-bit each)
CWR and ECE are used to signal congestion when ECN
(Explicit Congestion Notification) is used,
CWR (Congestion Window Reduced) - When a host receives a
packet with the ECE bit set, it sets CWR to acknowledge
that the ECE was received.
ECE - It has two meanings:
If the SYN bit is 0, ECE means that the IP packet has its CE
(congestion experienced) bit set.
If the SYN bit is 1, ECE means that the device is ECN capable.
URG - URG is set to 1 if the Urgent pointer is in use. The
Urgent pointer is used to
indicate a byte offset from the current sequence number
at which urgent data are to be found
ACK - The ACK bit is set to 1 to indicate that the
Acknowledgement number is valid. If ACK is
cleared to 0, the segment does not contain
an acknowledgement.
PSH - When set, it is a request to the receiving
station to PUSH data (as soon as it comes) to
the receiving application without buffering it.
RST - Reset flag has the following features:
It is used to refuse an incoming connection.
It is used to reject a segment.
It is used to restart a connection.
SYN bit is used to establish connections.
The connection request has SYN = 1 and ACK = 0 to indicate that the piggyback
acknowledgement field is not in use.
The connection reply does bear an acknowledgement, however, so it has
SYN = 1 and ACK = 1. In essence, the SYN bit is used to denote both
CONNECTION REQUEST and CONNECTION ACCEPTED, with the ACK bit used
to distinguish between those two possibilities
FIN bit is used to release a connection. It specifies that the sender has no
more data to transmit
Window Size - This field is used for flow control between two stations and
indicates the amount of buffer (in bytes) the receiver has allocated for
incoming data, i.e. how much data the receiver is willing to accept.
Checksum - This field contains the checksum of Header, Data and Pseudo
Headers.
Urgent Pointer  - It points to the urgent data byte if URG flag is set to 1.
Options  - It facilitates additional options which are not covered by the regular
header.
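The fixed 20-byte portion of the TCP header can be unpacked as follows (a sketch based on the field layout above; the sample port and sequence numbers are arbitrary):

```python
import struct

def parse_tcp(header: bytes):
    """Unpack the fixed 20-byte TCP header."""
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", header[:20])
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_len": (off_flags >> 12) * 4,   # data offset in 32-bit words
        "flags": {name: bool(off_flags & bit) for name, bit in
                  [("FIN", 0x001), ("SYN", 0x002), ("RST", 0x004),
                   ("PSH", 0x008), ("ACK", 0x010), ("URG", 0x020),
                   ("ECE", 0x040), ("CWR", 0x080)]},
        "window": window, "checksum": checksum, "urgent": urgent,
    }

# A SYN segment: data offset 5 (a 20-byte header), SYN flag set.
hdr = struct.pack("!HHIIHHHH", 49152, 80, 1000, 0, (5 << 12) | 0x002,
                  65535, 0, 0)
info = parse_tcp(hdr)
print(info["header_len"], info["flags"]["SYN"])   # 20 True
```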
TCP Addressing:
TCP communication between two remote hosts
is done by means of port numbers (TSAPs).
Port numbers range from 0 to 65535
and are divided as:
• System Ports (0 – 1023)
• User Ports ( 1024 – 49151)
• Private/Dynamic Ports (49152 – 65535)
TCP Connection Establishment:
• Connections are established in TCP by means of the three-way handshake.
To establish a connection, one side, say, the server, passively waits for an
incoming connection by executing the LISTEN and ACCEPT primitives in that
order, either specifying a specific source or nobody in particular.
 
• The other side, say, the client, executes a CONNECT primitive, specifying
the IP address and port to which it wants to connect, the maximum TCP
segment size it is willing to accept, and optionally some user data (e.g., a
password). The CONNECT primitive sends a TCP segment with the SYN bit
on and ACK bit off and waits for a response.
 
• When this segment arrives at the destination, the TCP entity there checks
to see if there is a process that has done a LISTEN on the port given in the
Destination port field. If not, it sends a reply with the RST bit on to reject
the connection.
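The three segments of the handshake can be walked through as a toy sequence (pure simulation, no real networking; the initial sequence numbers are arbitrary):

```python
def handshake(client_isn, server_isn):
    """Return the three handshake segments as (flags, seq, ack) tuples."""
    return [
        ("SYN",     client_isn,     None),            # client -> server
        ("SYN+ACK", server_isn,     client_isn + 1),  # server -> client
        ("ACK",     client_isn + 1, server_isn + 1),  # client -> server
    ]

for seg in handshake(100, 300):
    print(seg)
# ('SYN', 100, None)
# ('SYN+ACK', 300, 101)
# ('ACK', 101, 301)
```

Each acknowledgement number is the peer's initial sequence number plus one, which is what lets both sides confirm the other received its chosen sequence number.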
 
TCP Connection Release
• Although TCP connections are full duplex, to understand how
connections are released it is best to think of them as a pair
of simplex connections. Each simplex connection is released
independently of its sibling.
 
• To release a connection, either party can send a TCP
segment with the FIN bit set, which means that it has no
more data to transmit. When the FIN is acknowledged, that
direction is shut down for new data.
 
• Data may continue to flow indefinitely in the other direction,
however. When both directions have been shut down, the
connection is released. 
TCP Congestion Control:
• The network layer detects congestion when queues
grow large at routers and tries to manage it, if only
by dropping packets. It is up to the transport layer to
receive congestion feedback from the network layer
and slow down the rate of traffic that it is sending
into the network.
 
• In the Internet, TCP plays the main role in controlling
congestion, as well as the main role in reliable
transport. That is why it is such a special protocol.
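TCP's congestion response follows the additive-increase/multiplicative-decrease (AIMD) rule: grow the congestion window steadily, halve it on loss. A toy trace (the round count and the loss position are invented for the example):

```python
def aimd(rounds, loss_rounds, cwnd=1):
    """Trace the congestion window, in segments, across round trips."""
    trace = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on loss
        else:
            cwnd += 1                  # additive increase per round trip
        trace.append(cwnd)
    return trace

print(aimd(8, {4}))   # [2, 3, 4, 5, 2, 3, 4, 5]
```

The resulting sawtooth, climbing slowly and halving on each loss, is the characteristic shape of TCP's sending rate over time.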
 
PERFORMANCE PROBLEMS IN COMPUTER NETWORKS
Some performance problems, such as congestion, are caused by
temporary resource overloads. If more traffic suddenly arrives at a
router than the router can handle, congestion will build up and
performance will suffer.
Performance also degrades when there is a structural resource
imbalance. For example, if a gigabit communication line is attached to a
low-end PC, the poor host will not be able to process the incoming
packets fast enough and some will be lost. These packets will eventually
be retransmitted, adding delay, wasting bandwidth, and generally
reducing performance.
Overloads can also be synchronously triggered. As an example, if a
segment contains a bad parameter, in many cases the receiver will
thoughtfully send back an error notification.
 Another tuning issue is setting timeouts. If the timeout is set too short,
unnecessary retransmissions will occur. If the timeout is set too long,
unnecessary delays will occur after a segment is lost.
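TCP therefore sets the timeout adaptively from measured round-trip times; the estimator below follows the style of RFC 6298 (SRTT/RTTVAR smoothing with RTO = SRTT + 4 × RTTVAR; the sample values and function name are invented):

```python
def rto_estimator(samples, alpha=0.125, beta=0.25):
    """Return the retransmission timeout after processing RTT samples (ms)."""
    srtt, rttvar = None, None
    for rtt in samples:
        if srtt is None:
            srtt, rttvar = rtt, rtt / 2          # first measurement
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt)
            srtt = (1 - alpha) * srtt + alpha * rtt
    return srtt + 4 * rttvar

print(rto_estimator([100, 100, 100]))   # 212.5
```

With steady 100 ms samples the variance term decays, so the timeout shrinks toward the smoothed RTT; a jittery path keeps the variance, and hence the timeout, high.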
NETWORK PERFORMANCE MEASUREMENT:
1) Make Sure That the Sample Size Is Large Enough
  Do not measure the time to send one segment, but repeat the
measurement, say, one million times and take the average.
 
2) Make Sure That the Samples Are Representative
  Ideally, the whole sequence of one million measurements should
be repeated at different times of the day and the week to see the
effect of different network conditions on the measured quantity.
  3) Caching Can Wreak Havoc with Measurements
  Repeating a measurement many times will return an
unexpectedly fast answer if the protocols use caching
mechanisms.
 
4) Be Sure That Nothing Unexpected Is Going On during Your
Tests
Making measurements at the same time that some user has
decided to run a video conference over your network will
often give different results than if there is no video
conference.

5) Be Careful When Using a Coarse-Grained Clock


  Computer clocks function by incrementing some counter at
regular intervals.
 
6) Be Careful about Extrapolating the Results
Suppose that you make measurements with simulated
network loads running from 0 (idle) to 0.4 (40% of capacity).
