Computer Network Security (18CS52) : Module 2 (Syllabus)
Transport Layer : Introduction and Transport-Layer Services: Relationship Between Transport and
Network Layers, Overview of the Transport Layer in the Internet, Multiplexing and Demultiplexing:
Connectionless Transport: UDP,UDP Segment Structure, UDP Checksum, Principles of Reliable Data
Transfer: Building a Reliable Data Transfer Protocol, Pipelined Reliable Data Transfer Protocols, Go-
Back-N, Selective repeat, Connection-Oriented Transport TCP: The TCP Connection, TCP Segment
Structure, Round- Trip Time Estimation and Timeout, Reliable Data Transfer, Flow Control, TCP
Connection Management, Principles of Congestion Control: The Causes and the Costs of Congestion,
Approaches to Congestion Control, Network-assisted congestion-control example, ATM ABR Congestion
control, TCP Congestion Control: Fairness.
On the sending side, the transport layer converts the application-layer messages it receives from a
sending application process into transport-layer packets, known as transport-layer segments.
This is done by (possibly) breaking the application messages into smaller chunks and adding a
transport-layer header to each chunk to create the transport-layer segment.
The transport layer then passes the segment to the network layer at the sending end system, where
the segment is encapsulated within a network-layer packet (a datagram) and sent to the destination.
On the receiving side, the network layer extracts the transport-layer segment from the datagram
and passes the segment up to the transport layer.
The transport layer then processes the received segment, making the data in the segment available
to the receiving application.
The Internet has two transport-layer protocols, TCP and UDP. Each of these protocols provides a different set of transport-layer services to the invoking application.
Relationship between Transport and Network Layers
• A transport-layer protocol provides logical communication b/w processes running on different hosts. Services offered include:
1) Reliable data transfer i.e. guarantees data will arrive at the destination-process correctly.
2) Error checking.
The Internet's network-layer protocol is the Internet Protocol (IP). IP provides logical communication between hosts.
The IP service model is a best-effort delivery service. This means that IP makes its “best effort” to
deliver segments between communicating hosts, but it makes no guarantees. In particular, it does
not guarantee segment delivery, it does not guarantee orderly delivery of segments, and it does not
guarantee the integrity of the data in the segments.
The most fundamental responsibility of UDP and TCP is to extend IP’s delivery service between
two end systems to a delivery service between two processes running on the end systems.
Extending host-to-host delivery to process-to-process delivery is called transport- layer
multiplexing and demultiplexing.
UDP and TCP also provide integrity checking by including error detection fields in their segments’
headers.
UDP is an unreliable service: it does not guarantee that data sent by one process will arrive intact at the destination process.
TCP, on the other hand, offers several additional services to applications. First and foremost, it provides reliable data transfer. Using flow control, sequence numbers, acknowledgments, and timers, TCP ensures that data is delivered from sending process to receiving process, correctly and in order.
• Sockets are used to pass data from the network to the process and vice versa.
1) Multiplexing
At the sender, the transport layer gathers data from different sockets and passes segments down to the network layer; this is called multiplexing.
2) Demultiplexing
At the receiver, the transport layer delivers the data in each arriving segment to the correct socket; this is called demultiplexing.
• In Figure 2.1, the transport layer in the middle host must demultiplex segments arriving from the network layer below to the appropriate process above.
At the destination host, the transport layer receives segments from the network layer just below.
The transport layer has the responsibility of delivering the data in these segments to the appropriate
application process running in the host.
A process can have one or more sockets, doors through which data passes from the network to the
process and through which data passes from the process to the network.
The transport layer in the receiving host does not actually deliver data directly to a process, but
instead to an intermediary socket.
Because at any given time there can be more than one socket in the receiving host, each socket has
a unique identifier.
Each transport-layer segment has a set of header fields that help the receiver deliver the data to the appropriate process socket.
At the receiving end, the transport layer examines these fields to identify the receiving socket and
then directs the segment to that socket. This job of delivering the data in a transport-layer segment
to the correct socket is called demultiplexing.
The job of gathering data chunks at the source host from different sockets, encapsulating each data
chunk with header information to create segments, and passing the segments to the network layer
is called multiplexing.
Faiz Aman, Asst. Professor, Dept. of CS&E, BCE, Shravanabelagola
Endpoint Identification
• Each segment must include 2 header-fields to identify the socket (Figure 2.2):
1) Source-port-number and
2) Destination-port-number.
• The port-numbers ranging from 0 to 1023 are called well-known port-numbers and are reserved for use by well-known application protocols such as HTTP (port 80) and FTP (port 21).
• Suppose a process on Host-A (port 19157) wants to send data to a process on Host-B (port 46428). The transport layer at A
→ creates a segment containing source-port 19157, destination-port 46428 & data and
→ then passes the resulting segment to the network-layer.
• At the receiver B, the transport-layer examines the destination-port-no (46428) and delivers the segment to the socket identified by port 46428.
Source-port-no from Host-A is used at Host-B as a "return address" i.e. when B wants to send a segment back to A.
Connection Oriented Multiplexing and Demultiplexing
• A TCP socket is identified by a four-tuple:
1) Source IP address
2) Source-port-no
3) Destination IP address &
4) Destination-port-no.
• Each TCP socket is
→ attached to a process and
→ identified by its own four-tuple.
• When a segment arrives at the host, all 4 fields are used to direct the segment to the appropriate socket.
• When clients (ex: browsers) send segments to the server, all segments will have destination-port 80.
• The server distinguishes the segments from the different clients using the two-tuple: source IP address and source-port-no.
• The server can use either i) persistent HTTP or ii) non-persistent HTTP
i) Persistent HTTP
Throughout the duration of the persistent connection, the client and server exchange HTTP messages via the same server socket.
ii) Non-persistent HTTP
A new TCP connection is created and closed for every request/response, so a new socket is created and later closed for every request/response.
Figure 2.5: Two clients, using the same destination-port-no (80) to communicate with the
same Web-server application
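The four-tuple demultiplexing described above can be sketched as a table lookup. The socket table, `register` helper, and socket identifiers below are hypothetical illustrations of the idea, not a real kernel API:

```python
# Sketch of connection-oriented demultiplexing: the receiving host keys its
# socket table on the full four-tuple, so two clients hitting the same
# server port (80) still map to distinct sockets.
socket_table = {}

def register(src_ip, src_port, dst_ip, dst_port, sock_id):
    socket_table[(src_ip, src_port, dst_ip, dst_port)] = sock_id

def demultiplex(segment):
    """Direct an arriving segment to the socket its four-tuple identifies."""
    key = (segment["src_ip"], segment["src_port"],
           segment["dst_ip"], segment["dst_port"])
    return socket_table.get(key)  # None if no matching connection

# Two clients, same destination port 80, distinct sockets:
register("10.0.0.1", 26145, "192.168.0.9", 80, "sock_A")
register("10.0.0.2", 26145, "192.168.0.9", 80, "sock_B")

seg = {"src_ip": "10.0.0.2", "src_port": 26145,
       "dst_ip": "192.168.0.9", "dst_port": 80}
print(demultiplex(seg))  # → sock_B
```

Note that the two clients may even use the same source-port-no (as in Figure 2.5); the differing source IP addresses keep the four-tuples distinct.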
Connectionless Transport: UDP
Unreliable service means UDP doesn't guarantee data will arrive to the destination-process.
Connectionless means there is no handshaking b/w sender & receiver before sending data.
• UDP takes messages from the application process, attaches source and destination port-number fields for the multiplexing/demultiplexing service, adds two other small fields (length and checksum), and passes the resulting segment to the network layer.
• The network layer encapsulates the transport-layer segment into an IP datagram and then makes a best-effort attempt to deliver the segment to the receiving host.
1) Finer Application-Level Control over what Data is Sent, and when
In TCP, a congestion-control mechanism throttles the sender when the n/w is congested; UDP imposes no such limit on the sending rate.
2) No Connection Establishment
TCP uses a three-way handshake before it starts to transfer data; UDP just immediately passes the data without any formal preliminaries.
• Table 2.1: Popular Internet applications and their underlying transport protocols
UDP Checksum
The checksum is used to determine whether bits within the UDP segment have been altered as the segment moved from source to destination.
Step 1: Add all 16-bit words of the segment using 1s-complement addition; if a carry comes out of the most significant bit, wrap it around and add it back to the sum.
Step 2: Take the 1s complement of the final sum; the result is the checksum placed in the segment.
At the receiver, all 16-bit words (including the checksum) are added; if no errors occurred, the sum is all 1s (0xFFFF).
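The two steps above can be sketched directly; this is a minimal illustration of the Internet checksum over a raw byte string (real UDP also covers a pseudo-header, which is omitted here):

```python
def internet_checksum(data: bytes) -> int:
    """1s-complement sum of 16-bit words with wraparound carry,
    then 1s-complemented; returned as a 16-bit integer."""
    if len(data) % 2:                  # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
    return ~total & 0xFFFF

# Sender computes the checksum over the segment's 16-bit words:
words = b"\x06\x66\x0f\x0f"
csum = internet_checksum(words)
print(hex(csum))  # → 0xea8a

# Receiver check: words + checksum must sum to all 1s (0xFFFF):
print(hex((0x0666 + 0x0F0F + csum) & 0xFFFF))  # → 0xffff
```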
The sending side of the data transfer protocol will be invoked from above by a call to rdt_send(). It
will pass the data to be delivered to the upper layer at the receiving side.
On the receiving side, rdt_rcv() will be called when a packet arrives from the receiving side of the
channel.
When the rdt protocol wants to deliver data to the upper layer, it will do so by calling
deliver_data().
Both the send and receive sides of rdt send packets to the other side by a call to udt_send().
Here all packet flow is from the sender to receiver; with a perfectly reliable channel there is no
need for the receiver side to provide any feedback to the sender since nothing can go wrong.
Also we have assumed that the receiver is able to receive data as fast as the sender happens to
send data. Thus, there is no need for the receiver to ask the sender to slow down.
i) The arrows indicate the transition of the protocol from one state to another.
ii) The event causing the transition is shown above the horizontal line labelling the transition.
iii) The action taken when the event occurs is shown below the horizontal line.
• On the sending side, rdt
→ accepts data from the upper layer via the rdt_send(data) event
→ creates a packet containing the data (via the action make_pkt(data)) and
→ sends the packet into the channel.
• On the receiving side, rdt
→ receives a packet from the underlying channel via the rdt_rcv(packet) event
→ removes the data from the packet (via the action extract(packet, data)) and
→ passes the data up to the upper layer (via the action deliver_data(data)).
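The rdt1.0 events and actions above can be sketched as plain functions; the list-based "channel" and the `delivered` collector are illustrative stand-ins, since with a perfectly reliable channel no feedback is needed:

```python
# Minimal sketch of rdt1.0 over a perfectly reliable channel. make_pkt,
# extract, deliver_data and udt_send mirror the abstract FSM actions.
channel, delivered = [], []

def make_pkt(data):
    return {"data": data}

def udt_send(pkt):
    channel.append(pkt)        # hand the packet to the (perfect) channel

def rdt_send(data):            # sender FSM: one state, one event
    udt_send(make_pkt(data))

def extract(packet):
    return packet["data"]

def deliver_data(data):
    delivered.append(data)

def rdt_rcv(packet):           # receiver FSM: one state, one event
    deliver_data(extract(packet))

rdt_send("hello")
rdt_rcv(channel.pop(0))
print(delivered)  # → ['hello']
```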
Error-correction techniques allow the receiver to detect and correct packet bit-errors.
1) Receiver Feedback
Since the sender and receiver are typically executing on different end-systems, the only way for the sender to learn about the status of the receiver is for the receiver to provide explicit feedback (ACK or NAK) to the sender.
2) Retransmission
A packet that is received in error at the receiver will be retransmitted by the sender.
1) In one state, the protocol is waiting for data to be passed down from the upper layer.
2) In other state, the protocol is waiting for an ACK or a NAK from the receiver.
i) If an ACK is received, the sender
→ knows that the most recently transmitted packet has been received correctly and
→ returns to the state of waiting for data from the upper layer.
ii) If a NAK is received, the protocol resends the last packet and waits for an ACK or NAK to be returned by the receiver.
• On packet arrival, the receiver replies with either an ACK or a NAK, depending on whether the received packet is corrupted.
If an ACK or NAK is corrupted, the sender cannot know whether the receiver has correctly
received the data or not.
• Solution: The sender resends the current data packet when it receives garbled ACK or NAK packet.
• A 1-bit sequence-number allows the receiver to know whether the sender is sending a retransmission of the previous packet or a new packet.
• Figure 2.10 and 2.11 shows the FSM description for rdt2.1.
i) When an out-of-order packet is received, the receiver sends a positive acknowledgment (ACK).
ii) When a corrupted packet is received, the receiver sends a negative acknowledgment (NAK).
• Consider data transfer over an unreliable channel in which packet loss may occur.
• Solution: The sender starts a countdown timer each time a packet is sent, and retransmits the packet if an ACK is not received before the timer expires.
• The timer must interrupt the sender after a given amount of time has expired.
• Figure 2.14 shows the sender FSM for rdt3.0, a protocol that reliably transfers data over a channel that can corrupt or lose packets.
Pipelining has the following consequences:
1) The range of sequence-numbers must be increased, since each in-transit packet must have a unique sequence-number.
2) The sender and receiver may have to buffer more than one packet.
Go-Back-N (GBN)
In a Go-Back-N (GBN) protocol, the sender is allowed to transmit multiple packets (when
available) without waiting for an acknowledgment, but is constrained to have no more than some
maximum allowable number, N, of unacknowledged packets in the pipeline.
Let base be the sequence number of the oldest unacknowledged packet and nextseqnum the smallest unused sequence number (that is, the sequence number of the next packet to be sent).
Sequence numbers in the interval [0,base-1] correspond to packets that have already been
transmitted and acknowledged.
The interval [base,nextseqnum-1] corresponds to packets that have been sent but not yet acknowledged.
Sequence numbers in the interval [nextseqnum,base+N-1] can be used for packets that can be sent
immediately, should data arrive from the upper layer.
Sequence numbers greater than or equal to base+N cannot be used until an unacknowledged packet
currently in the pipeline has been acknowledged.
The range of permissible sequence numbers for transmitted but not yet acknowledged packets can
be viewed as a window of size N over the range of sequence numbers. N is often referred to as the
window size and the GBN protocol itself as a sliding-window protocol.
If k is the number of bits in the packet sequence-number field, the range of sequence numbers is thus [0, 2^k − 1]. With a finite range of sequence numbers, all arithmetic involving sequence numbers must then be done using modulo-2^k arithmetic.
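The modulo-2^k window test can be written in one line; the value of k below is an arbitrary choice for illustration:

```python
K = 3                    # bits in the sequence-number field (hypothetical)
SEQ_SPACE = 2 ** K       # sequence numbers 0 .. 2^k - 1

def in_window(seq, base, N):
    """True if seq falls in [base, base+N-1], computed modulo 2^k."""
    return (seq - base) % SEQ_SPACE < N

# A window of size N=4 with base=6 wraps around: it covers 6, 7, 0, 1.
assert in_window(7, base=6, N=4)
assert in_window(1, base=6, N=4)
assert not in_window(2, base=6, N=4)
```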
In our GBN protocol, the receiver discards out-of-order packets, i.e. GBN discards a correctly received but out-of-order packet.
Suppose now that packet n is expected, but packet n + 1 arrives. Because data must be delivered in
order, the receiver could buffer (save) packet n + 1 and then deliver this packet to the upper layer after
it had later received and delivered packet n. However, if packet n is lost, both it and packet n + 1 will
eventually be retransmitted as a result of the GBN retransmission rule at the sender. Thus, the receiver
can simply discard packet n + 1.
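The receiver rule just described (deliver only in-order packets, discard everything else, always re-ACK cumulatively) can be sketched as follows; the (seq, data) packet format is a simplification for illustration:

```python
# Sketch of the GBN receiver: deliver only in-order packets, silently
# discard out-of-order ones, and always send a cumulative ACK for the
# last in-order sequence number received.
def gbn_receiver(packets, seq_space=8):
    expected, delivered, acks = 0, [], []
    for seq, data in packets:
        if seq == expected:                       # in order: deliver, advance
            delivered.append(data)
            expected = (expected + 1) % seq_space
        # out of order: discard the packet, just re-ACK below
        acks.append((expected - 1) % seq_space)   # cumulative ACK
    return delivered, acks

# Packet 2 is lost, so packets 3 and 4 are discarded and re-ACKed with ACK1:
delivered, acks = gbn_receiver([(0, "a"), (1, "b"), (3, "d"), (4, "e")])
print(delivered, acks)  # → ['a', 'b'] [0, 1, 1, 1]
```

The repeated ACK1 values are exactly what forces the sender to go back and resend everything from packet 2 onward.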
Operation of the GBN Protocol
• Figure 2.20 shows the operation of the GBN protocol for the case of a window-size of four packets.
• The sender sends packets 0 through 3 but then must wait for one or more of these packets to be acknowledged before proceeding.
• As each successive ACK (for ex, ACK0 and ACK1) is received, the window slides forward
and the sender transmits one new packet (pkt4 and pkt5, respectively).
• At the receiver, packet 2 is lost, and thus packets 3, 4, and 5 are found to be out of order and are discarded.
When the window-size and bandwidth-delay product are both large, many packets can be in
the pipeline.
Thus, a single packet error results in retransmission of a large number of packets.
Figure 2.21: Selective-repeat (SR) sender and receiver views of sequence-number space
• The sender retransmits only those packets that it suspects were erroneous.
• A window-size N is used to limit the no. of outstanding, unacknowledged packets in the pipeline.
SR Sender
• The various actions taken by the SR sender are as follows:
1) Data received from above: the sender checks the next available sequence-number; if it is within the window, the data is packetized and sent.
2) Timeout: Each packet must have its own logical timer. This is because only a single packet is retransmitted on timeout.
The various actions taken by the SR receiver are as follows:
1) Packet with sequence-number in [rcv_base, rcv_base+N-1] is correctly received. In this case, the
→ received packet falls within the receiver's window and
→ a selective ACK packet is returned to the sender.
If the packet was not previously received, it is buffered.
If this packet has a sequence-number equal to rcv_base, then this packet, and any previously buffered and consecutively numbered packets, are delivered to the upper layer.
The receive-window is then moved forward by the no. of packets delivered to the upper layer.
2) Packet with sequence-number in [rcv_base-N, rcv_base-1] is correctly received. In this case, an ACK must be generated, even though this is a packet that the receiver has previously acknowledged.
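The SR receiver's buffer-and-slide behavior can be sketched as below; the dictionary buffer and `state` record are illustrative, and sequence-number wraparound is omitted to keep the sketch short:

```python
# Sketch of the SR receiver: each correctly received in-window packet is
# individually ACKed and buffered; when the packet at rcv_base arrives,
# it and any consecutively buffered successors are delivered and the
# window slides forward.
def sr_receive(state, seq, data, N=4):
    buf, base = state["buf"], state["base"]
    if base <= seq < base + N:          # in window: buffer + selective ACK
        buf.setdefault(seq, data)
        while base in buf:              # deliver the consecutive run
            state["delivered"].append(buf.pop(base))
            base += 1
        state["base"] = base
    return seq                          # ACK carries the packet's own number

state = {"buf": {}, "base": 0, "delivered": []}
sr_receive(state, 1, "b")               # buffered, not yet deliverable
sr_receive(state, 0, "a")               # fills the gap: deliver a, then b
print(state["delivered"], state["base"])  # → ['a', 'b'] 2
```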
Table 2.2: Summary of reliable data transfer mechanisms and their use
Checksum: Used to detect bit errors in a transmitted packet.
Timer: Used to timeout/retransmit a packet because the packet (or its ACK) was lost. Because timeouts can occur when a packet is delayed but not lost, duplicate copies of a packet may be received by a receiver.
Sequence-number: Used for sequential numbering of packets of data flowing from sender to receiver. Gaps in the sequence-numbers of received packets allow the receiver to detect a lost packet. Packets with duplicate sequence-numbers allow the receiver to detect duplicate copies of a packet.
Acknowledgment: Used by the receiver to tell the sender that a packet or set of packets has been received correctly. Acknowledgments will typically carry the sequence-number of the packet or packets being acknowledged. Acknowledgments may be individual or cumulative, depending on the protocol.
Negative acknowledgment: Used by the receiver to tell the sender that a packet has not been received correctly. Negative acknowledgments will typically carry the sequence-number of the packet that was not received correctly.
Window, pipelining: The sender may be restricted to sending only packets with sequence-numbers that fall within a given range. By allowing multiple packets to be transmitted but not yet acknowledged, sender utilization can be increased over a stop-and-wait mode of operation.
• TCP guarantees that data sent by one process will arrive at the destination-process correctly.
• TCP provides flow-control, error-control and congestion-control.
1) Connection Oriented
TCP is said to be connection-oriented because the 2 application-processes must first establish a connection with each other before they begin communication.
Both application-processes will initialize many state-variables associated with the connection.
The routers do not maintain any state-variables associated with the connection.
Both application-processes can transmit and receive the data at the same time, i.e. the connection is full-duplex.
4) Point-to-Point
A TCP connection is always between a single sender and a single receiver.
i) The client first sends a special TCP segment (SYN) to the server.
ii) The server responds with a second special segment (SYNACK).
iii) Finally, the client responds again with a third segment containing payload (or data).
MSS limits the maximum amount of data that can be placed in a segment.
TCP directs this data to the connection’s send buffer, which is one of the buffers that is set aside
during the initial three-way handshake.
From time to time, TCP will grab chunks of data from the send buffer and pass the data to the
network layer.
The maximum amount of data that can be grabbed and placed in a segment is limited by the
maximum segment size (MSS).
TCP Segment Structure
• The segment consists of header-fields and a data-field.
• When TCP sends a large file, it breaks the file into chunks of size MSS.
3) Header Length
This field specifies the length of the TCP header.
4) Flag Field
i) ACK
¤ This bit indicates that the value of the acknowledgment field is valid.
ii) RST, SYN & FIN
¤ These bits are used for connection setup and teardown.
iii) PSH
¤ This bit indicates the sender has invoked the push operation.
iv) URG
¤ This bit indicates the segment contains urgent-data.
5) Receive Window
This field indicates the number of bytes that the receiver is willing to accept; it is used for flow-control.
7) Urgent Data Pointer
This field indicates the location of the last byte of the urgent data.
8) Options
This field is used when a sender & receiver negotiate the MSS for use in high-speed networks.
Sequence Numbers and Acknowledgment Numbers
Sequence Numbers
• The sequence-number is used for sequential numbering of packets of data flowing from
sender to receiver.
• Applications:
1) Gaps in the sequence-numbers of received packets allow the receiver to detect a lost packet.
2) Packets with duplicate sequence-numbers allow the receiver to detect duplicate copies of a packet.
Figure 2.25: Sequence and acknowledgment-numbers for a simple Telnet application over TCP
• The acknowledgment-number that Host-B puts in its segment is the sequence-number of the next byte Host-B is expecting from Host-A (i.e. ACK=43).
What does a host do when it receives out-of-order segments? There are two choices:
1) The receiver immediately discards out-of-order segments.
2) The receiver keeps the out-of-order bytes and waits for the missing bytes to fill in the gaps.
• As shown in Figure 2.27, suppose the client initiates a Telnet session with the server.
The first-segment is sent from the client to the server, containing:
→ letter 'C'
→ sequence-number 42
→ acknowledgment-number 79
The second-segment is sent from the server to the client. It serves a dual purpose: it acknowledges the data the client has sent, and it echoes back the letter 'C' as server-to-client data.
This acknowledgment is said to be piggybacked on the server-to-client data-segment.
Figure 2.27: Sequence and acknowledgment-numbers for a simple Telnet application over TCP
• Clearly, the timeout should be larger than the round-trip-time (RTT) of the connection.
• SampleRTT is defined as "the amount of time b/w when the segment is sent and when an acknowledgment is received."
• Obviously, the SampleRTT values will fluctuate from segment to segment due to congestion. Hence, TCP maintains a weighted average of the SampleRTT values:
EstimatedRTT = (1 − α) · EstimatedRTT + α · SampleRTT (recommended α = 0.125)
• DevRTT is defined as an estimate of how much SampleRTT typically deviates from EstimatedRTT:
DevRTT = (1 − β) · DevRTT + β · |SampleRTT − EstimatedRTT| (recommended β = 0.25)
• If the SampleRTT values have little fluctuation, then DevRTT will be small.
If the SampleRTT values have huge fluctuation, then DevRTT will be large.
Setting and Managing the Retransmission Timeout Interval
TimeoutInterval = EstimatedRTT + 4 · DevRTT
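The EWMA-based RTT estimation and timeout computation can be sketched as below, using the recommended constants α = 0.125 and β = 0.25; the starting values and sample sequence are arbitrary illustrations:

```python
# EWMA round-trip-time estimation and retransmission-timeout computation.
ALPHA, BETA = 0.125, 0.25

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    timeout = estimated_rtt + 4 * dev_rtt   # safety margin of 4 * DevRTT
    return estimated_rtt, dev_rtt, timeout

est, dev = 100.0, 5.0                       # ms, hypothetical starting values
for sample in [110, 95, 120, 105]:          # fluctuating SampleRTT values
    est, dev, timeout = update_rtt(est, dev, sample)
print(round(est, 1), round(timeout, 1))
```

Because the timeout is EstimatedRTT plus four deviations, bursty SampleRTT values automatically widen the timeout rather than causing spurious retransmissions.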
IP does not guarantee in-order delivery of data. IP does not guarantee the integrity of the data.
• TCP creates a reliable data-transfer-service on top of IP's unreliable-service.
→ data-stream is uncorrupted
→ data-stream is without duplication and
→ data-stream is in sequence.
A Few Interesting Scenarios
First Scenario
• As shown in Figure 2.28, Host-A sends one segment to Host-B.
• In this case, the timeout event occurs, and Host-A retransmits the same segment.
• When Host-B receives retransmission, it observes that the sequence-no has already been received.
Second Scenario
• As shown in Figure 2.29, Host-A sends two segments back-to-back.
• When the timeout event occurs, Host-A resends the first-segment and restarts the timer.
• The second-segment will not be retransmitted as long as the ACK for the second-segment arrives before the new timeout.
Third Scenario
• Host-A again sends two segments back-to-back, but just before the timeout event, Host-A receives an acknowledgment-no 120.
• Therefore, Host-A knows that Host-B has received all the bytes up to 119, so Host-A retransmits neither segment.
• The sender can often detect packet-loss well before the timeout occurs by noting duplicate ACKs.
• A duplicate ACK is an ACK that re-acknowledges a segment for which the sender has already received an earlier acknowledgment (Figure 2.31).
Event: Arrival of out-of-order segment with higher-than-expected sequence-number; gap detected.
Action: Immediately send duplicate ACK, indicating sequence-number of next expected byte.
Event: Arrival of segment that partially or completely fills in gap in received-data.
Action: Immediately send ACK.
Figure 2.31: Fast retransmit: retransmitting the missing segment before the segment‘s timer expires
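The duplicate-ACK counting that triggers fast retransmit can be sketched as follows; the ACK stream is a hypothetical example:

```python
# Sketch of fast retransmit: count duplicate ACKs for the oldest
# unacknowledged byte; three duplicates trigger a retransmission
# before the retransmission timer expires.
def process_acks(acks):
    last_ack, dup_count, retransmitted = None, 0, []
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:                 # triple duplicate ACK
                retransmitted.append(ack)      # resend segment starting here
        else:
            last_ack, dup_count = ack, 0
    return retransmitted

# ACK 100 arrives four times (one original + three duplicates),
# so the segment starting at byte 100 is fast-retransmitted:
print(process_acks([100, 100, 100, 100, 200]))  # → [100]
```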
Flow Control
• TCP provides a flow-control service to its applications.
• A flow-control service eliminates the possibility of the sender overflowing the receiver-buffer.
The send buffer holds three kinds of data:
1) Acknowledged data
2) Sent but not-yet-acknowledged data
3) Data to be transmitted
• Send buffer maintains 2 pointers: LastByteAcked and LastByteSent. The relation b/w these two is:
LastByteAcked <= LastByteSent
Receive Buffer
• Receiver maintains receive buffer to hold data even if it arrives out-of-order.
• Receive buffer maintains 2 pointers: LastByteRead and LastByteRcvd. The relation b/w these two is:
LastByteRcvd − LastByteRead <= RcvBuffer
• Receiver throttles the sender by advertising a window (rwnd) equal to the amount of free space in the receive buffer:
rwnd = RcvBuffer − (LastByteRcvd − LastByteRead)
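The receiver-side rwnd computation and the sender-side check can be sketched together; the buffer size and byte counts are hypothetical numbers:

```python
# Flow-control sketch: the receiver advertises rwnd, the free space left
# in its receive buffer, and the sender keeps its unacknowledged data
# below that value.
def advertise_rwnd(rcv_buffer, last_byte_rcvd, last_byte_read):
    assert last_byte_rcvd - last_byte_read <= rcv_buffer
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def sender_may_send(last_byte_sent, last_byte_acked, rwnd):
    return last_byte_sent - last_byte_acked < rwnd

rwnd = advertise_rwnd(rcv_buffer=4096, last_byte_rcvd=3000, last_byte_read=1000)
print(rwnd)                                   # → 2096
print(sender_may_send(5000, 3500, rwnd))      # → True
```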
Connection Release
• Either of the two processes in a connection can end the connection.
Figure 2.35: Congestion scenario 1: Two connections sharing a single hop with infinite buffers
Let,
Sending-rate of Host-A = λin bytes/sec
Outgoing Link‘s capacity = R
Packets from Hosts A and B pass through a router and over a shared outgoing link.
The router has buffers.
The buffers store incoming packets when the packet-arrival rate exceeds the outgoing link's capacity.
Figure 2.36: Congestion scenario 1: Throughput and delay as a function of host sending-rate
• Figure 2.36 plots the performance of Host-A's connection: throughput and delay as a function of its sending-rate.
• For a sending-rate b/w 0 and R/2, the throughput at the receiver equals the sender‘s sending-
rate. However, for a sending-rate above R/2, the throughput at the receiver is only R/2.
(Figure 2.36a)
• Conclusion: The link cannot deliver packets to a receiver at a steady-state rate that exceeds R/2.
As the sending-rate approaches R/2, the average delay becomes larger and larger.
Conclusion: Large queuing delays are experienced as the packet arrival rate nears the link capacity.
If a packet is dropped at the router, the sender will eventually retransmit it.
Let,
Application's sending-rate of Host-A = λin bytes/sec
Transport-layer's sending-rate of Host-A = λ'in bytes/sec (also called offered-load to the network)
Outgoing link's capacity = R
Figure 2.37: Scenario 2: Two hosts (with retransmissions) and a router with finite buffers
• In the ideal case in which
→ no loss occurs,
→ λ'in will be equal to λin, and
→ the throughput of the connection will be equal to λin.
• The sender retransmits only when a packet is lost.
• The rate at which data are delivered to the receiver application is R/3.
• The sender must perform retransmissions to compensate for lost packets due to buffer overflow.
• Both the original data packet and the retransmission may reach the receiver.
• The receiver needs one copy of this packet and will discard the retransmission.
The work done by the router in forwarding the retransmitted copy of the original packet was wasted
Scenario 3: Four Senders, Routers with Finite Buffers, and Multihop Paths
Figure 2.39: Four senders, routers with finite buffers, and multihop paths
• Consider the connection from Host-A to Host C, passing through routers R1 and R2.
The A–C traffic arriving at router R2 can have an arrival rate of at most R (the capacity of the R1–R2 link), regardless of the value of λ'in.
If λ'in is extremely large for all connections, then the arrival rate of B–D traffic at R2 can be much larger than that of the A–C traffic.
Thus, the amount of A–C traffic that successfully gets through R2 becomes smaller and smaller as the offered-load from B–D gets larger and larger.
In the limit, as the offered-load approaches infinity, an empty buffer at R2 is immediately filled by a B–D packet, and the throughput of the A–C connection at R2 goes to zero.
When a packet is dropped along a path, the transmission capacity ends up having been wasted.
Congestion information is fed back from the network to the sender in one of two ways:
i) Direct feedback may be sent from a network-router to the sender (Figure 2.40).
• ATM (Asynchronous Transfer Mode) protocol uses network-assisted approach for congestion-control.
i) When the network is underloaded, ABR has to take advantage of the spare available bandwidth.
ii) When the network is congested, ABR should reduce its transmission-rate.
• Data-cells are transmitted from a source to a destination through a series of intermediate switches.
• When an RM-cell arrives at a destination, the cell will be turned around and sent back to the sender (possibly with its contents modified).
The destination must check the EFCI bit in all received data-cells.
If the most recently received data-cell has the EFCI bit set to 1, then the destination sets the CI (congestion indication) bit to 1 in the returned RM-cell.
A switch along the path can lower the value of the ER (explicit rate) field in a passing RM-cell.
In this manner, the ER field will be set to the minimum supportable rate of all switches on the path.
Each sender limits the rate at which it sends traffic into its connection as a function of perceived network congestion:
i) If the sender perceives little congestion, then the sender increases its data-rate.
ii) If the sender perceives that there is congestion, then the sender reduces its data-rate.
1) How does a sender limit the rate at which it sends traffic into its connection?
• The sender keeps track of an additional variable called the congestion-window (cwnd).
• The amount of unacknowledged-data at a sender will not exceed the minimum of cwnd & rwnd, that is:
LastByteSent − LastByteAcked <= min{cwnd, rwnd}
A loss event at a sender is defined as the occurrence of either a
→ timeout or
→ receipt of 3 duplicate ACKs from the receiver.
Due to excessive congestion, a router-buffer along the path overflows, causing a datagram to be dropped. The dropped datagram results in a loss event at the sender.
The sender considers the loss event as an indication of congestion on the path.
TCP
→ will take the arrival of these acknowledgments as an indication that all is well and
→ will use acknowledgments to increase the window-size (& hence data-rate).
TCP is said to be self-clocking because it uses acknowledgments to trigger (or clock) its increase in congestion-window-size.
1) Slow start
2) Congestion avoidance
3) Fast recovery.
Slow Start
• When a TCP connection begins, the value of cwnd is initialized to 1 MSS.
• TCP doubles the number of packets sent every RTT on successful transmission.
• Thus, the TCP data-rate starts slow but grows exponentially during the slow start phase.
Congestion Avoidance
• On entry to congestion-avoidance state, the value of cwnd is approximately half its previous value.
• The sender increases cwnd by MSS·(MSS/cwnd) bytes whenever a new acknowledgment arrives, i.e. cwnd grows by roughly one MSS every RTT.
• When a loss event (triple duplicate-ACK) occurs,
→ the value of ssthresh is set to half the value of cwnd and
→ the value of cwnd is halved.
Fast Recovery
• The value of cwnd is increased by 1 MSS for every duplicate ACK received.
• When an ACK arrives for the missing segment, the congestion-avoidance state is entered.
TCP Tahoe
Figure 2.44 illustrates the evolution of TCP's congestion-window for both Reno and Tahoe.
• AIMD congestion-control gives rise to the "saw tooth" behavior shown in Figure 2.45.
• TCP
→ linearly increases the congestion-window-size until a triple duplicate-ACK event occurs and
→ then decreases the congestion-window-size by a factor of 2.
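The saw-tooth evolution of cwnd can be simulated in a few lines; cwnd is tracked in MSS units, and the RTT count, initial ssthresh, and loss timing below are arbitrary illustrative choices (TCP Reno-style behaviour, slightly simplified):

```python
# Simulation of the cwnd "saw tooth": exponential growth during slow
# start until ssthresh, additive increase (one MSS per RTT) afterwards,
# and multiplicative decrease on a triple-duplicate-ACK loss event.
def evolve_cwnd(rtts, loss_rtts, ssthresh=8.0):
    cwnd, trace = 1.0, []
    for t in range(rtts):
        trace.append(cwnd)
        if t in loss_rtts:            # loss event: halve, enter avoidance
            ssthresh = cwnd / 2
            cwnd = ssthresh
        elif cwnd < ssthresh:         # slow start: double every RTT
            cwnd *= 2
        else:                         # congestion avoidance: +1 MSS per RTT
            cwnd += 1
    return trace

print(evolve_cwnd(8, loss_rtts={5}))
# → [1.0, 2.0, 4.0, 8.0, 9.0, 10.0, 5.0, 6.0]
```

The trace shows all three phases: doubling (1, 2, 4, 8), linear growth (9, 10), and the halving at the loss event followed by renewed linear growth (5, 6).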
Fairness
• Congestion-control mechanism is fair if each connection gets equal share of the link-bandwidth.
• As shown in Figure 2.46, consider 2 TCP connections sharing a single link with transmission-rate R.
• Assume the two connections have the same MSS and RTT.
• Figure 2.47 plots the throughput realized by the two TCP connections.
• If the two connections share the link bandwidth equally, then the realized throughput falls along the 45-degree arrow starting from the origin.
• Many multimedia-applications (such as Internet phone) often do not run over TCP. Instead, running over UDP,
→ applications can pump their audio into the network at a constant rate and
→ occasionally lose packets.
2.7.2.2 Fairness and Parallel TCP Connections
• Web browsers use multiple parallel-connections to transfer the multiple objects within a Webpage.
• Thus, the application gets a larger fraction of the bandwidth in a congested link.
• Because Web-traffic is so pervasive in the Internet, multiple parallel-connections are common nowadays.