A Simulation-Based Study of TCP Dynamics Over HFC Networks
Omar Elloumi, Nada Golmie, Hossam Afifi, David Su
Abstract

New broadband access technologies such as hybrid fiber coaxial (HFC) are likely to provide fast and cost-effective support to a variety of applications including video on demand (VoD), interactive computer games, and Internet-type applications such as Web browsing, ftp, e-mail, and telephony. Since most of these applications use TCP as the transport layer protocol, the key to their efficiency largely depends on TCP protocol performance.
We investigate the performance of TCP in terms of effective throughput in an HFC network environment using different load conditions and network buffer sizes. We find that TCP experiences low throughput as a result of the well-known problem of ACK compression. An algorithm that controls ACK spacing is introduced to improve TCP performance. © 2000 Elsevier Science B.V. All rights reserved.
[...]width allocation [8,9]. Also, some preliminary work has been presented on improving the ABR service over HFC in [10]. But so far, little work has been done in studying the details and evaluating the performance of the TCP protocol in an HFC network environment.
The performance of TCP in networks with a slow ACK channel is studied in [3,14]. In [14], the authors focus on the effect of asymmetric networks on TCP performance and show, by means of analysis and simulation, the performance degradation of TCP Reno due to frequent timeouts. However, this study does not consider a specific MAC protocol in the reverse path (upstream channel). This model is appropriate for ADSL modems, or configurations that use a telephone line or cellular phone medium in the reverse channel, where the only delay is the sum of the queuing delay and the propagation delay. However, in the case of multiple access media, such as HFC, it is important to study the effect of the MAC protocol and bandwidth reservation on TCP performance. Finally, a study on the effect of random losses on TCP performance in an HFC environment [2] proposes some solutions to improve the performance but again does not take into account the effect of the MAC layer.
We propose an algorithm that improves TCP performance under different offered loads and TCP data buffer sizes.
The rest of the paper is structured as follows. Section 2 presents the MAC model as specified in [6]. Sections 3 and 4 give background information on TCP and describe the simulation model, respectively. Section 5 presents TCP performance results. In Section 6 a new algorithm for requesting bandwidth for TCP ACK packets on the HFC upstream channel is described along with some performance results. Concluding remarks are presented in Section 7. Additional details on TCP dynamics are given in Appendix A.

2. HFC MAC protocol overview

The frame format of the MAC protocol defined in [6] is shown in Fig. 1. The upstream channel is divided into discrete basic time slots, called minislots. A variable number of minislots are grouped to form a MAC layer frame as shown in Fig. 1. The headend determines the frame format by setting the number of data slots (DS) and contention slots (CS) in each frame and sends this information to the stations on the downstream using a CS Allocation message. Several minislots can be grouped together in order to form a DS that carries a MAC packet data unit (MPDU), which is assumed to be an ATM cell plus the MAC layer overhead. In Fig. 1 four minislots carry one MPDU. The DS are explicitly allocated to a specific station by the headend using DS Grant messages sent on the downstream. CS fit into one minislot and are used by the stations to transmit requests for bandwidth. Since more than one station can transmit a request at the same time, CS are prone to collisions. The headend controls the initial access to the CS slots as well as manages the CRP by assigning a request queue (RQ) number to each CS.
The basic MAC operation is as follows. Upon the arrival of the first data packet, a station generates a request minislot data unit (RMDU) and waits for a CS Allocation message from the headend that reserves a group of CS with RQ 0 for newcomer transmission. The station randomly selects a CS in that group and transmits its RMDU. Since multiple stations may attempt to send their RMDUs in the same upstream CS, a collision may occur. A Feedback message is sent to the station after a round trip time (which is also equal to a frame length) informing it of the status of the CS used (note that a station has no means of knowing the status of its request since its transmitter and receiver are tuned to different frequencies).
In case of a successful request transmission (Feedback Successful), the station activates its data transmission state machine and exits the contention process. Subsequently a Data Grant message will be sent by the headend. There are two grant scheduling algorithms considered in this paper: first come first serve (FCFS) and round robin (RR). For FCFS, the HE grants each station the totality of the requested slots before giving any grants to other stations. For RR, the grants are distributed to stations in a round robin fashion. A station sends a cell and waits for all the other stations that have successfully transmitted requests to the HE to send their data. Once a station is assigned a DS to send its data it may use a special field in the MPDU to send other requests by piggybacking, thus bypassing the contention process.
In case of a collided CS, the feedback message contains a particular RQ number to be used for collision resolution (Feedback RQ). That is, the station needs to retransmit its request in a CS group with that RQ number. The CS groups are usually allocated in the order of decreasing RQ values. For each RQ value, the headend assigns a group of CS. A CS within the group is selected randomly in the range [0..2].

3. TCP protocol background information

Most of today's Internet applications use the TCP protocol as defined in [15]. In this section, we describe two basic concepts of TCP related to congestion avoidance and control.
TCP congestion avoidance and control mechanisms have significantly evolved in the past few years, although the protocol packet format and its state machine have remained unchanged. Most versions of TCP control mechanisms aim at improving the estimation of available network bandwidth and preventing timeouts in order to maintain stability and throughput.
The slow start algorithm was proposed by Jacobson [11] as a congestion avoidance and control algorithm for TCP after a congestion collapse of the Internet. This algorithm introduces a congestion window mechanism to control the number of bytes that the sender is able to transmit before waiting for an acknowledgment. For each received acknowledgment, two new segments are sent. When the window size reaches a threshold value, SSThreshold, the algorithm operates in congestion avoidance mode. The slow start is triggered at every retransmission timeout by setting SSThreshold to half the congestion window and the congestion window to one segment. In the congestion avoidance phase, the congestion window is increased by one segment every round trip time (RTT). Thus when the mechanism anticipates congestion, it increases the congestion window linearly rather than exponentially. The upper limit for this region is the value of the receiver's advertised window. If the transmitter receives three duplicate acknowledgments, SSThreshold is set to half of the preceding congestion window size, while the latter is set to one packet for TCP Tahoe and to half of the previous congestion window for TCP Reno. At this point the algorithm assumes that the packet is lost, and retransmits it before the timer expires. This algorithm is known as the fast-retransmit mechanism.
TCP Tahoe experiences low throughput when packets are lost because of the slow-start algorithm. TCP Reno performs better in the case of a single packet loss within one window of data because the congestion window is decreased by half rather than set to one. However, in the case of multiple packet losses, TCP Reno also experiences low throughput since it can easily be subject to timeouts leading to long idle periods.
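The window dynamics described above can be condensed into a few lines of code. The sketch below is a simplified illustration of slow start, congestion avoidance and fast retransmit as summarized in this section (and in [11,20]); it is not the code used in the simulations, and the byte-based arithmetic and variable names are ours.

# Simplified illustration of the TCP window rules described above; not the simulator code.

MSS = 1024  # one segment, matching the 1 KB MTU used later in the simulations

class CongestionControl:
    def __init__(self, ssthresh=8 * 1024, reno=True):
        self.cwnd = MSS            # slow start begins with one segment
        self.ssthresh = ssthresh   # SSThreshold
        self.reno = reno           # Reno halves cwnd on fast retransmit, Tahoe restarts from one segment
        self.dupacks = 0

    def on_ack(self):
        self.dupacks = 0
        if self.cwnd < self.ssthresh:
            self.cwnd += MSS                       # slow start: exponential growth per RTT
        else:
            self.cwnd += MSS * MSS // self.cwnd    # congestion avoidance: about one segment per RTT

    def on_dup_ack(self):
        self.dupacks += 1
        if self.dupacks == 3:                      # fast retransmit after three duplicate ACKs
            self.ssthresh = max(self.cwnd // 2, 2 * MSS)
            self.cwnd = self.ssthresh if self.reno else MSS

    def on_timeout(self):
        self.ssthresh = max(self.cwnd // 2, 2 * MSS)
        self.cwnd = MSS                            # both variants fall back to slow start

if __name__ == "__main__":
    cc = CongestionControl()
    for _ in range(5):
        cc.on_ack()
    print(cc.cwnd, cc.ssthresh)   # 6144 8192: still in slow start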
This is outlined in [5,14]. In [5], the author proposes the so-called ``New-Reno'' algorithm to avoid this problem.
The TCP self-clocking principle [11] estimates the bottleneck bandwidth by letting the sender rate exactly match the available bandwidth along the network path. Thus if the time to process packets at the receiver is constant, ACKs are spaced according to the bottleneck in the forward path. If the network is symmetric, the ACK spacing is preserved in the backward direction since the ACK packet size is much smaller than the data packet size and thus less likely to encounter congestion. When the sender transmits a packet for each arriving ACK, the packet sending rate matches the bottleneck service rate and this constitutes a ``self-clocking'' mechanism, an ``idealized state'' as mentioned in [13,18,21]. However, this mechanism fails in the case of delay variations in both the forward and backward directions. In particular, in the case of two-way TCP traffic, ACK spacing is not preserved due to the interaction between data packets and ACKs: ACKs accumulate behind large data packets and then leave the bottleneck with a smaller spacing than the spacing corresponding to their data packets (called ACK compression) [13,21]. Furthermore, ACK compression may lead to a rapid network queue build-up and a high packet loss percentage as shown in [21]. Even with infinite buffers, the network utilization is expected to drop considerably.
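A small numerical example makes the link between ACK spacing and the sending rate explicit. The toy fragment below is ours, not taken from the paper: it clocks one data packet out per arriving ACK and compares the resulting rate when the spacing imposed by a 6 Mbit/s forward bottleneck (the value used later in the simulation model) is preserved with the rate obtained when the same ACKs arrive compressed.

# Toy illustration of self-clocking and ACK compression (our example, not the paper's code).
# A sender that transmits one packet per arriving ACK sends at the rate at which ACKs
# arrive; compressed ACKs therefore clock out a burst faster than the bottleneck can drain.

PACKET_BITS = 8 * 1024          # 1 KB data packets
BOTTLENECK_BPS = 6_000_000      # assumed 6 Mbit/s forward bottleneck

def send_rate(ack_arrival_times):
    """Average rate clocked out by sending one packet per ACK (bits/s)."""
    span = ack_arrival_times[-1] - ack_arrival_times[0]
    return (len(ack_arrival_times) - 1) * PACKET_BITS / span

spacing = PACKET_BITS / BOTTLENECK_BPS                  # ACKs spaced at the bottleneck service time
preserved = [i * spacing for i in range(10)]
compressed = [i * 0.0001 for i in range(10)]            # the same ACKs squeezed to 0.1 ms apart

print("preserved spacing -> %.2f Mbit/s" % (send_rate(preserved) / 1e6))   # 6.00
print("compressed ACKs   -> %.2f Mbit/s" % (send_rate(compressed) / 1e6))  # 81.92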
4. Simulation model

In this section we provide a description of the simulation environment and parameters. We use the NIST ATM network simulator [7] to implement TCP Tahoe and TCP Reno as described in [20]. The simulator also features a MAC protocol for HFC networks as described in [6]. The network model considered is illustrated in Fig. 2. It consists of a hybrid ATM-HFC configuration where a TCP source in the ATM network sends packets to a TCP destination in the HFC network. We believe that this set-up reflects a typical usage in an HFC network environment since most applications in HFC networks serving residences, such as Web browsing (HTTP) or file downloads (FTP), suggest that the amount of data sent by the stations is much less than the amount of data received.
The network topology is chosen with one TCP connection for a better understanding of TCP dynamics. The link capacity between the TCP source and SW1 is set to 155 Mbits/s, while the link capacity between SW1 and SW2 is set to 6 Mbits/s to represent the bottleneck link. The propagation delay between SW1 and SW2 is set to 1.25 ms. SW1 and SW2 implement the early packet discard (EPD) strategy [16].
The TCP source is assumed to have an infinite number of packets to send. In order to stress contention on the upstream channel, we consider 200 stations with on-off sources sending data upstream. This constitutes the background traffic. Sources generate fixed size 48 byte packets (encapsulated in ATM cells) according to a Poisson distribution with a mean arrival rate of λ = (L × UR × 48)/(53 × N), where L is the percentage of the offered load, UR is the upstream channel rate and N is the number of stations.
Fig. 3 illustrates the protocol stack for a TCP connection end system. We use an AAL5 encapsulation of IP packets. After segmentation, cells are queued in a FIFO queue inside the MAC layer. In this paper, unlike what is described in [14], we assume that the FIFO queue inside the MAC layer is large enough to accommodate all incoming ACKs from a single TCP connection. This is a realistic assumption since for a 1 KB packet size and a 64 KB maximum congestion window, a maximum of 64 ACKs can accumulate in the FIFO queue (we believe that losses should not happen in a transmitter). Also, we assume that driver interface buffers can hold up to 50 IP packets of 1 KB each [19] (Table 1).
Simulation parameters for the MAC and TCP protocols are given in Tables 1 and 2, respectively.

Table 1
MAC model parameters

Simulation parameter                                  Value
Number of active stations                             200
Distance from nearest/farthest station to headend     25/200 km
Downstream data transmission rate                     30 Mbits/s
Upstream data transmission rate                       3 Mbits/s
Propagation delay                                     5 µs/km
Length of simulation run                              15 s
Length of run prior to gathering statistics           1 s
Guardband and preamble between transmissions          Duration of 5 bytes
Data slot size                                        64 bytes
Contention slot size                                  16 bytes
DS/CS size ratio                                      4:1
Cluster size                                          2.27 ms
Maximum request size                                  32
Headend processing delay                              1 ms

Table 2
TCP parameters

Simulation parameter                                  Value
Maximum transfer unit (MTU)                           1 KB
Timeout granularity                                   500 ms
Packet processing time                                100 µs
Congestion avoidance algorithm                        TCP Reno
Maximum congestion window size                        64 KB
Initial SSThreshold                                   8 KB

Fig. 3. TCP end node model.
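As a concrete reading of the background-traffic model above, the sketch below draws Poisson arrivals for a single station. It is our own illustration rather than the simulator's code: we interpret λ as a per-station packet rate obtained by dividing the station's share of the offered payload bit rate by the 48-byte (384-bit) packet size, which is one plausible reading of the expression; the simulator's actual units may differ.

# Hypothetical reading of the background-traffic formula (not the simulator code):
# lambda = (L x UR x 48)/(53 x N) is taken as a per-station payload bit rate and
# converted to a packet rate by dividing by the 48-byte packet size.
import random

def station_packet_rate(load_fraction, upstream_bps=3_000_000, n_stations=200):
    payload_bps = load_fraction * upstream_bps * 48.0 / 53.0 / n_stations
    return payload_bps / (48 * 8)          # packets per second for one station

def poisson_arrivals(rate_pps, duration_s, seed=1):
    """Arrival instants of a Poisson process over [0, duration_s]."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_pps)     # exponential interarrival times
        if t > duration_s:
            return arrivals
        arrivals.append(t)

if __name__ == "__main__":
    rate = station_packet_rate(0.30)       # 30% offered load with the Table 1 values
    print("%.2f packets/s per station, %d arrivals in a 15 s run"
          % (rate, len(poisson_arrivals(rate, 15.0))))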
Table 3
Bandwidth-RTT product

Percentage of offered load     Mean (KB)     Max (KB)     σ
20                             13.316        28.923       3.253
30                             15.300        34.426       4.081
40                             27.433        103.663      11.262
50                             193.019       462.776      97.942

[Figure: TCP effective throughput vs EPD threshold (cells), plotted for 10%, 20%, 30%, 40% and 50% offered loads.]
[...] forward link. ACK compression is known to break down the TCP self-clocking algorithm because compressed ACKs clock out data packets at a rate equal to their arrival rate. The ACK compression behavior is usually observed when ACKs encounter non-empty queues (in the backward direction). When ACKs leave the bottleneck buffer their spacing is smaller than the original spacing at queue entry [1,21]. To depict this problem we plot the interarrival time of consecutive ACKs for 30% offered load with FCFS and RR in Figs. 6 and 7, respectively. In Fig. 6, we can identify two groups of ACK interarrival times at y = 0.34 ms and y = 1.19 ms. The interval between two data packets sent back to back is given by Packet size/µ1 = 1.55 ms, where Packet size is the size of the data packet and µ1 is the forward link capacity. This is also equal to the minimum spacing between two TCP data packets. In our simulation we assume that the time to process a TCP packet and send its corresponding ACK is constant. Thus the spacing between ACKs should be equal to the spacing between their corresponding data packets. However, in Figs. 6 and 7, for a large number of received ACKs, the spacing between them is less than the minimum spacing between the data packets. This confirms that ACK compression is the main reason for the low effective throughput observed. Further investigations and studies of the MAC layer protocol help us determine that the ACK compression identified in this case is the result of the grant allocation mechanism that is used to send ACK packets at the MAC layer. The interval between two ACKs is equal to ACK size/Rate_upstream, where ACK size = N_DataSlot × Size_DataSlot. Values of inter-ACK spacing equal to 2 × 64 × 8/3000 = 0.34 ms (Figs. 6 and [...]
[Fig. 6: ACK interarrival time (ms) with FCFS HE grant scheduling, upstream offered load 30%.]
Fig. 7. ACK interarrival time with RR HE grant scheduling, upstream offered load 30%.
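The two spacing values quoted above follow directly from the link parameters, as the short check below shows. It is our own restatement of the arithmetic, under one plausible accounting of the ATM overhead that reproduces the 1.55 ms figure (a 1 KB packet segmented into 22 cells of 53 bytes on the 6 Mbit/s forward link) and of the compressed ACK spacing (two 64-byte data slots granted back to back on the 3 Mbit/s upstream).

# Arithmetic behind the inter-ACK spacings discussed above (our restatement, not the paper's code).

UPSTREAM_BPS = 3_000_000        # upstream channel rate (Table 1)
FORWARD_BPS = 6_000_000         # bottleneck forward link between SW1 and SW2
DATA_SLOT_BYTES = 64            # one DS carries an ATM cell plus MAC overhead

# A 1 KB TCP packet becomes ceil(1024/48) = 22 ATM cells of 53 bytes on the forward link.
packet_cells = (1024 + 47) // 48
data_spacing_ms = packet_cells * 53 * 8 / FORWARD_BPS * 1000
print("minimum data-packet spacing ~ %.2f ms" % data_spacing_ms)    # ~1.55 ms

# Two consecutively granted 64-byte data slots carrying ACKs leave the upstream back to back.
ack_spacing_ms = 2 * DATA_SLOT_BYTES * 8 / UPSTREAM_BPS * 1000
print("compressed inter-ACK spacing ~ %.2f ms" % ack_spacing_ms)    # ~0.34 ms, well below 1.55 ms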
[...]gorithm introduces spacing during scheduling. This explains why better throughput is obtained in [...] spacing equal to ACK size/µ2, where µ2 is the upstream rate. For FCFS scheduling, the following condition, Packet size/µ1 > ACK size/µ2, guarantees ACK compression. This condition is necessary but not sufficient for RR scheduling since, in case of multiple requests queued at the HE, the ACK [...]
[Figure: distribution of ACK interarrival times (fraction) for FCFS and Round Robin.]
Fig. 11. TCP effective throughput vs EPD threshold, FCFS HE grant allocation and piggybacked requests.
Fig. 13. RTT with RR, EPD threshold = 160 cells, upstream offered load 30%.
Fig. 12. TCP effective throughput vs EPD threshold, RR HE grant allocation and piggybacked requests.
Fig. 14. RTT with RR piggybacking, EPD threshold = 160 cells, upstream offered load 30%.
[...] offered load comparable to the simulations performed in Section 5, we only use piggybacking for the transmission of TCP ACK packets (i.e. not for background traffic). As a direct result of piggybacking, the effective throughput increases for both FCFS and RR scheduling (especially for EPD threshold values larger than 100 cells).
To illustrate this improvement we plot the RTT measured at the TCP sender for 30% offered load with and without piggybacking in Figs. 13 and 14, respectively (with EPD threshold set to 160 cells). We first observe that the RTTs obtained are considerably reduced. We also compute the standard delay deviation with and without piggybacking to be equal to 1.38 and 4.17 ms, respectively. We note that RR scheduling gives better performance results than FCFS scheduling. We attribute this to the spacing introduced by the RR scheduling, which increases the probability of sending requests for additional ACK transmissions in every DS granted.
In Figs. 15 and 16 the ratio of successful piggybacked requests over the total number of requests is shown for FCFS and RR scheduling, respectively. As the EPD threshold in the forward direction is increased, the ratio of successful requests for both scheduling algorithms increases as well. Thus a larger bottleneck buffer enables larger TCP congestion window values and leads to a continuous TCP data flow. However, with FCFS scheduling we observe that the ratio of successful [...]
Fig. 15. Ratio of successful piggybacked requests over the total number of requests, FCFS HE grant allocation.
Fig. 17. Distribution of ACK interarrival time for FCFS and RR, upstream offered load 30%.
[Fig. 16: ratio of successful piggybacked requests over the total number of requests, RR HE grant allocation.]
[...] the contention percentage in the case of multiple TCP sources and gives a shorter upstream delay. However, the throughput for 50% offered load remains low due to the relatively large delay incurred. In Fig. 17, we plot the interarrival times of TCP ACKs. Compared to Fig. 8, we note a reduction in [...]
[...] results, compare the two solutions and discuss feasibility issues.
Dynamic rate tracking algorithm. The algorithm [...] measured µ1: µp = µ1/Packet size.
Different methods may be used to estimate the bottleneck rate. Some have already been proposed in rate based flow control algorithms [12,17]. We use a very simple method well adapted to TCP behavior in the SlowStart and congestion-avoidance working regions. It measures the minimal interarrival time s between two back-to-back packets to decide the ACK sending rate (spacing). Since each ACK corresponds to two ATM cells (for Classical IP over ATM), the requested spacing is then set to s/2. This is adapted to TCP, since it usually sends packet pairs in both the SlowStart and congestion-avoidance regions. We can measure and always use the shortest interarrival time between two consecutive packets. The pseudo code for the proposed algorithm is as follows (Fig. 18):
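The pseudo code of Fig. 18 is not reproduced here. The fragment below is a minimal sketch of the dynamic rate tracking rule as it is described in the text, with our own names: the MAC layer tracks the smallest interarrival time s seen between back-to-back data packets and requests grants for ACK transmissions spaced at s/2.

# Sketch of the dynamic rate tracking / ACK spacing rule described above,
# reconstructed from the prose description (Fig. 18 itself is not available); names are ours.

class AckSpacer:
    def __init__(self):
        self.min_gap = None        # smallest data-packet interarrival time s observed so far
        self.last_arrival = None

    def on_data_packet(self, arrival_time):
        """Update the bottleneck-rate estimate from consecutive packet arrivals."""
        if self.last_arrival is not None:
            gap = arrival_time - self.last_arrival
            if self.min_gap is None or gap < self.min_gap:
                self.min_gap = gap
        self.last_arrival = arrival_time

    def requested_ack_spacing(self):
        """Grant interval to request for ACK transmissions (s/2), or None if not yet known."""
        return None if self.min_gap is None else self.min_gap / 2.0

if __name__ == "__main__":
    # Packets arriving at the 1.55 ms bottleneck spacing lead to requests for ~0.78 ms ACK grants.
    spacer = AckSpacer()
    for t in (0.0, 0.00155, 0.00310, 0.00465):
        spacer.on_data_packet(t)
    print(spacer.requested_ack_spacing())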
Simulation results. Fig. 19 gives the effective throughput of TCP over HFC using the ACK spacing algorithm given above. We note that it improves TCP performance for different offered loads. It prevents premature and frequent packet losses due to ACK compression. As expected, no improvement can be achieved beyond a load of 50%. This is due to the large bandwidth delay product incurred at this load. In order to validate the choice of the minimal interarrival time between two consecutive ACKs, we did simulations using the average and the maximum interarrival values. Results are given in Figs. 20 and 21, respectively. Although TCP performance is improved with a mean value based estimation, it degrades for the maximum value based estimation. Minimum based estimation proves to be optimal in our case.
Fig. 19. TCP effective throughput vs EPD threshold, RR HE grant allocation, minimum spacing.
In Fig. 22, we plot the distributions of ACK interarrival times for the minimum, mean and [...]
Fig. 21. TCP effective throughput vs EPD threshold, RR HE grant allocation, mean spacing.
Fig. 22. Distribution of ACK interarrival time for the ACK spacing algorithm.
Fig. 23. Comparison of piggybacking and ACK spacing with different bottleneck rates: 1.5, 2, 2.5 Mbits/s for 20% offered load, EPD threshold = 100 cells.
[...] algorithm prevents TCP timeouts for the three
bottleneck rates. For piggybacking, while the performance is similar to ACK spacing when the bottleneck rate is under 2.5 Mbits/s, a lot of timeouts are observed when the bottleneck rate is set to 2 and 1.5 Mbits/s. For 1.5 Mbits/s the throughput degradation is very visible (large periods of inactivity). The spacing introduced by piggybacking does not always match the bottleneck rate, and ACKs can be subject to compression.
Feasibility and implementation. As far as feasibility and implementation are concerned, we believe that three conditions are necessary to ensure proper operation of the proposed algorithm. First, the HE grant allocation algorithm must be able to schedule grants for ACK transmissions at a constant rate. This can be easily achieved since HE scheduling algorithms are expected to provide constant bit rate (CBR) service. The second concern is the value of the clock granularity. As pointed out in [18], clock granularity can bias the estimation of the interarrival of consecutive packets or ACKs. This problem is likely to happen in a high speed network environment. However, in this case, the MAC layer has a very fine timer granularity: depending on the upstream bit rate, minislot intervals could be on the order of a few µs. Finally, the algorithm presented can be implemented using an additional option in the HFC MAC layer that permits specifying an interval for transmitting data. This is achieved at the cost of a slight increase in the MAC grant request PDU size, namely an 8 bit field containing the transmission rate.
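As a rough picture of the last point, the fragment below packs a grant request carrying the additional 8-bit rate field mentioned above. The field layout, the names and the units are invented for illustration; the actual 802.14 request PDU format is not reproduced here.

# Hypothetical encoding of a grant request extended with an 8-bit transmission-rate field
# (layout and units are illustrative only, not the 802.14 format).
import struct

def pack_grant_request(station_id, n_slots, rate_code):
    """Station id and slot count as in an ordinary request, plus one rate byte."""
    if not 0 <= rate_code <= 255:
        raise ValueError("rate_code must fit in 8 bits")
    # '>HBB': 16-bit station id, 8-bit requested slot count, 8-bit transmission-rate code.
    return struct.pack(">HBB", station_id, n_slots, rate_code)

def unpack_grant_request(pdu):
    return struct.unpack(">HBB", pdu)

if __name__ == "__main__":
    pdu = pack_grant_request(42, 32, 12)        # station 42 asks for 32 slots at rate code 12
    print(len(pdu), unpack_grant_request(pdu))  # 4 bytes -> (42, 32, 12)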
7. Concluding remarks

This paper gives performance results and improvements of the TCP protocol over HFC networks. First, we have shown by means of simulations that poor TCP performance is observed due to ACK compression. We compared the performance of FCFS and RR HE scheduling algorithms and found that RR scheduling results in better performance due to the ``natural'' spacing introduced on successive TCP ACKs. This reduces the ACK compression behavior. Second, we investigated the effect of piggybacking on TCP performance and found that it reduces the delay and the delay variation of TCP ACKs. As a result TCP efficiency is improved in all cases. However, piggybacking can still result in bad performance when the data path capacity is small.
Finally, an algorithm for ACK spacing is proposed and is shown to give optimal performance for different offered loads and network buffer sizes. This algorithm is very simple and aims at conserving the ACK interarrival time, to respect the TCP self-clocking mechanism. This algorithm can be generalized for other applications, since even if packets are subject to delay (due to grant requests), it is desirable to conserve, at the MAC layer, the spacing introduced by the applications. However, since the MAC layer can handle packets from different flows, conserving the delay between packets on a per-flow basis is a complex problem, and is in our opinion a good research area in the case of HFC networks. In practice, in the case of residential access, it is expected that only one network application is used in each terminal.
In this paper we studied TCP efficiency; however, we believe that more studies need to be conducted in order to investigate other issues such as TCP fairness and delay over HFC. Although we used TCP-RENO in our simulations to reflect the large majority of TCP/IP stacks, it may be interesting to examine the performance of different TCP algorithms, such as SACK-TCP and New-Reno. These algorithms are known to give better performance in the case of frequent packet losses. Our simulations were made using a single TCP connection since our goal was to focus on the effect of the MAC protocol on TCP performance rather than to study the interaction between different TCP connections. Our future work will also include the interaction between multiple TCP connections.

Appendix A. Analysis of TCP behavior

In this appendix we examine the TCP congestion window and SW1 buffer (corresponding to the bottleneck link) dynamics for 30% of the upstream offered load using different MAC layer algorithms: (1) ``pure RR'', (2) RR piggybacking and (3)
RR ACK spacing. In Fig. 24 we identify two pathological problems with the TCP congestion window. We note that TCP suffers from frequent [...] retransmit of a lost packet, multiple ACK packets are received in a burst due to grant compression at the MAC layer, leading to additional packet losses. This results in setting the congestion window and the slow start threshold to a fourth of its initial value (when the first packet loss is detected). As shown in [4], this slows down the TCP connection since TCP operates in the congestion avoidance phase with a very small congestion window. In Fig. 25, we note that the frequency of the buffer occupancy cycles is high. The queue size drops frequently to 0, due to the size of the window that is frequently reduced to half or a fourth of its size and never reaches its optimal value.
In Fig. 26, we plot the dynamics of the congestion window using piggybacking. Timeouts are less frequent than those observed in Fig. 24. This explains why piggybacking improves TCP performance by reducing the delay and the delay variation in the backward direction. However, the congestion window is still at half of its optimal size. The study of the buffer behavior, in Fig. 27, reveals less burstiness than with ``pure RR''. However, the queue size still drops down to 0 frequently due to the premature reduction of the congestion window.
Fig. 28 gives the dynamics of the congestion window when using the ACK spacing algorithm specified in Section 6. The congestion window reaches a higher value than with ``pure RR'' and ``RR + piggybacking'' (Figs. 24 and 26). Furthermore, since there is no ACK compression, timeouts are avoided. The corresponding buffer dynamics in Fig. 29 exhibit a rather stable periodic behavior where the buffer size is rarely equal to 0.

Fig. 24. TCP congestion window for 30% upstream offered load.
Fig. 25. SW1 buffer dynamics for 30% upstream offered load.
Fig. 26. TCP congestion window for 30% upstream offered load, piggybacking.
Fig. 27. SW1 buffer dynamics for 30% upstream offered load, piggybacking.
Fig. 28. TCP congestion window for 30% upstream offered load, ACK spacing.
Fig. 29. SW1 buffer dynamics for 30% upstream offered load, ACK spacing.

References

[1] J.C. Bolot, Characterizing end-to-end packet delay and loss in the Internet, Journal of High-Speed Networks 2 (3) (1993) 305-323.
[2] R. Cohen, S. Ramanathan, TCP for high performance in hybrid fiber coaxial broad-band access networks, IEEE/ACM Transactions on Networking 6 (1) (1998) 15-29.
[3] O. Elloumi, H. Afifi, M. Hamdi, Improving congestion avoidance algorithms in asymmetric networks, in: Proceedings of IEEE ICC '97, Montreal, June 1997.
[4] K. Fall, S. Floyd, Simulation-based comparisons of Tahoe, Reno and SACK TCP, Computer Communications Review 26 (3) (1996) 5-21.
[5] J.C. Hoe, Improving the startup behavior of a congestion control scheme for TCP, in: Proceedings of ACM SIGCOMM'96, August 1996, Stanford, CA, pp. 270-280.
[6] IEEE 802.14 Working Group, Media Access Control, IEEE Draft Std. 802.14, Draft 2 R2, October 1997.
[7] N. Golmie, A. Koenig, D. Su, The NIST ATM Network Simulator, Operation and Programming, Version 1.0, NISTIR 5703, March 1995.
[8] N. Golmie, S. Masson, G. Pieris, D. Su, Performance evaluation of MAC protocol components for HFC networks, in: Proceedings of the International Society for Optical Engineering, Photonics East'96 Symposium, 18-22 November 1996, Boston, MA (also appeared in Computer Communications, June 1997).
[9] N. Golmie, S. Masson, G. Pieris, D. Su, Performance evaluation of contention resolution algorithms: Ternary-tree vs p-persistence, IEEE 802.14 Standard Group, IEEE 802.14/96-241, October 1996.
[10] N. Golmie, M. Corner, J. Liebeherr, D. Su, Improving the effectiveness of ATM traffic control over hybrid fiber-coax networks, in: Proceedings of IEEE Globecom 1997, Phoenix, AZ.
[11] V. Jacobson, Congestion avoidance and control, in: Proceedings of ACM SIGCOMM'88, August 1988, pp. 314-329.
[12] R. Jain, Rate based flow control.
[13] L. Kalampoukas, A. Varma, K.K. Ramakrishnan, Two-way TCP traffic over ATM: effects and analysis, in: Proceedings of Infocom'97, April 1997, Kobe, Japan.
[14] T.V. Lakshman, U. Madhow, B. Suter, Window-based error recovery and flow control with a slow acknowledgment channel: a study of TCP/IP performance, in: Proceedings of Infocom'97, April 1997, Kobe, Japan.
[15] J. Postel, Transmission control protocol, Request for Comment 793, DDN Network Information Center, SRI International, September 1981.
[16] A. Romanow, S. Floyd, Dynamics of TCP traffic over ATM networks, IEEE Journal on Selected Areas in Communications 13 (4) (1995) 633-641.
[17] S. Keshav, A control theoretic approach to flow control, in: Proceedings of ACM SIGCOMM'91, September 1991, Zurich, Switzerland, pp. 3-15.
[18] V. Paxson, Measurements and analysis of end-to-end Internet dynamics, Ph.D. Thesis, LBNL-40319, UCB//CSD-97-945, University of California, Berkeley.
[19] W.R. Stevens, TCP/IP Illustrated, vol. 1, Addison-Wesley, Reading, MA, 1994.
[20] W.R. Stevens, TCP slow start, congestion avoidance, fast retransmit, and fast recovery algorithms, Request for Comments 2001, January 1997.
[21] L. Zhang, S. Shenker, D.D. Clark, Observations on the dynamics of a congestion control algorithm: the effects of 2-way traffic, in: Proceedings of ACM SIGCOMM'91, September 1991, Zurich, Switzerland, pp. 133-147.

Hossam Afifi graduated from Cairo University. He obtained the DEA and Ph.D. degrees from the University of Nice, France, in the INRIA laboratories. After a post-doc at Washington University, St. Louis, he joined ENST Bretagne, Rennes, France, as an assistant professor. He obtained his tenure in September 1999 and is now involved in Internet telephony protocols and performance evaluation for fixed and mobile infrastructures.
Omar Elloumi obtained the engineering degree in Computer Science from the Ecole Nationale des Sciences de l'Informatique, Tunisia, and the Ph.D. from the University of Rennes I, France, in 1995 and 1999, respectively. During his Ph.D. studies he was with the Networks and Multimedia Department of the Ecole Nationale Superieure des Telecommunications de Bretagne, working on architectural and performance aspects of TCP/IP and ATM integration. Since April 1999, he has been a member of the Traffic and Routing Technologies project, Network Architecture Department, Alcatel Corporate Research Center. His research interests are in the area of traffic analysis, QoS in the Internet and flow admission control.

Nada Golmie received the M.S. degree in Electrical and Computer Engineering from Syracuse University, New York, in 1993 and the B.S. in Computer Engineering from the University of Toledo, OH, in 1992. Since 1993, she has been a research engineer in the High Speed Networks Technologies group at the National Institute of Standards and Technology (NIST). Her research interests include modeling and performance evaluation of network protocols, media access control, and quality of service for ATM, IP, HFC, WDM and wireless network technologies.

David Su is the manager of the High Speed Network Technologies group of the Information Technology Laboratory at the National Institute of Standards and Technology. His main research interests are in modeling, testing, and performance measurement of communications protocols. He has been involved in modeling and evaluation of protocols as they are being developed by standardization organizations. These include protocols for asynchronous transfer mode (ATM) networks, hybrid fiber-coaxial networks, optical networks, and pico-cell wireless networks. He has also participated in the development of standard conformance test suites for testing of X.25, Integrated Services Digital Network (ISDN), Fiber Distributed Data Interface (FDDI), and ATM network protocols. Before joining NIST in 1988, Dr. Su was with GE Information Service Company as the manager of internetworking software for support of GE's worldwide data network. From 1973 to 1976, he was an Assistant Professor in Computer Science at Florida International University in Miami, FL. Dr. Su received his Ph.D. degree in Computer Science from the Ohio State University in 1974.