RE: Westwood performance?


 



All,

This is all very useful information.  I wonder, then, if somebody could
quantify the environment in which TCPW/Westwood+ should excel.
Based on the papers I referenced, I felt my BDP was large enough, and
the file sizes I've been using have been 1M, 3M and 32M ... all of which I
think should give the optimal environment for a performance gain.  So in
an "ideal" Westwood world, what would be:

The file size in a transfer?
The BDP?

And what would be the lowest value for each where one could still expect
to see some improvement?
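
For reference, the back-of-the-envelope arithmetic I've been doing is
just BDP = bandwidth x RTT.  A tiny sketch with hypothetical link
numbers (10 Mbit/s, 100 ms; not my actual test setup):

#include <stdio.h>

int main(void)
{
    double bandwidth_bps = 10e6;   /* 10 Mbit/s bottleneck (assumed) */
    double rtt_s         = 0.100;  /* 100 ms round-trip time (assumed) */

    /* BDP = bandwidth x RTT, converted from bits to bytes. */
    double bdp_bytes = bandwidth_bps * rtt_s / 8.0;

    /* A transfer should span many BDPs so that the algorithm runs
     * well past slow start (and any estimator transient). */
    printf("BDP = %.0f bytes (~%.0f KB)\n", bdp_bytes, bdp_bytes / 1024.0);
    printf("1 MB transfer = ~%.1f BDPs\n", (1024.0 * 1024.0) / bdp_bytes);
    return 0;
}

With those numbers the BDP is about 122 KB, so even a 1 MB file is only
around 8 BDPs; that's the kind of ratio I'm trying to pin down.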


Tx,
Adam

> -----Original Message-----
> From: Daniele Lacamera [mailto:root@xxxxxxxxxxxxxx] 
> Sent: Tuesday, August 30, 2005 11:35 PM
> To: Angelo Dell'Aera
> Cc: Lewis Adam-CAL022; linux-net@xxxxxxxxxxxxxxx
> Subject: Re: Westwood performance?
> 
> 
> On Wednesday 31 August 2005 02:12, Angelo Dell'Aera wrote:
> 
> > TCP Westwood+ bandwidth estimation is done through the use of a 
> > low-pass filter which requires a few RTTs to obtain the right value 
> > for the estimate. This is a transient and we simply can't avoid it. 
> > If the data transfer ends before the end of the transient, you're 
> > simply not testing Westwood+.
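
Just so I'm sure I follow the filter being described, here is a rough
userspace sketch of an exponential low-pass bandwidth filter of that
style.  It's an illustration only, not the kernel code; the 7/8
weighting and the once-per-RTT sampling are my assumptions about the
general Westwood+ scheme.

#include <stdio.h>

static double bw_est;   /* filtered bandwidth estimate, bytes/s */

/* Feed one sample: bytes ACKed over roughly one RTT. */
static void bw_sample(double bytes_acked, double interval_s)
{
    double sample = bytes_acked / interval_s;

    /* Exponential low-pass filter: each sample only moves the
     * estimate by 1/8, which is why several RTTs pass before the
     * estimate converges (the transient mentioned above). */
    bw_est = (7.0 * bw_est + sample) / 8.0;
}

int main(void)
{
    /* Constant true rate of 1,000,000 bytes/s: watch the estimate
     * climb toward it over successive RTTs. */
    for (int rtt = 1; rtt <= 20; rtt++) {
        bw_sample(100000.0, 0.1);   /* 100 KB ACKed per 100 ms */
        printf("RTT %2d: estimate %.0f bytes/s\n", rtt, bw_est);
    }
    return 0;
}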
> 
> We noticed this dependency of Westwood+ on connection duration about 
> one year ago. It seems that Westwood+ needs more than a few RTTs to 
> fully reach its benefits, and this could probably be a key subject 
> for future research.
> 
> > Moreover, as a general way of proceeding, I think that running a few 
> > different TCP congestion control algorithms for just a few RTTs and 
> > then comparing the results is not a correct way to proceed.
> 
> Absolutely. Also, I think that an accurate analysis of the behavior 
> of the TCP variables during the connection is required to understand 
> how the algorithms are performing.
> 
> > 
> > In the first phase of a TCP connection it's not possible to know 
> > how large the pipe is, and the Reno slow start was designed in that 
> > way just for this reason (obtaining an estimate of the capacity of 
> > the pipe as soon as possible). This is a blind phase and IMHO no 
> > algorithm could be designed to be better than others during this 
> > phase.
> 
> This is not true in every scenario. TCP Hybla, for example, gives the 
> congestion window an "acceleration" during its slow start phase that 
> is proportional to the round-trip time: since cwnd growth is clocked 
> by ACKs, NewReno does not take into account connections having very 
> different RTTs. However, this makes sense only for "long & fat" 
> pipes, like satellite connections, where the RTT difference with 
> terrestrial TCP traffic sharing the same link is the main cause of 
> penalties.
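
My rough reading of that acceleration, for concreteness: Hybla scales
the per-ACK growth by rho = RTT/RTT0.  The sketch below is hypothetical
illustration code based on my understanding of the Hybla proposal
(RTT0 = 25 ms, 2^rho - 1 per ACK in slow start, rho^2/cwnd in
congestion avoidance), not the actual module.

#include <math.h>
#include <stdio.h>

#define RTT0 0.025   /* 25 ms reference RTT (assumed, per the proposal) */

/* Per-ACK cwnd increment, in segments, as I understand the scheme. */
static double hybla_increment(double cwnd, double rtt, int slow_start)
{
    double rho = rtt / RTT0;
    if (rho < 1.0)
        rho = 1.0;                    /* never slower than standard Reno */

    if (slow_start)
        return pow(2.0, rho) - 1.0;   /* standard slow start adds 1 */
    return rho * rho / cwnd;          /* standard cong. avoidance adds 1/cwnd */
}

int main(void)
{
    /* 100 ms RTT -> rho = 4: the window grows much faster per ACK,
     * compensating for the slower ACK clock; for satellite RTTs the
     * compensation is far larger still. */
    printf("slow start: +%.0f segments per ACK\n",
           hybla_increment(10.0, 0.100, 1));
    printf("cong. avoidance: +%.2f segments per ACK\n",
           hybla_increment(10.0, 0.100, 0));
    return 0;
}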
> 
> 
> Regards,
> 
> -- 
> Daniele Lacamera
> root{at}danielinux.net
> 
