Re: Naive question on multiple TCP/IP channels

On Thu, Feb 5, 2015 at 11:30 AM, Jim Gettys <jg@xxxxxxxxxxxxxxx> wrote:


On Thu, Feb 5, 2015 at 9:30 AM, Phillip Hallam-Baker <phill@xxxxxxxxxxxxxxx> wrote:


On Wed, Feb 4, 2015 at 6:54 PM, Brian E Carpenter <brian.e.carpenter@xxxxxxxxx> wrote:
On 05/02/2015 08:49, Eggert, Lars wrote:
> Hi,
>
> CC'ing tsvwg, which would be a better venue for this discussion.
>
> On 2015-2-4, at 20:22, Phillip Hallam-Baker <phill@xxxxxxxxxxxxxxx> wrote:
>>
>> Today most Web browsers attempt to optimize the download of images etc. by opening multiple TCP/IP streams at the same time. This is done for two reasons: first to reduce load times, and second to allow the browser to optimize page layout by getting image sizes etc. up front.
>>
>> This approach first appeared around 1994. I am not sure whether anyone actually did a study to see if multiple TCP/IP streams are faster than one, but the approach has certainly stuck.
>
> There have been many studies; for example, http://www.aqualab.cs.northwestern.edu/publications/106-modeling-and-taming-parallel-tcp-on-the-wide-area-network

GridFTP only exists because experience showed that several parallel FTP
streams achieve better throughput than a single stream, especially on paths
with a high bandwidth-delay product. I'm guessing that since buffer bloat
creates an artificially high BDP, that could apply pretty much anywhere.
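
To put rough numbers on the BDP point (the figures below are invented purely
to show the scale; a sketch, not a measurement):

    # Back-of-the-envelope bandwidth-delay product (invented figures).
    bandwidth_bps = 1_000_000_000          # assume a 1 Gbit/s path
    rtt_s = 0.100                          # assume a 100 ms round trip

    bdp_bytes = bandwidth_bps / 8 * rtt_s  # bytes in flight to fill the pipe
    print("BDP: %.1f MB" % (bdp_bytes / 1e6))   # 12.5 MB

    # A single stream needs a window covering the whole BDP to keep the
    # link busy; n parallel streams only need BDP/n each, which is the
    # effect GridFTP-style striping exploits.
    for n in (1, 4, 16):
        print("%2d streams -> %.2f MB window each" % (n, bdp_bytes / n / 1e6))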

SCTP is not the only one-acronym answer: try MPTCP. The interesting thing there
is that because there is explicit coupling between the streams, the throughput
increases sub-linearly with the number of streams.
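
For the curious, the coupling works roughly like the sketch below (much
simplified, loosely after RFC 6356's linked increases; the computation of
alpha from per-subflow RTTs is omitted):

    # Much-simplified coupled increase for MPTCP subflows (windows in
    # segments). A real implementation derives alpha from per-subflow
    # RTTs and window sizes; here it is just a parameter.
    def on_ack(cwnds, i, alpha):
        total = sum(cwnds)
        # alpha/total couples the subflows so the aggregate grows no
        # faster than one ordinary TCP flow; 1/cwnds[i] is what an
        # uncoupled NewReno flow would add for this ACK.
        cwnds[i] += min(alpha / total, 1.0 / cwnds[i])

    # Three subflows sharing a bottleneck therefore grow far more slowly
    # per flow than three independent TCP connections would.
    windows = [10.0, 10.0, 10.0]
    on_ack(windows, 0, alpha=1.0)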

Buffer bloat is a great example of the consequences of the co-operative nature
of the Internet for one of the audiences I am writing for (folk trying to
make or understand Title II regulations).

The Internet is actually a demonstration that the commons are not such a tragedy 
after all.


If we look at the problem of buffer bloat, there are several possible solutions,
and the problem is picking one:

* Persuade manufacturers to reduce buffer sizes so the old congestion
algorithms work.

* Change the congestion algorithm.

* Break with the pure end-to-end principle and have the middleboxen with the
huge buffers report back when they start to fill (a rough sketch follows this list).
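
For the third option, the reporting could look roughly like ECN-style marking:
when its queue starts to fill, the box flags packets instead of silently
buffering them. A toy sketch (the thresholds are invented):

    # Toy middlebox queue that signals congestion rather than absorbing
    # it (ECN-flavoured; threshold and limit are invented numbers).
    QUEUE_LIMIT    = 1000                  # packets
    MARK_THRESHOLD = 0.5 * QUEUE_LIMIT     # start marking at 50% occupancy

    def enqueue(queue, packet):
        if len(queue) >= QUEUE_LIMIT:
            return None                    # tail drop: the old, late signal
        if len(queue) >= MARK_THRESHOLD:
            packet["ce"] = True            # mark instead of delaying silently
        queue.append(packet)
        return packet

    # The sender treats a marked ACK as a gentler loss signal and backs
    # off before the buffer bloats.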


The first is difficult unless you get control of the benchmarking suites that are
going to be used to measure performance, which would probably mean getting
the likes of nVidia, Steam, or some of the gaming companies on board.

The missing metric is "latency under load". Right now, we only measure bps.
 
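
One crude way to put a number on "latency under load" (the host and URL below
are placeholders, and this is only a sketch): time small TCP connects while a
bulk transfer saturates the link, and compare with the idle RTT.

    # Crude "latency under load" probe. Placeholders: example.com and the
    # big.bin URL stand in for whatever path you want to test.
    import socket, threading, time, urllib.request

    def bulk_download():
        urllib.request.urlopen("http://example.com/big.bin").read()

    def tcp_rtt(host, port=80):
        t0 = time.monotonic()
        socket.create_connection((host, port), timeout=5).close()
        return (time.monotonic() - t0) * 1000.0    # milliseconds

    print("idle RTT   %.1f ms" % tcp_rtt("example.com"))
    threading.Thread(target=bulk_download, daemon=True).start()
    for _ in range(10):                            # sample while loaded
        print("loaded RTT %.1f ms" % tcp_rtt("example.com"))
        time.sleep(1)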

The second is certainly possible. There is no reason that we all have to use the
same congestion algorithm; in fact a mixture might be beneficial. Instead of looking
for packets being dropped, the sender could look at the latency and at the difference
between the rate at which packets are being sent and the rate at which
acknowledgements are coming back.
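
A toy version of that sender-side rule might look like the sketch below (the
constants are invented; this is nowhere near a complete algorithm):

    # Toy delay/rate-based sender: ease off when queuing delay climbs or
    # when ACKs fall behind the send rate. All constants are invented.
    def adjust_rate(send_rate, ack_rate, rtt, base_rtt):
        queuing_delay = rtt - base_rtt     # base_rtt = minimum RTT seen so far
        if queuing_delay > 0.020 or ack_rate < 0.9 * send_rate:
            return send_rate * 0.9         # queue building: back off early
        return send_rate * 1.05            # headroom left: probe a bit faster

    # Unlike loss-based TCP, this reacts before the bloated buffer
    # overflows, at the cost of needing good RTT estimates.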

Doesn't matter. TCP congestion control has effectively been defeated by web browsers and servers. Most of the data is transmitted in the initial windows (IWs) of n connections, where n is large. The incentives are such that applications can and will abuse the network rather than be "cooperative". I expect there will continue to be such applications even if we "fix" the web to be better behaved; the incentives are not aligned for cooperation.

So you can argue all you want about your favorite congestion control algorithm, and it won't solve this fundamental issue.
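
To put rough numbers on the IW point (an IW of 10 segments is the common
modern default per RFC 6928; the connection counts are just examples):

    # Data delivered in the initial windows of n parallel connections,
    # before congestion control has seen a single ACK. Connection counts
    # are illustrative.
    IW_SEGMENTS = 10      # RFC 6928 default initial window
    MSS_BYTES   = 1460
    for n in (1, 6, 30):
        burst = n * IW_SEGMENTS * MSS_BYTES
        print("%2d connections -> %3.0f KB in the first RTT" % (n, burst / 1024))
    # ~14 KB, ~86 KB, ~428 KB respectively: enough for most page objects,
    # so the congestion algorithm barely gets to run.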
 


OK, yes: my Netflix download is going to kill your VoIP call.

Can't fix the queuing algorithm for just one interaction...


The reason I started pushing on this is that, in the wake of Title II, I am expecting a lot of people to ask me to explain how the Internet worketh, and this is precisely the sort of example that shows (1) that things are more complex than they appear, (2) that some sort of coordination is needed, and (3) that the coordination needs to take less than five years to come to a decision.

