Phillip Hallam-Baker wrote:
> Today most Web browsers attempt to optimize download of images etc. by
> opening multiple TCP/IP streams at the same time. This is actually done for
> two reasons, first to reduce load times and second to allow the browser to
> optimize page layout by getting image sizes etc up front.
>
> This approach first appeared round about 1994. I am not sure whether anyone
> actually did a study to see if multiple TCP/IP streams are faster than one
> but the approach has certainly stuck.
>
> But looking at the problem from the perspective of the network it is really
> hard to see why setting up five TCP/IP streams between the same endpoints
> should provide any more bandwidth than one. If the narrow waist is
> observed, then the only parts of the Internet that are taking note of the
> TCP part of the packet are the end points. So having five streams should
> not provide any more bandwidth than one unless the bandwidth bottleneck was
> at one or other endpoint.

Your picture is completely missing a _very_ common usage scenario.

Bandwidth is often irrelevant. It is often the "time-to-use" that counts. And with Web browsers, it is extremely common for users to navigate to a different page before the current page has finished loading.

Just one single HTTP/1.0 or HTTP/1.1 pipe requires total serialization of all objects that comprise a web page. And it starts with the main page. Think of a "huge" main page (200 KByte+) that contains a long list of things (like the catalog of an online shop or auction site), with graphical navigation buttons at the top of the page and for every list item, plus preview pictures.

When the huge main page and the embedded elements are loaded in parallel, i.e. the browser establishes parallel connections to download the small embedded images and icons after just 10% of the main page has been loaded, the browser can start rendering the top of the page and making it available to the user *LONG* before the page has completed loading. The user may actually navigate away from the page long before loading completes. And this is where serialized loading through just a single pipe would result in significantly higher waiting times for the interactive user of a browser.

-Martin
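To make the "time-to-use" point concrete, here is a minimal sketch that contrasts the two loading strategies by measuring when the *first* embedded object becomes available versus the total transfer time. It is a simplification under assumed conditions: the example.com URLs are placeholders, and a real browser discovers embedded objects while the HTML is still streaming in rather than submitting all requests up front as this sketch does.

```python
# Sketch: serialized fetching over one pipe vs. parallel connections.
# "First object" approximates the time at which the browser could start
# rendering something useful at the top of the page.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from urllib.request import urlopen

MAIN_PAGE = "http://example.com/"  # placeholder for the huge catalog page
EMBEDDED = [f"http://example.com/img{i}.png" for i in range(8)]  # placeholder icons/previews

def fetch(url):
    with urlopen(url, timeout=10) as resp:
        return url, len(resp.read())

def serialized():
    """One pipe: the whole main page first, then every embedded object in turn."""
    start = time.monotonic()
    fetch(MAIN_PAGE)
    first_object = None
    for url in EMBEDDED:
        fetch(url)
        if first_object is None:
            # First icon is usable only after the entire main page has arrived.
            first_object = time.monotonic() - start
    return first_object, time.monotonic() - start

def parallel(workers=4):
    """Parallel connections: embedded objects arrive while the main page is still loading."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        main = pool.submit(fetch, MAIN_PAGE)
        embedded = [pool.submit(fetch, u) for u in EMBEDDED]
        first_object = None
        for f in as_completed(embedded):
            f.result()
            if first_object is None:
                # First icon can be rendered well before the main page completes.
                first_object = time.monotonic() - start
        main.result()
    return first_object, time.monotonic() - start

if __name__ == "__main__":
    print("serialized: first object %.2fs, total %.2fs" % serialized())
    print("parallel:   first object %.2fs, total %.2fs" % parallel())
```

On a link where bandwidth is not the bottleneck, the total times of the two runs come out similar, but the "first object" time of the parallel run is much lower, which is exactly the interactive-latency effect described above.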