Hey Amos,
There is one exception which does not quite match that description.
Consider a case where you have a 1 Gbps LAN connection but only a
20 Mbps Internet uplink.
In this particular case, when the client is using a distributed FS or
issues distributed requests such as:
http://server1.example.com/file1.tar.gz
http://server150.example.com/file1.tar.gz
and the client sends each of these servers a Range request (answered
with 206 Partial Content), those names may in fact resolve locally:
server1.example.com = 192.168.0.100
server150.example.com = 192.168.0.101
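As a rough sketch of what such a client does (the hostnames, file name and
total size below are only the illustrative values from this example, not a
real service), each mirror gets a Range request for a different slice of
the same file:

```python
# Sketch: a download manager splitting one file across several mirrors
# using HTTP Range requests, which the servers answer with
# "206 Partial Content".  Values here are hypothetical.

def plan_ranges(total_size, parts):
    """Split [0, total_size) into `parts` contiguous byte ranges,
    returned as inclusive (start, end) pairs for a Range header."""
    chunk = total_size // parts
    ranges = []
    for i in range(parts):
        start = i * chunk
        # The last range absorbs any remainder bytes.
        end = total_size - 1 if i == parts - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

# One request per mirror; each Range header asks for a different slice.
mirrors = ["server1.example.com", "server150.example.com"]
for host, (start, end) in zip(mirrors, plan_ranges(1000, len(mirrors))):
    print(f"GET http://{host}/file1.tar.gz  Range: bytes={start}-{end}")
```

Whether those hostnames reach an origin server or a local cache peer is
invisible to the client; it only sees the 206 responses.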
While the client assumes it is accessing some origin server, with a few
"tricks" it is actually using local cache peers.
It can be assumed that the cache service provider will use some CDN to
let clients download the content at very high speed.
This does, however, place heavy load on the connection at the TCP
level locally, while it can be assumed that it will not impose any
TCP-level load on the upstream service providers.
Designing such a network requires a very large amount of resources, but
tens of years of work can achieve such a thing.
Eliezer
On 01/01/14 07:27, Amos Jeffries wrote:
NOTE: download managers which open parallel connections are *degrading*
the TCP congestion controls and reducing available network resources
across the Internet. Reducing their parallel requests to a single fetch
is actually a good thing.