On 23/03/2012 6:31 p.m., Justin Lawler wrote:
> Thanks Amos,
> Should this affect the performance of squid for clients using HTTP/1.1? For instance:
Yes, performance should improve.
> * will the number of connections open to squid at any one time increase, so we'll need to increase squid's file descriptors?
Yes, there will be more concurrent connections open. But each will be
used for more than one request, so this is not a bad thing. Whether you
need to increase the available FD limit depends on too many factors to
say; if you do, it is just another configuration file option anyway.
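For reference, a minimal squid.conf sketch raising the limit (the 4096
value is purely illustrative, and the OS-level limit, e.g. ulimit -n,
must also permit it; the directive name is max_filedescriptors in
Squid 3.x):

  # Allow squid to use up to 4096 file descriptors.
  # The operating system limit must be at least this high.
  max_filedescriptors 4096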
> * will the increase in connections affect the performance/capacity of squid?
Yes, req/sec capacity goes up.
> * will it affect the network performance?
Yes. All TCP setup, teardown, and TIME_WAIT overheads are eliminated for
pipelined requests. This is where that req/sec rate increase comes from.
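To illustrate, an HTTP/1.1 client can send its next request down the
same connection with no new TCP handshake (a sketch with a made-up host
and paths; persistence is the default in 1.1, so no Connection:
keep-alive header is needed):

  GET /index.html HTTP/1.1
  Host: www.example.com

  GET /logo.png HTTP/1.1
  Host: www.example.com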
> I presume the user experience will benefit, with reduced loading time for pages with many images/etc. But in general, once a browser has downloaded all the images/etc., will it then close the connection immediately, or wait 2 minutes to close?
It closes when one end or the other closes it, or the timeout happens.
It is not limited to one "page", though, since HTTP has no concept of
pages. A user can browse an entire website and use only one connection,
regardless of how many links they click on or what its scripts are doing.
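Either end can also signal the close explicitly in the message headers.
For example (a hypothetical response), a server that intends to close
after replying sends:

  HTTP/1.1 200 OK
  Content-Length: 1234
  Connection: close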
The 2 minute timeout is configurable; see
http://www.squid-cache.org/Doc/config/persistent_request_timeout/
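For example, to shorten the idle wait to one minute in squid.conf (the
value here is only an illustration):

  # Time to wait for the next request on an idle persistent connection
  persistent_request_timeout 1 minute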
Amos