On 31.01.2011 23:53, Amos Jeffries wrote:
On Mon, 31 Jan 2011 10:57:57 +0100, "Jack Falworth" <jackf.mail@xxxxxx>
wrote:
Hi squid-users,
I have a question regarding the TCP send/receive buffer size Squid uses.
For my high-performance setup I increased both buffer sizes on my Ubuntu
10.04 system. Unfortunately I found out that Squid 2.7 (as well as 3.x)
limits the receive buffer to 64K and the send buffer to 32K in the
configure.in script.
In addition I found this bug report regarding this check:
http://bugs.squid-cache.org/show_bug.cgi?id=1075
I couldn't really see what problem higher buffer sizes would cause for
Squid if the administrator deliberately intends to increase those
values.
This check was included in CVS rev. 1.303 back in 2005, thus it's quite
old.
Is this some legacy check or is it still important with today's systems?
Can I safely remove this check, or will that have side-effects, e.g.
some internal data structures won't be able to cope with higher
values?
Note that this setting ONLY affects the TCP buffers, so 64K worth of
packets can sit outside of Squid in the networking stack.
This has side-effects on the ACK packets. While the bytes are waiting
in that buffer they may already have been ACKed by the kernel's TCP
stack without having actually been received by Squid. If anything
causes Squid to stop, crash or slow down on its read()s and accept()s,
the client can be left with incorrect information about the state of
those bytes.
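
To illustrate the distinction: on Linux the FIONREAD ioctl reports how
many bytes are sitting in a socket's receive queue, i.e. already
acknowledged by TCP but not yet read by the application. A small
sketch (Linux-specific, not Squid code):

    #include <stdio.h>
    #include <sys/ioctl.h>

    /* Report bytes the kernel has accepted (and ACKed to the peer)
     * that the application has not yet read() from this socket. */
    static void report_unread(int fd)
    {
        int queued = 0;
        if (ioctl(fd, FIONREAD, &queued) == 0)
            printf("fd %d: %d bytes ACKed but still unread\n",
                   fd, queued);
    }
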
But this could also happen with a 64K buffer. If Squid crashes or goes
down for some reason, that information is lost anyway.
Thus the only reason increasing the buffer size in a high-traffic
scenario would be a bad idea is if Squid somehow becomes overloaded
and slows down on its read()s and accept()s? And if I make sure that
Squid can handle peak traffic without being overloaded, it would be
safe to increase the buffer sizes?
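
If you do raise the sizes, it is worth verifying what the kernel
actually grants: on Linux, setsockopt(SO_RCVBUF) is capped by
net.core.rmem_max (the kernel also doubles the stored value for its
own bookkeeping), and setting it explicitly disables receive-buffer
autotuning for that socket. A minimal sketch to check the granted
size:

    #include <stdio.h>
    #include <sys/socket.h>

    /* Request a receive buffer and read back what the kernel granted.
     * On Linux the result is doubled internally and capped at
     * net.core.rmem_max, so it may differ from what was asked for. */
    static int request_rcvbuf(int fd, int wanted)
    {
        int granted = 0;
        socklen_t len = sizeof(granted);
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted));
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len);
        printf("requested %d, kernel granted %d\n", wanted, granted);
        return granted;
    }

If the granted value stays pinned well below the request, the sysctl
limits need raising before a bigger application buffer can take
effect.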