Dave,
Yes, setting a small maximum_object_size_in_memory is absolutely the right
approach. It lets the kernel use the available main memory for I/O
buffering, which serves quite well as an in-memory cache managed by the
OS. We have used this technique for years and get great performance from
it. We would now like to experiment with tuning the I/O buffer sizes to
minimize read/write system calls. As a point of clarification, we are
running a Sun x64 Solaris 10 box, not Linux.
Right now a single squid is driving the 3 Gbit NICs at about 80% of peak
when the object is in memory.
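To give a concrete picture of the buffer tuning I have in mind (the 1 MB
buffer size below is only a placeholder, nothing we have measured): a
larger user-space buffer means each file-to-socket copy costs
proportionally fewer read()/write() calls, roughly like this:

/* Sketch only: copy a file to an already-connected socket using one
 * large user-space buffer, so each pass costs one read() and (usually)
 * one write().  BUFSZ is a placeholder, not a tuned value. */
#include <stdlib.h>
#include <unistd.h>

#define BUFSZ (1024 * 1024)   /* 1 MB: example size only */

/* fd_in is the disk file, fd_out is the client socket.
 * Returns 0 on success, -1 on error. */
int copy_large_object(int fd_in, int fd_out)
{
    char *buf = malloc(BUFSZ);
    if (buf == NULL)
        return -1;

    for (;;) {
        ssize_t n = read(fd_in, buf, BUFSZ);  /* one syscall per BUFSZ bytes */
        if (n == 0)
            break;                            /* end of file */
        if (n < 0) {
            free(buf);
            return -1;
        }
        ssize_t off = 0;
        while (off < n) {                     /* write() may be partial */
            ssize_t w = write(fd_out, buf + off, n - off);
            if (w < 0) {
                free(buf);
                return -1;
            }
            off += w;
        }
    }
    free(buf);
    return 0;
}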
-mikep
Dave Dykstra wrote:
In my performance optimizations of squid I didn't see any benefit to
increasing Linux kernel network buffers. Those are mostly useful for
high-latency (long-distance) connections, and I was concentrating on
high-speed LAN accesses. I did see a huge increase in performance by
making sure that squid's maximum_object_size_in_memory was small; I set
it to 128 KB. The Linux filesystem cache, which as far as I know can
take advantage of all available memory automatically, is much faster
than squid's memory cache for large and even moderately sized objects.
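The rough reasoning on the network buffers (the numbers below are only
illustrative, not measurements from either of our setups) is that a socket
buffer only needs to hold about one bandwidth-delay product of data, which
is tiny on a LAN and large on a long-haul path:

/* Back-of-the-envelope only: the socket buffer needed to keep a link
 * full is roughly bandwidth * round-trip time.  Rates and RTTs below
 * are illustrative values. */
#include <stdio.h>

static double bdp_bytes(double gbit_per_sec, double rtt_ms)
{
    return gbit_per_sec * 1e9 / 8.0 * (rtt_ms / 1000.0);
}

int main(void)
{
    /* 1 Gbit/s LAN at 0.2 ms RTT: ~25 KB -- default buffers are plenty */
    printf("LAN: %.0f bytes\n", bdp_bytes(1.0, 0.2));
    /* 1 Gbit/s WAN at 80 ms RTT: ~10 MB -- needs enlarged buffers */
    printf("WAN: %.0f bytes\n", bdp_bytes(1.0, 80.0));
    return 0;
}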
How much throughput are you able to get through the 4 Gbit of network
connections with a single squid?
- Dave Dykstra
On Mon, Jun 11, 2007 at 06:13:32PM -0700, Michael Puckett wrote:
My squid application is doing large file transfers only. We have
(relatively) few clients doing (relatively) few transfers of very large
files. The server is configured with 16 or 32 GB of memory and is serving
3 Gbit NICs to the downstream clients and a 1 Gbit NIC upstream. We wish
to optimize performance around these large file transfers and want to run
large I/O buffers to the network and the disk. Is there a tunable buffer
size parameter that I can set to increase the network and disk buffer
sizes?
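For example, something at the level of setsockopt() with SO_SNDBUF and
SO_RCVBUF is what I have in mind (the size here is a placeholder, and the
kernel may clamp it to its own limits); I'm asking whether squid exposes
an equivalent knob:

/* Sketch of per-socket buffer enlargement; not squid's actual mechanism,
 * just the system-call-level equivalent of what I'm asking about.
 * The 1 MB value is a placeholder. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

int enlarge_socket_buffers(int sock)
{
    int sz = 1024 * 1024;   /* 1 MB: example only; kernel may clamp this */

    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz)) < 0)
        return -1;
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz)) < 0)
        return -1;

    /* Read back what the kernel actually granted. */
    socklen_t len = sizeof(sz);
    if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sz, &len) == 0)
        printf("send buffer now %d bytes\n", sz);
    return 0;
}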
Regards
-mikep