
Re: Large Buffers for Squid


 



Thanks, but we have been running this way for some time now and it works VERY well for our needs. We recently upgraded to Gbit NICs and things are running well, but we would like to optimize further by cutting down on the syscall rate.
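As a rough illustration of why buffer size matters for the syscall rate (a minimal Python sketch; the file path and sizes are invented for the example), each os.read() below is one read(2) syscall, so a larger buffer moves the same data with far fewer calls:

    import os

    def count_reads(path, bufsize):
        """Count the read() calls that return data while reading the whole file."""
        calls = 0
        fd = os.open(path, os.O_RDONLY)
        try:
            while os.read(fd, bufsize):   # one read(2) syscall per iteration
                calls += 1
        finally:
            os.close(fd)
        return calls

    # Hypothetical 1 GiB object: ~262144 read() calls at 4 KiB vs ~1024 at 1 MiB.
    for bufsize in (4 * 1024, 1024 * 1024):
        print(bufsize, count_reads("/var/spool/squid/large_object.bin", bufsize))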

-mikep

Evan Klitzke wrote:

I think for larger files the need for a caching server like Squid
is diminished, because the total time it takes to push the data
through the network will vastly outweigh the time it takes to access
the file and do a disk seek. Especially for large files, you'll get
> 1 Gbit of bandwidth from your disk/storage array anyway. Still, if
you want to cache such objects and you have enough RAM, you might be
able to get away with just copying the files to a RAM disk and then
using a regular HTTP (or what have you) server that accesses the RAM
disk directly.
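A minimal sketch of that idea in Python, assuming /dev/shm is a tmpfs mount (typical on Linux) and with an invented source directory; any RAM-backed mount point and any plain HTTP server would do:

    import shutil
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Copy the hot objects onto the RAM disk (paths invented for the example).
    RAMDISK = "/dev/shm/hot-objects"
    shutil.copytree("/srv/large-files", RAMDISK, dirs_exist_ok=True)

    # Serve the RAM-disk copy directly; reads never touch the spinning disk.
    handler = partial(SimpleHTTPRequestHandler, directory=RAMDISK)
    HTTPServer(("0.0.0.0", 8080), handler).serve_forever()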

There are also a number of network parameters (especially TCP tuning),
unrelated to file caching, that you'll want to look at to get optimal
throughput.
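Per-socket send/receive buffers are one such knob. A minimal sketch, assuming a Linux host where net.core.rmem_max / wmem_max allow the requested sizes:

    import socket

    BUF = 4 * 1024 * 1024  # 4 MiB; the kernel caps this at net.core.{r,w}mem_max

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask for larger per-socket buffers so one connection can keep a
    # high-bandwidth (Gbit+) path full despite latency.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
    print("rcvbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    print("sndbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))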


