
RE: Is my Squid heavily loaded?


 



On Mon, 14 Mar 2011 18:12:27 +0530, Saurabh Agarwal wrote:
Thanks Amos. I will try those tests with different sizes.

Some more observations on my machine: if, instead of transferring those 200
HTTP files in parallel for the first time, I fetch them sequentially one by
one with wget, and only afterwards use my other script to get the same 200
files from Squid in parallel, then memory usage is all right; Squid's memory
usage stays under 100MB. I think the first-time transfer involves even more
disk activity, since the files must be saved to disk and then all read back
from disk in parallel. I also think Squid must be using a lot of socket
buffer space for each client and server socket.
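The two fetch patterns described above can be sketched as follows. This is a hypothetical reproduction script, not the poster's actual one: the origin host, file names, and proxy address (localhost:3128, Squid's default port) are all placeholder assumptions. The `echo` makes it a dry run that only prints the wget commands; remove it to actually fetch.

```shell
# Build 200 placeholder URLs (file001.bin .. file200.bin) and fan them out
# to 20 parallel wget workers, each going through the Squid proxy.
# Sequential warm-up would be the same pipeline with -P1.
seq -f "http://origin.example.com/files/file%03g.bin" 1 200 \
  | xargs -P20 -I{} echo wget -q \
      -e use_proxy=yes -e http_proxy=http://localhost:3128 {}
```

Running the sequential pass first (with `-P1`) populates the cache, so the later parallel pass is served from disk rather than fetched and stored at the same time.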

Regarding cache_dir usage, what do you mean by "one cache_dir entry
per spindle"? I have only one disk and one device-mapper partition
with an ext3 file system.

The config file you showed had 3 cache_dir entries on that one disk. This is bad: each cache_dir has N AIO threads (16, 32, or 64 by default, IIRC), all trying to read and write random portions of the disk. Squid and the AIO scheduler do some optimization towards serialising access to the underlying disk, but that does not work well when there are multiple independent cache_dir state handlers.
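On a single-spindle machine like the one described above, that advice amounts to collapsing the three entries into one. A minimal squid.conf sketch, with placeholder path and sizes (20000 MB cache, the customary 16 first-level and 256 second-level directories for the ufs/aufs store):

```
# One cache_dir per physical disk; all AIO threads then share
# one state handler for this spindle.
cache_dir aufs /var/spool/squid 20000 16 256
```

The exact store type, path, and size depend on the build and the disk, so treat these values as an illustration rather than a recommendation.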

Amos



