Matus UHLAR - fantomas wrote:
> On 21.01.09 10:05, Andreev Nikita wrote:
>> Squid cache resides on NFS partition on storage system. It is 1Gb link
>> and both sides are connected to the same switch. Squid has dual core
>> Xeon processor and 4GB of RAM.
>>
>> The main concern here is that squid always eats 100% of 1 core. And
>> our clients can't reach full channel throughput (4Gbs). As I can see
>> outside link is half full. Secondly it looks like FS performance is
>> very poor. I tried to clear cache by setting
>>   cache_swap_low 0
>>   cache_swap_high 0
>> and it took about 15 hours for squid to actually clear the cache!
Yeah. Deleting a million objects over NFS is going to take a while.
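If the goal is just an empty cache, it's usually far quicker to remove the store out of band and let Squid rebuild the directory skeleton itself. Roughly (the cache path below is a placeholder, not from the original post):

```
squid -k shutdown
rm -rf /var/spool/squid/*    # unlink locally, or better, on the NFS server itself
squid -z                     # recreate the swap directory structure
squid                        # start again with an empty cache
```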
>> Why does squid eat 100% of processor if the problem is in FS?
How is your cache_dir defined? aufs (in general) is a better choice
than ufs, diskd might still have some stability issues under load, and
coss is a good supplement as a small object cache. Conceivably, if Squid
is set up with a ufs cache_dir on an NFS mount, it's spending a lot of
time in a wait state, blocked while the I/O completes.
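For reference, the aufs variant would be declared in squid.conf along these lines (the path, size and L1/L2 values here are illustrative, not taken from the original post):

```
# cache_dir <type> <path> <size-in-MB> <L1-dirs> <L2-dirs>
cache_dir aufs /var/spool/squid 10000 16 256
```

The same line with "ufs" in place of "aufs" gives the single-threaded store that blocks the main Squid process on every disk operation.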
> Maybe it's not an FS problem at all?
I have to agree that using a cache_dir mounted over NFS is (at the very
least) part of the problem...
>> What can I do to find the performance bottleneck?
> Use profiling tools like iostat, vmstat and top.
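For example, on a Linux box (iostat comes with the sysstat package, which may not be installed):

```shell
# Watch how much CPU time is spent blocked on I/O -- the "wa" column.
# High iowait alongside a busy Squid points at the NFS-backed cache_dir.
vmstat 1 5

# One-shot snapshot; the "%wa" figure in the Cpu(s) line tells the same story.
top -b -n 1 | head -n 5

# With sysstat installed, per-device utilization and service times:
# iostat -x 1 5
```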
> NFS is the bottleneck. Can you connect the disk on storage system directly?
Or at least use AoE* or iSCSI... NFS is convenient, but far from ideal
for a Squid cache. AoE and iSCSI are better**, but neither will
perform as well as direct-attached storage. The less latency between
Squid and the cache data, the better.
> btw, which link is 1 gbit and which is 4 gbit? NFS is connected with 1gbit
> and you want squid to be able to saturate 4gbit connection?
Can one Squid process even saturate a 4 Gbit link? In this case, given
the number of clients (~216) and the average number of requests per
minute (432.5), I'd have to guess we are talking about a 4 Mbit internet
connection.
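As a rough sanity check (the ~60 KB mean object size below is my assumption, not a figure from the posted stats):

```shell
awk 'BEGIN {
  rps  = 432.5 / 60            # requests per second from the posted rate
  bits = rps * 60 * 1024 * 8   # assumed 60 KB mean object size, in bits/s
  printf "%.1f req/s, ~%.1f Mbit/s\n", rps, bits / 1e6
}'
```

which lands in the same ballpark as a 4 Mbit pipe.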
From the cache_info dump provided, DNS lookups are taking nearly 200
ms. That's pretty slow too. Then again, I suppose if the Squid server
only has one Ethernet interface and that is saturated with NFS, DNS
queries are going to suffer.
Chris
*ATA over Ethernet (http://en.wikipedia.org/wiki/ATA_over_Ethernet)
**Just a gut feeling with no benchmarks to back it up, but I would think
that AoE would be better than iSCSI, as there is less protocol overhead.