
28 cache_dirs - how many async io threads?


Hi,

I'm tuning a large Squid reverse-proxy cluster, and I'm wondering what experienced opinions are on the right number of async IO threads for 28 cache_dirs.

Some background on the testing cluster so far (spare hardware, similar to that in my other production systems):

host machines:
3x dual Xeon 3.0GHz EM64T, 12GB RAM
2x quad Opteron 248, 32GB RAM
all with Broadcom BCM5704 dual-Gbit NICs

storage:
each system = LSI MegaRAID 320-2 (2ch Ultra320), 2x Dell PowerVault 220S, 14x 36GB 15k SCSI per PowerVault

Five systems in total. I've been trying different replacement policies and refresh_patterns, and tuning the kernel's network parameters, memory sizes, etc. They're all running with async writes turned on.
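For reference, the knobs I've been iterating over look roughly like this (values are just illustrative of the kind of thing I've been testing, not a recommendation; the sysctl names are the stock Linux ones):

  # squid.conf excerpts
  cache_replacement_policy heap LFUDA
  memory_replacement_policy heap GDSF
  refresh_pattern . 30 20% 4320

  # /etc/sysctl.conf excerpts (kernel network params)
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 65536 16777216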

So far I'm seeing throughput of about 500 req/s at 100% IO load on the RAIDed systems, with a ~55% hit rate (the library is very large). Requests average 8kB, +/- about 4kB. I'd like to see how much more I can get out of Squid... I'm getting a NetCache unit in for an eval, so I can do some comparison.
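(Back of the envelope: 500 req/s x ~8kB avg is only about 4MB/s of payload, nowhere near wire speed, so presumably the limit is random small-object seeks across the spindles rather than raw disk or network bandwidth.)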

SO - the Question: for storage I've been running two RAID10 (7x2) logical drives per system, limiting the used space to 20GB each, but for this experiment I'm trying 28 individual drives with a 2GB cache_dir on each. For the RAID setup, 32 threads seemed to work the smoothest (compared to 26 and 40). The 28-drive system is currently running with 512 threads :o -- is that too many? I tried 64 previously and Squid kept reporting IO overloading, pausing far too often to sync. I could just try everything, but it takes a while to get comprehensive data (the memory cache needs to fill up, etc).
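In case the layout matters for the answer, the 28-spindle config is along these lines (mount points illustrative; in my Squid build the aufs thread count is fixed at configure time, e.g. --enable-async-io=512, rather than set in squid.conf):

  cache_dir aufs /cache/d01 2048 16 256
  cache_dir aufs /cache/d02 2048 16 256
  ...
  cache_dir aufs /cache/d28 2048 16 256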

Thanks in advance,


Aaron Chu


