Re: Advise about cache store in SMP mode, single disk


 



Hi Amos, (or do you prefer Jeffries?)
On 9/05/2014 6:29 a.m., fernando wrote:
I have a server configured to run in SMP mode with two cache stores: a
shared rock store and a dedicated aufs store for each worker. But I have
only one physical disk (actually a hardware raid).
RAID? ... pretty much "don't".
I know... but right now I can't change that. :-( So I'm trying to do the best with what I have, and later try to build a new server more closely following best practices.
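For context, the shared half of a setup like the one described above is a rock cache_dir, which is SMP-aware and used by all kid processes at once. A minimal sketch in squid.conf form (the path, size, and worker count are illustrative, not the actual values from this server):

    workers 2
    # rock stores are SMP-aware: one store, shared by all worker processes
    cache_dir rock /var/spool/squid/rock 4096 max-size=32768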


This does not change with SMP. In fact it gets worse, as each worker will
be adding I/O contention at a much higher overall rate than a single Squid
process could.
The problem is: we have lots and lots of ACLs, and CPU use is already at 100% under regular user load. Response time is good, but we can't add more ACLs (or other requested features, such as delay pools) because of the CPU bottleneck.

Yes, we do have some weird internet access rules. ;-)

[rock + aufs or rock + diskd?]
diskd is roughly equivalent to AUFS with one I/O thread. It will remove
the contention by reducing cache_dir throughput capacity, and thus
limiting Squid traffic speed.
  So no, diskd is not likely to be useful if your aim is high performance.
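For reference, a diskd store is declared like an aufs one but with queue-depth tuning options; a sketch with illustrative path and sizes:

    # Q1/Q2 bound how many requests may queue to the diskd helper
    # before Squid stops opening new files / starts blocking
    cache_dir diskd /var/spool/squid/diskd 8192 16 256 Q1=64 Q2=72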
Right now I need to use more CPU. I know disk access will be worse, but so far everything tells me it'll be acceptable.

I'm running CentOS 6.5 x86_64.

None of the UFS storage types are using SMP-aware code at present so
each SMP worker requires a unique cache_dir location for ufs, diskd, and
aufs caches.
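The usual way to give each worker its own UFS/diskd/aufs directory is the ${process_number} macro, which expands to a different value in each kid process (the path and sizes here are placeholders):

    # expands to .../aufs-1 for worker 1, .../aufs-2 for worker 2, etc.
    cache_dir aufs /var/spool/squid/aufs-${process_number} 8192 16 256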

So I understand diskd with SMP will spawn one diskd process for each worker, and so it won't be better than aufs, given that I do have a single (RAID) disk. :-(

I was hoping that I'd get a single diskd process for all workers, and in my particular scenario this could be better than many aufs threads sharing the same physical disk.


Yes, in your situation CARP would be somewhat equivalent to AUFS in
disk behaviour. Just using whole Squid processes instead of lightweight
threads.
I can't anymore. I need more cpu (more cores).


I would take a good look over that hardware RAID controller and see if
there is either a way to expose the underlying HDD as mounts for Squid
use (effectively disabling the RAID), or pin a particular Squid worker
process to a physical spindle (random guess at that even being possible).
The single RAID set has the OS and the cache_dir. I can't change that without reinstalling from scratch, and I can't reinstall right now. :-( I wasn't the guy who installed this the first time. I'm just expected to do miracles with minimum change to the hw/os setup. ;-)
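If the controller could expose the individual disks, pinning each worker to its own spindle can be expressed with squid.conf conditionals; a sketch assuming two disks mounted at hypothetical paths /mnt/disk1 and /mnt/disk2:

    # each worker only sees the cache_dir inside its own if-block
    if ${process_number} = 1
    cache_dir aufs /mnt/disk1/squid 8192 16 256
    endif
    if ${process_number} = 2
    cache_dir aufs /mnt/disk2/squid 8192 16 256
    endif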

You don't mention a Squid version, so another thing to look at might be
the upcoming large file support for rock caches in 3.HEAD packages. That
should (in theory at least) let you replace the AUFS dirs with rock dirs.
I'm following that. I'm using the latest 3.4.3 RPMs for CentOS by Eliezer.

Policy here won't allow me to try development releases, so I'd have to wait for a stable 3.5.x release.


[]s, Fernando Lozano




