
Re: Rock Store max object size 3.5.14

On 02/23/2016 12:11 PM, Heiler Bemerguy wrote:
> 
> Thanks Alex.
> 
> We have a simple cache_dir config like this, with no "workers" defined:
> cache_dir rock /cache2 80000 min-size=0 max-size=32767
> cache_dir aufs /cache 320000 96 256 min-size=32768

FWIW, I do not know whether aufs and rock play well together. YMMV.


> And we are suffering from a 100% CPU use by a single squid thread. 

Sustained 100% CPU load, especially lasting more than a second,
especially while Squid mostly stays in "user" space, is most likely a
Squid bug (including, but not limited to, severe performance bugs like
linear searches through very long lists).

A year or two ago, there was at least one such bug when handling large
responses. I do not know whether that bug has been fixed, but I suspect
it has not, and that other bugs like it exist.

If you see these signs, report them and help triage/fix the problem.
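One quick way to check whether a spinning Squid process is burning user time (the symptom described above) is to read its cumulative CPU counters from Linux /proc. This is an assumed helper, not part of Squid; field positions are per proc(5):

```shell
# Show cumulative user vs. system CPU ticks for a process; defaults to
# the current shell's PID when none is given. A Squid process pegged at
# 100% CPU mostly in user time points at a Squid bug, as described above.
PID=${1:-$$}
# In /proc/<pid>/stat, field 14 is utime and field 15 is stime (clock ticks).
awk '{printf "utime=%s stime=%s\n", $14, $15}' "/proc/$PID/stat"
```

Sample the output twice a few seconds apart: if utime grows much faster than stime while the process sits at 100% CPU, the time is being spent in Squid's own code.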


> We
> have lots of ram, cores and disk space.. but also too many users:
> Number of clients accessing cache:      1634
> Number of HTTP requests received:       3276691
> Average HTTP requests per minute since start:   12807.1
> Select loop called: 60353401 times, 22.017 ms avg

> Getting rid of this big aufs and spreading to many rock stores will
> improve things here? I've already shrunk the acls and patterns/regexes etc

If you have beefy hardware, want to optimize performance, and are ready
to spend non-trivial amounts of time/labor/money doing that, then
consider the following rules of thumb:

1. Use the largest cache_mem your system can handle safely.
   Please note that Squid will not tell you when you over-allocate
   but may crash.

2. Reserve one or two CPU cores for the OS, depending on network usage
   levels. Use OS CPU affinity configuration to restrict network
   interrupts to these OS core(s).

3. One Rock cache_dir per physical disk spindle with no other
   cache_dirs. No RAID. Diskers may be able to use virtual CPU cores.
   Tuning Rock is tricky. See Performance Tuning recommendations at
   http://wiki.squid-cache.org/Features/RockStore

4. One SMP worker per remaining non-virtual CPU core.

5. Use CPU affinity for each Squid kid process (diskers and workers).
   Prohibit the kernel from moving kids from one CPU core to another.

6. Watch individual CPU core utilization (not just the total!). Adjust
   the number of workers, the number of diskers, and CPU affinity maps
   to achieve balance while leaving a healthy overload safety margin.
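Purely as an illustration of rules 1 and 3-5, a squid.conf along these lines might result. All numbers are invented for a hypothetical 8-core host with two dedicated disk spindles; this is a sketch, not a recommendation:

```
# Hypothetical 8-core box: cores 1-2 left to the OS (rule 2),
# 4 workers, 2 rock diskers.

workers 4                      # rule 4: one worker per remaining core

# rule 1: large memory cache; note Squid may crash, without warning,
# if this is over-allocated for the host
cache_mem 8 GB

# rule 3: one rock cache_dir per physical spindle; no aufs, no RAID
cache_dir rock /cache1 80000 min-size=0 max-size=32767
cache_dir rock /cache2 80000 min-size=0 max-size=32767

# rule 5: pin each kid to a core (kids 1-4 are workers, 5-6 diskers);
# cores 1-2 are kept free for the OS
cpu_affinity_map process_numbers=1,2,3,4,5,6 cores=3,4,5,6,7,8
```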

Disclaimer: The above general rules may not apply to your environment. YMMV.
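For rule 2, network interrupts are usually steered by writing a CPU bitmask to /proc/irq/&lt;N&gt;/smp_affinity. A small sketch that computes such a mask from core numbers (the IRQ_NUMBER placeholder is deliberately left for you to fill in from /proc/interrupts; writing it requires root):

```shell
# Build the hex CPU mask for the OS-reserved cores (0-based, as the
# kernel counts them) and print the command you would run as root.
mask=0
for core in 0 1; do            # reserve cores 0 and 1 for the OS
  mask=$(( mask | (1 << core) ))
done
printf 'echo %x > /proc/irq/IRQ_NUMBER/smp_affinity\n' "$mask"
```

Here cores 0 and 1 yield mask 0x3, i.e. `echo 3 > /proc/irq/IRQ_NUMBER/smp_affinity`.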

Squid is unlikely to work well in a demanding environment without
investment of labor and/or money (i.e., others' labor). In many such
environments, Squid code changes are needed. Squid is a complex product
with many problems. If you want top-notch performance, there is no
simple blueprint. Getting Squid to work well in a challenging environment
is not a "one weekend" project. Unfortunately.


HTH,

Alex.

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



