Re: What is the max number of Squirm redirect_children?




On Tue, 15 Nov 2011 16:59:46 +0100, Leonardo wrote:
> Dear all,


As for the title question: you are the only one who can answer that. It depends entirely on how much RAM your system has and how much of it is in use by everything else running. The limit is however many helpers can run alongside Squid, the OS, and everything else without causing the system to swap.


> I have a Squid transparent proxy which uses 40 Squirm URL rewriter
> processes.  Everything worked fine until today, when Squid crashed
> after an error "179 pending requests queued":
>
> 2011/11/15 13:55:41| WARNING: All redirector processes are busy.
> 2011/11/15 13:55:41| WARNING: 40 pending requests queued
> 2011/11/15 13:55:41| Consider increasing the number of redirector
> processes in your config file.
> 2011/11/15 14:05:41| WARNING: All redirector processes are busy.
> 2011/11/15 14:05:41| WARNING: 179 pending requests queued
> <snip>
> FATAL: Too many queued redirector requests
> Squid Cache (Version 3.1.7): Terminated abnormally.

Please try a more recent 3.1 release. We have made many small efficiency improvements this year.


> CPU Usage: 91414.241 seconds = 55144.038 user + 36270.203 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 8
> Memory usage for squid via mallinfo():
> 	total space in arena:  1007116 KB
> 	Ordinary blocks:       1003347 KB 100368 blks
> 	Small blocks:               0 KB      1 blks
> 	Holding blocks:         27164 KB      8 blks
> 	Free Small blocks:          0 KB
> 	Free Ordinary blocks:    3768 KB
> 	Total in use:          1030511 KB 102%
> 	Total free:              3768 KB 0%


> Restarting gave this error:
>
> 2011/11/15 14:14:02| WARNING: Disk space over limit: 11037556 KB >
> 10240000 KB
> 2011/11/15 14:14:02| IpIntercept.cc(137) NetfilterInterception:  NF
> getsockopt(SO_ORIGINAL_DST) failed on FD 10: (2) No such file or
> directory

Err. Connections are apparently being dropped out of the NAT system's records. If these errors continue, that will need to be investigated; since there seem to be only 60 of them, it could just be a hiccup in the network traffic.


> (... snipped 60 entries similar to the one above ...)
> 2011/11/15 14:14:05| WARNING: All redirector processes are busy.
> 2011/11/15 14:14:05| WARNING: 40 pending requests queued
> 2011/11/15 14:14:05| Consider increasing the number of redirector
> processes in your config file.


> I tried increasing redirect_children to 300 in squid.conf, but since
> each child spawns as a separate process, this brought the system to
> its knees, even with no active HTTP clients.
> What is the max/reasonable number of Squirm redirect_children?

Check your cache_mem and Squid's normal RAM usage. Each forked helper will consume that much virtual memory per fork. A big jump like 40->300 is probably not a good idea, but a smaller jump, 40->100, may be okay.
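As a rough sizing sketch (the numbers here are illustrative assumptions, not measurements from this thread): if each Squirm process occupies roughly 10 MB of resident memory, 300 children need about 3 GB for the helpers alone, while 100 need about 1 GB. A cautious squid.conf change might look like:

```
# squid.conf sketch -- the value is an illustrative assumption.
# Raise in moderate steps and watch swap usage before going higher.
redirect_children 100
```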


I'd also look at what Squirm is doing and try to reduce a few things ...
 * the number of helper lookups, using ACLs with the url_rewrite_access directive;
 * the work Squid does handling responses, by sending an empty response back for "no-change", and by using 3xx redirect responses instead of re-write responses.
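A minimal sketch of a rewrite helper following those two suggestions (an illustrative stand-in, not Squirm itself; the domains and the `respond` function name are made up). It speaks the classic one-request-per-line url_rewrite helper protocol: an empty reply line means "no change" (the cheapest case for Squid), and a `301:`-prefixed URL asks Squid to send a redirect rather than silently re-writing.

```python
#!/usr/bin/env python
# Hypothetical minimal URL-rewrite helper sketch. Squid writes one
# request per line on stdin ("URL client_ip/fqdn ident method ...");
# the helper answers one line per request.
import sys

def respond(line):
    """Return the reply line for one helper request line."""
    url = line.split()[0]
    # Redirect one hypothetical legacy host via a 301; everything
    # else gets an empty reply, i.e. "no change".
    if url.startswith("http://old.example.com/"):
        return "301:" + url.replace("http://old.example.com/",
                                    "http://new.example.com/", 1)
    return ""

def main():
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        sys.stdout.write(respond(line) + "\n")
        sys.stdout.flush()  # replies must not sit in a buffer

if __name__ == "__main__" and not sys.stdin.isatty():
    main()  # serve the stdin/stdout loop when run under Squid
```

Because most requests fall through to the empty "no change" reply, each helper does almost no work per lookup, which is exactly what keeps the queue from backing up.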

You may also be able to remove some uses of Squirm entirely by using deny_info redirection.
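For example, a static one-to-one redirect that currently goes through Squirm could be expressed directly in squid.conf instead (the ACL name and domains below are made-up placeholders):

```
# Hypothetical squid.conf fragment: redirect requests for an old host
# without consulting a rewriter helper at all.
acl old_site dstdomain old.example.com
deny_info http://new.example.com/ old_site
http_access deny old_site
```

Squid answers matching requests with an HTTP redirect to the deny_info URL, so no helper process is involved for that traffic.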



> These two lines also puzzle me.  I could rebuild the cache to get rid
> of them, but I'd like to know why this occurred:
>
> 2011/11/15 14:14:02| WARNING: Disk space over limit: 11037556 KB >
> 10240000 KB
> 	Total in use:          1030511 KB 102%


I suspect it has something to do with >2GB files in the cache. Older Squid releases had cache-size accounting issues with individual files >2GB, or a total size >2TB, where 32-bit wrap-around led to far too much disk space being used.

Amos

