Re: "Quadruple" memory usage with squid

Amos Jeffries wrote:
Robert Collins wrote:
On Wed, 2009-11-25 at 10:43 -0200, Marcus Kool wrote:

There are alternative solutions to the problem:
1. redesign the URL rewriter into a multithreaded application that


1b.
Kinkie has just reminded me about the helper muxer/demuxer he has been working on for the SMP support in Squid-3. It is still experimental, but would also be worth a try if you are willing. He has now posted it to squid-dev.

It's a Perl script that wraps a non-threaded helper binary and lets Squid speak the threaded (concurrent) helper protocol to a whole group of them through a single muxer child.
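To sketch the idea (the exact field layout varies with the Squid version, and the channel IDs and URLs here are invented): with concurrency enabled, Squid prefixes each rewriter request with a channel ID, and each reply echoes that ID back, so answers may arrive out of order:

    0 http://example.com/one 192.0.2.1/- - GET
    1 http://example.com/two 192.0.2.2/- - GET
    1 http://rewritten.example.net/two
    0 http://rewritten.example.net/one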

* So Squid forks only once, to start the muxer; any further forking happens inside the muxer, hopefully with its much smaller memory footprint, if the problem still exists at all at that level.

* Usage is basically to set the relevant Squid concurrency value to the number of helpers the muxer is to run. Running M muxers at concurrency level N makes N*M helpers available in parallel.
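For example, a squid.conf sketch along these lines (the muxer and rewriter paths are placeholders, and I'm assuming the url_rewrite_concurrency directive of the 2.7/3.0 era):

    # 2 muxer children (M), each driving 10 wrapped helpers (N),
    # giving 20 rewriters in parallel
    url_rewrite_program /usr/local/bin/helper-mux.pl /usr/local/bin/my_rewriter
    url_rewrite_children 2
    url_rewrite_concurrency 10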


2. redesign the URL rewriter where the URL rewriter rereads

3. modify the URL rewriter to accept multiple request

4. use fewer URL rewriters. You might get an occasional

5. Experiment with vfork (a rough sketch follows below).

-Rob
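To make point 5 concrete: unlike fork(), vfork() does not duplicate the parent's address space, so spawning a helper from a large Squid process becomes much cheaper. A minimal C sketch, assuming a made-up helper path:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = vfork();
        if (pid == 0) {
            /* child: after vfork() only _exit() and exec*() are safe */
            execl("/usr/local/bin/my_rewriter", "my_rewriter", (char *)NULL);
            _exit(127);   /* reached only if exec failed */
        } else if (pid > 0) {
            int status;
            waitpid(pid, &status, 0);  /* wait for the helper to finish */
        } else {
            perror("vfork");
            return 1;
        }
        return 0;
    }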

Any of the above plus:

The log daemon is overdue for a patch to make it cap the log file size and auto-rotate when the cap is hit. If you are able to assist with that development, you can then use the logging daemon instead of expensive log rotates by Squid itself.
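For reference, pointing Squid at the log daemon looks roughly like this (paths are illustrative; logfile_daemon and the daemon: access_log module are the 2.7-era mechanism):

    # write access.log through the external log daemon instead of directly
    logfile_daemon /usr/local/libexec/logfile-daemon
    access_log daemon:/var/log/squid/access.log squid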


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15
