
Re: "Quadruple" memory usage with squid


On Wed, Nov 25, 2009 at 11:18 AM, Marcus Kool
<marcus.kool@xxxxxxxxxxxxxxx> wrote:
> The FreeBSD list may have an explanation for why there are
> superpage demotions before we expect them (when there are no forks
> and no big demands for memory).

I think they are simply free()s, since Squid was holding only ~5 MB
of unused memory at any time.
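One way to check whether those demotions line up with free()s is to
sample FreeBSD's superpage counters before and after the event. A
minimal sketch (the vm.pmap.pde.* sysctls are FreeBSD-specific; the
fallback is just so the snippet degrades gracefully elsewhere):

```shell
# FreeBSD exposes superpage promotion/demotion counters under
# vm.pmap.pde.*; sample demotions around a Squid restart or rotate
# to see whether they track the free()s. On non-FreeBSD hosts the
# sysctl is absent, so report "n/a" instead of failing.
demotions=$(sysctl -n vm.pmap.pde.demotions 2>/dev/null || echo "n/a")
echo "superpage demotions: ${demotions}"
```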

> option 5.  (multi-CPU systems only).
> use 2 instances of Squid:
> 1. with null cache, small cache (e.g. 100 MB cache_mem),
>   16 URL rewriters and a Squid parent
> 2. a Squid parent with null cache and HUGE cache_mem
>
> Both Squid processes will rotate/restart fast.

I think our "option 5" would be the 20GB memfs cache_dir solution, as
that also hacks around the "double allocation" issue.
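For reference, the memfs variant would look roughly like this (the
mount point, md sizing, and cache_dir numbers here are placeholders,
not our actual config):

```
# FreeBSD: back a 20 GB cache_dir with a swap-backed memory filesystem
# (mdmfs creates a swap-backed md(4) device by default).
mdmfs -s 20g md /squid/memcache

# squid.conf: point the disk cache at the memory fs and keep cache_mem
# small, so objects are not held in RAM twice (the "double allocation").
cache_dir ufs /squid/memcache 20000 16 256
cache_mem 100 MB
```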

But one way or the other there is some kind of bug here: Squid
claims it is using X memory while it is really using 2X.  Even if it
is only a reporting error and it really is using that much memory, I
would like to know the origin for certain, so I can move on knowing I
tried my best. :-)
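To pin down the X-vs-2X gap, the number Squid itself believes it holds
(the "Total accounted" line of mgr:info) can be compared with the
kernel's RSS for the process from ps. A sketch of the parsing step,
run here against illustrative sample text rather than a live Squid (in
practice you would pipe in `squidclient mgr:info` and compare against
`ps -o rss= -p <squid pid>`):

```shell
# Sample mgr:info output; the 512 MB figure is made up for illustration.
sample="Memory usage for squid via mallinfo():
        Total accounted:       512 MB"

# Extract the accounted figure and convert MB -> KB for comparison
# with the RSS column ps reports (which is in KB).
accounted_kb=$(printf '%s\n' "$sample" | awk '/Total accounted/ {print $3*1024}')
echo "squid accounts for ${accounted_kb} KB"
```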

Thanks!

