Time passed, and another thread states that FreeBSD developers
confirm a known issue with superpages and suggest vfork().
vfork() halts the parent only until the child calls exec().
This suspension may be short enough for vfork() to serve as a
workaround for the problem. Of course I am interested to see
performance numbers.
But forgive me for asking why option 5 is not considered.
All new information indicates that both Squid processes
will fork fast. Variations of option 5 may even give better
results, e.g.:
- memfs,
- change size of mem_node to 4096 bytes (is it safe?)
- use an alternative malloc implementation such as TCMalloc, which
  only aligns chunks larger than 32K on a page boundary
The size of the mem_node objects (4112 bytes) is definitely inefficient
and wastes too much memory to leave unchanged. A memory allocator
that does not page-align these objects is the simplest rescue until
the Squid developers come up with a solution.
Marcus
Linda Messerschmidt wrote:
On Wed, Nov 25, 2009 at 11:18 AM, Marcus Kool
<marcus.kool@xxxxxxxxxxxxxxx> wrote:
The FreeBSD list may have an explanation why there are
superpage demotions before we expect them (when there are no forks
and no big demands for memory).
I think they are simply free()s, since Squid was holding only 5 MB
of unused memory at any time.
option 5. (multi-CPU systems only).
use 2 instances of Squid:
1. with a null cache and a small cache_mem (e.g. 100 MB),
   16 URL rewriters and a Squid parent
2. a Squid parent with null cache and HUGE cache_mem
Both Squid processes will rotate/restart fast.
I think our "option 5" would be the 20GB memfs cache_dir solution, as
that also hacks around the "double allocation" issue.
But one way or another there is some kind of bug here... Squid
claims it is using X memory when it is really using 2X. Even if it is
only a display error and it really is using the memory, I would like
to know for certain the origin so I can move on knowing I tried my
best. :-)
Thanks!