On 26/10/2013 4:28 a.m., Omid Kosari wrote:
Alex Rousskov wrote
On 10/24/2013 07:43 AM, Omid Kosari wrote:
"digest_generation off" temporarily solved the problem, but it needs a
restart. I had tested with a reload before.
Sounds like you have detected the source of the blocking Squid problem
and confirmed it! The fact that digest generation makes your Squid
unresponsive is essentially a Squid bug, but you might be able to help
Squid by adjusting its configuration. If you want to try it, I suggest
re-enabling digest generation and setting
digest_rebuild_chunk_percentage to 1.
If that works around your problem, great. Unfortunately, I suspect Squid
may still be unresponsive during digest generation time because of how
regeneration steps are scheduled and/or because of an old bug in the
low-level scheduling code of "heavy" events. However, using the minimum
digest_rebuild_chunk_percentage value (1) is still worth trying.
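For reference, the workaround Alex suggests would look roughly like this in squid.conf (both directives are real Squid directives; this is only a sketch of the suggestion, not a tested configuration):

```
# Re-enable digest generation, but rebuild the digest in the
# smallest allowed steps (1% per step) so each rebuild step
# blocks Squid for as little time as possible.
digest_generation on
digest_rebuild_chunk_percentage 1
```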
Tried. Unfortunately the problem persists.
Alex Rousskov wrote
But I could not use the digest benefits anymore. Is there a big penalty
if both caches are on the same gigabit switch?
The digest optimization can be significant if the two Squids share a lot
of popular content _and_ it takes a long time for each Squid to get that
content from the internet during a miss.
HTH,
Alex.
I mean, if I disable digests and just use HTCP.
That depends strongly on your traffic level and cache size.
HTCP uses bandwidth on every request, but no extra memory and no CPU
spent on digest generation.
Digests use only periodic digest-exchange bandwidth, but extra memory
and CPU to generate the digests.
So if your cache is relatively small compared to your traffic, digests
work better. With low traffic and a huge cache, HTCP works better. It
would be an interesting study to graph those and figure out at what
bandwidth and cache sizes the swap-over happens; I don't have anything
beyond theory about it though, sorry.
Amos
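As an illustration of the HTCP-only alternative Amos describes, the peer setup might look like the sketch below. The hostname is a placeholder, 3128 is Squid's conventional HTTP port, and 4827 is the standard HTCP port; adjust all three to your deployment:

```
# Disable local digest generation and query the sibling with
# HTCP instead of exchanging cache digests.
digest_generation off

# cache_peer <host> <type> <http-port> <icp-port> [options]
# The "htcp" option tells Squid to speak HTCP on the peer port.
cache_peer sibling.example.com sibling 3128 4827 htcp
```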