On 17/11/2011 1:42 p.m., RW wrote:
On Thu, 17 Nov 2011 12:44:32 +1300
Amos Jeffries wrote:
On Wed, 16 Nov 2011 22:31:21 +0000, RW wrote:
Years ago I read something about how memory cache performance degraded progressively with increasing object size, and that increasing maximum_object_size_in_memory substantially could actually degrade performance. Has this been fixed in both 3.x and 2.x?
Individual object size problems are not a limit on total RAM size used by Squid or its memory cache. You can allocate many GB of RAM cache and then store only a few million <1KB objects in it.
Most of the large object (up to 2GB) problems were solved in 3.0. The remainder (>2GB objects) were solved in 3.1.15.
That's not what I'm referring to. IIRC there were some tests that showed that UFS (with OS-level disk caching) outperformed the memory cache above a certain object size. I think the cut-off was well under 100k.
I think you were referring to the old problem where 2.x iterated over the full length of each object on every write; that does not affect 3.x.
When reading from disk, the disk supplies the bytes sequentially, so there is no need to iterate over the object's length. As a result, disk worked better in 2.x for objects larger than the point where the CPU iteration work (which slowed all other requests down) and the disk I/O lag balanced out.
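To make the cost difference concrete, here is a minimal C sketch. It is not Squid's actual code or data structures; the struct and function names (node, object, append_walking, append_tail) are made up for illustration. It only shows the shape of the cost: an append that walks the whole buffer chain from the head on every write does O(N^2) total work to assemble an object of N chunks, while a tail-tracking append, or a sequential disk write, stays O(N).

#include <stdio.h>
#include <stdlib.h>

struct node {
    struct node *next;
};

struct object {
    struct node *head;
    struct node *tail;      /* used only by the tail-tracking variant */
    size_t walk_steps;      /* pointer hops spent finding the end of the chain */
};

/* Append in the style described for 2.x: iterate the full chain on every write. */
static void append_walking(struct object *o)
{
    struct node *n = calloc(1, sizeof *n);
    if (!n)
        return;
    if (!o->head) {
        o->head = n;
        return;
    }
    struct node *p = o->head;
    while (p->next) {               /* full-length iteration per write */
        p = p->next;
        o->walk_steps++;
    }
    p->next = n;
}

/* Append with a tail pointer: constant work per write, shown only for contrast. */
static void append_tail(struct object *o)
{
    struct node *n = calloc(1, sizeof *n);
    if (!n)
        return;
    if (o->tail)
        o->tail->next = n;
    else
        o->head = n;
    o->tail = n;
}

int main(void)
{
    struct object slow = {0}, fast = {0};

    for (int i = 0; i < 10000; i++) {   /* ~10000 appends = one large object */
        append_walking(&slow);
        append_tail(&fast);
    }

    /* walk_steps grows roughly as N^2/2: about 50 million hops here,
     * versus none for the tail-tracking version. (Nodes are not freed;
     * the demo exits immediately.) */
    printf("pointer hops with per-write iteration: %zu\n", slow.walk_steps);
    printf("pointer hops with tail pointer:        %zu\n", fast.walk_steps);
    return 0;
}

The per-write walk is pure CPU work that competes with every other request being served, whereas a sequential disk read hands the work to I/O, which is why UFS could look faster than the memory cache above some object size in 2.x.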
Amos