
shared-memory cache in Squid3.2 and object size?


I was looking at the logs and code, and it seems that if I want to use the multiple-worker feature on a multi-core system, the workers need to use a shared-memory cache.

So I set up 8GB for them to share -- verified by looking in /dev/shm and seeing an
8GB file created by squid each time it starts.
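For context, the relevant part of my config looks roughly like this (directive names as given in squid.conf.documented; the values are just what I'm testing with):

```
# One worker per core; workers share the memory cache.
workers 8

# 8GB memory cache, backed by a shared segment in /dev/shm.
cache_mem 8 GB
memory_cache_shared on
```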

Ahh... but in the config -- this can't be true, can it? A 32KB max object size?

(Normally my max object size is 512MB. My AVERAGE object size, as measured by the files in my cache at one point, was 37KB -- but that measurement is from 7 years ago; I'd guess 43KB might be closer now, with content inflation, but I can't really tell.)

It seems squid may be limiting shared objects to a maximum of 32KB -- which is a
bit on the tiny side, isn't it?
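To put numbers on it, here's a quick back-of-the-envelope calculation (my own figures, not anything squid reports) of what a 32KB cap means for an 8GB shared segment:

```python
# Back-of-the-envelope: how a 32KB per-object cap interacts with an
# 8GB shared-memory cache and my (estimated) average object size.

SHM_BYTES = 8 * 1024**3   # 8GB shared segment in /dev/shm
MAX_OBJ   = 32 * 1024     # 32KB per-object cap
AVG_OBJ   = 43 * 1024     # ~43KB average object (my rough guess)

slots = SHM_BYTES // MAX_OBJ
print(f"32KB objects that fit in 8GB: {slots}")   # 262144

# An "average" object already exceeds the cap, so under the 32KB
# limit it could never be shared between workers at all.
print(f"average object fits under cap: {AVG_OBJ <= MAX_OBJ}")   # False
```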

My system's largest page size is 1GB (Huge page support turned on).

So why a 32KB limit?   That really blows my idea of a fun setup --
any benefit gained from a multi-process squid will be lost along with
the loss of object sharing...

Is this still the limit? Is it something that is planned to be increased anytime soon?

To prevent lock contention, isn't it possible to just use a lot of locks (e.g., keyed by a hash of the URL)? You wouldn't necessarily get a unique lock per object, but the chance of contention would be small with a well-designed (and maybe dynamically sized) hash. How many buckets (lock points) do you need for each processor to be unlikely to collide, or at what level is contention considered a problem? I don't know whether a user tunable
or an automatic prime-number bucket allocator would be most efficient.
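The hash-of-the-URL locking idea above could be sketched like this (a toy illustration in Python, not squid code; the bucket count and hash function are placeholders):

```python
import threading
import zlib

class StripedLocks:
    """Toy lock striping: map each URL to one of N locks via a hash.

    Objects whose URLs hash to the same bucket share a lock, but with
    enough buckets relative to the number of workers, collisions on a
    hot lock should be rare.
    """

    def __init__(self, n_buckets=8191):  # prime bucket count (placeholder)
        self._locks = [threading.Lock() for _ in range(n_buckets)]

    def lock_for(self, url: str) -> threading.Lock:
        # A stable hash of the URL selects the bucket, and thus the lock.
        bucket = zlib.crc32(url.encode()) % len(self._locks)
        return self._locks[bucket]

locks = StripedLocks()
url = "http://example.com/some/object"
with locks.lock_for(url):
    pass  # ...update the shared cache entry for this URL...

# The same URL always maps to the same lock object.
assert locks.lock_for(url) is locks.lock_for(url)
```

A prime bucket count helps spread hash values evenly; whether that count should be a user tunable or sized automatically from the worker count is exactly the open question above.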





Thanks,
Linda


