
Re: Re: does rock type deny being dedicated to specific process ??

On 27/10/2013 8:01 p.m., Ahmad wrote:
Hi Amos,
I read bad news about rock when the rock dir is shared between processes; I read
that it reduces the hit ratio!

I read from
http://wiki.squid-cache.org/Features/RockStore
It says:
"Objects larger than 32,000 bytes cannot be cached when cache_dirs are
shared among workers. Rock Store itself supports arbitrary slot sizes, but
disker processes use IPC I/O (rather than Blocking I/O) which relies on
shared memory pages, which are currently hard-coded to be 32KB in size. You
can manually raise the shared page size to 64KB or even more by modifying
Ipc::Mem::PageSize(), but you will waste more RAM by doing so. To
efficiently support shared caching of larger objects, we need to teach Rock
Store to read and write slots in chunks smaller than the slot size."

As I understand it, the max object size for disk caching will be 32 KB,
am I correct?
I can see this will make for slow writing to hard disks and slow caching!

Am I correct?

*For things stored in the rock cache_dir* only. Non-SMP cache_dir types such as AUFS still cache larger items.

I'm not sure if the limitation applies to memory-cached objects, but when SMP is enabled that is likely as well.

If this is a critical issue for you, please try out the large-rock experimental feature branch. It has changes which remove those limitations, and also includes a collapsed-forwarding port from squid-2.7 to allow backend fetches to be HIT on before they have finished arriving.
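As a rough sketch of living with the limit in the meantime (not your actual config; the 3-worker count, paths and sizes here are made-up examples), you can keep small objects in the shared rock dir and send larger ones to per-worker non-SMP AUFS dirs:

  workers 3

  # shared rock dir; objects above 32KB cannot go here while it is SMP-shared
  cache_dir rock /var/cache/squid/rock 10000 max-size=32768

  # non-SMP AUFS dirs for the larger objects, one per worker
  if ${process_number} = 1
  cache_dir aufs /var/cache/squid/aufs1 50000 16 256 min-size=32769
  endif
  if ${process_number} = 2
  cache_dir aufs /var/cache/squid/aufs2 50000 16 256 min-size=32769
  endif
  if ${process_number} = 3
  cache_dir aufs /var/cache/squid/aufs3 50000 16 256 min-size=32769
  endif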

That's why I want to give each of my rock-type hard disks a single
process!

The limitation applies to the rock storage type regardless of SMP sharing. It is designed to work within those same limits.

=================
Also, I'm not understanding you here:
* If you use ${process_number} or
${process_name} macros these channels are never set up and things WILL break.
*

????

With three workers and a rock cache there are actually 5 processes running:

kid1 - worker #1  ... ${process_number} = 1
kid2 - worker #2  ... ${process_number} = 2
kid3 - worker #3  ... ${process_number} = 3
kid4 - rock disker #1 ... ${process_number} = 4
kid5 - coordinator  ... ${process_number} = 5

[I'm not completely sure I remember the order between coordinator and disker correctly; it may be the other way around.]
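For example, a hypothetical config like this (path and size made up) is all it takes to get those 5 kids:

  workers 3
  cache_dir rock /var/cache/squid/rock 10000

Squid starts kid1-kid3 as the workers, one disker for the rock dir, and the coordinator.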

If you configure each process to access a different FS directory name for the rock dir, you end up with the disker creating a rock DB at /rock4 while the backend workers try to use /rock2 and /rock3, and the coordinator thinks the rock dir exists at /rock5.

* None of the processes will accept SMP packets about altering or fetching rock dir contents in an area they are not configured to use.
* The workers will try to connect to diskers set up for /rock2 and /rock3 - which do not exist. This is the shm_open connection error you see.
* The other /rock4 message is the disker or coordinator trying to open its end to receive messages.
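In config terms the difference looks roughly like this (hypothetical paths, not copied from your squid.conf):

  # BROKEN: each kid expands the macro differently, so workers, disker
  # and coordinator all look for the rock DB in different directories
  cache_dir rock /rock${process_number} 10000

  # WORKING: one literal path that every kid shares
  cache_dir rock /rock 10000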

Amos



