Hi Amos,

I read some bad news about rock storage when it is shared between processes: I read that it reduces the hit ratio. From http://wiki.squid-cache.org/Features/RockStore it says:

/Objects larger than 32,000 bytes cannot be cached when cache_dirs are shared among workers. Rock Store itself supports arbitrary slot sizes, but disker processes use IPC I/O (rather than Blocking I/O) which relies on shared memory pages, which are currently hard-coded to be 32KB in size. You can manually raise the shared page size to 64KB or even more by modifying Ipc::Mem::PageSize(), but you will waste more RAM by doing so. To efficiently support shared caching of larger objects, we need to teach Rock Store to read and write slots in chunks smaller than the slot size./

As I understand it, the maximum object size for disk caching will be 32 KB. Am I correct? I expect that means slow writing to the hard disks and slow caching. Am I correct? That is why I want to give each of my rock-type hard disks a dedicated single process.

=================

Also, I don't understand what you mean here:

* If you use ${process_number} or ${process_name} macros these channels are never setup and things WILL break. *

Regards,
Dr.x
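
P.S. To make my idea concrete, here is roughly the squid.conf layout I have in mind for giving each rock disk its own process; the worker count, disk paths, and sizes are just placeholders:

  workers 2

  # dedicate one rock cache_dir to each worker via conditionals,
  # so the dirs are not shared between processes
  if ${process_number} = 1
  cache_dir rock /cache/disk1 100000
  endif
  if ${process_number} = 2
  cache_dir rock /cache/disk2 100000
  endif

Is this the kind of setup your warning about ${process_number} refers to?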