> This is how Rock store does it, essentially: Rock store index does not
> store the real location of the object on disk but computes it based on
> the hash value.

Sorry, then I misunderstood something when reading some rock code a while
ago. To me, in essence, it looked like rock picks one (or, for large rock,
several) of the available "slots" to cache an object, and keeps the
hash -> slot mapping in the in-memory table. So, on restart, squid has to
scan all slots on disk to rebuild that table. Which means the mapping
URL-hash -> slot_# is _not_ fixed (predictable).

>> Positive consequence: No rebuild of the in-memory table necessary, as
>> there is none. Avoids the time-consuming rebuild of the rock storage
>> table from disk.

> If you do not build the index,
> you have to do a disk I/O to fetch the first slot of the candidate
> object on _every_ request.

Not necessarily a disk I/O, but an I/O: the underlying OS
buffering/caching still applies. Besides, for a HIT you have to do the
I/O anyway. So the number of "unnecessary" disk I/Os would be
(squid MISSes) minus (the disk blocks already resident in the OS buffer
cache). Which suggests a good compromise: direct hashing, plus slow
population of an optional translation table.

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Automatic-StoreID-tp4665140p4665204.html
Sent from the Squid - Users mailing list archive at Nabble.com.