
Re: Re: What are recommended settings for optimal sharing of cache between SMP workers?


 



On 02/19/2014 02:08 AM, Rajiv Desai wrote:
> When the cache is nearing capacity, will the collision rate increase
> dramatically?

The answer depends on what collisions you are talking about:

* Hash collisions: Two different URLs having the same hash value. The
probability of such a collision depends on the hashing algorithm and its
parameters such as the number of possible hash values. An ideal
algorithm would yield a 1/n probability for n possible hash values. This
probability does not depend on the cache fullness ratio.

* Cache evictions: A new cache entry evicting an old cache entry. Even
with a perfect hash function, the probability of a cache eviction in a
full cache is 100%.



> Is there any special handling to reduce collisions and follow lru eviction?

Rock store does not support LRU today. A good implementation of a
lockless LRU algorithm would be welcomed, but it is far from trivial.
Rock's focus is on large caches, where LRU probably matters less, as
discussed in the second bullet below.
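For anyone tempted to try: a common lock-free *approximation* of LRU is the CLOCK scheme, which keeps one atomic "referenced" bit per slot instead of a linked list. The sketch below is illustrative only, with hypothetical names, and is not Squid code:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// CLOCK sketch: an atomic "referenced" bit per slot approximates LRU
// without locks. A hit sets the bit; eviction sweeps a shared hand,
// clearing bits until it finds one that was already clear.
class ClockSweep {
public:
    explicit ClockSweep(size_t slots) : referenced(slots), hand(0) {}

    // Record a hit on a slot.
    void touch(size_t slot) {
        referenced[slot].store(true, std::memory_order_relaxed);
    }

    // Pick an eviction victim: the first swept slot whose bit was clear.
    size_t pickVictim() {
        for (;;) {
            const size_t slot =
                hand.fetch_add(1, std::memory_order_relaxed) % referenced.size();
            // Clear the bit atomically; evict only if it was already clear.
            if (!referenced[slot].exchange(false, std::memory_order_relaxed))
                return slot;
        }
    }

private:
    std::vector<std::atomic<bool>> referenced;
    std::atomic<size_t> hand;
};
```

The hard part in a real shared-memory store is not this loop but coordinating the victim choice with concurrent readers of the slot, which is exactly why this remains non-trivial.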


There are several kinds of cache evictions folks may worry about,
including these two:

* A cache entry is evicted when the cache is not full: While annoying,
this is not really important in most cases because the cache will
eventually be full. In most cases, it is best to optimize the
performance of a full cache instead. This is the approach Rock
implementation uses (no need to spend cycles on hash chains and such
because we have to evict an entry anyway).

* A more "valuable" cache entry is evicted when there are less valuable
entries still in the cache: For most folks, entry "freshness" is its
value (hence LRU), but there are other value metrics, of course.
These kinds of evictions may decrease various hit ratios, especially for
small caches. The larger your cache is, the more garbage it stores
(regardless of the value metric), and the lower the probability that
something valuable will be evicted when the cache is in a steady (full)
state.


HTH,

Alex.



> On Tue, Feb 18, 2014 at 10:30 PM, Rajiv Desai <rajiv@xxxxxxxxxxxxxx> wrote:
>> On Tue, Feb 18, 2014 at 9:28 PM, Alex Rousskov
>> <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
>>> On 02/18/2014 04:11 PM, Rajiv Desai wrote:
>>>
>>>> It seems like there is a bug where an object overwrites previously
>>>> written object:
>>>
>>> This is not really a bug -- hash collisions do happen, as you have
>>> discovered and Rock store resolves them by overwriting older entries if
>>> possible (or not caching the newer ones otherwise). The smaller your
>>> cache, the more collisions you will have for the same number of unique
>>> cache keys.
>>>
>>> How many total slots does your rock cache_dir have?
>>
>> 13107198 (200 GB with 16KB slots).
>>
>> I think I was thrown off by the hit % in mgr:info stats. Are these
>> stats accurate when using 8 SMP workers?
>> Is the % underreported? I couldn't find an absolute value of
>> hits/misses from stats. :/
>> Any pointers?
>>
>>>
>>>
>>>> Needs a better hash as pointed out by the TODO :)
>>>> I will send out a patch.
>>>
>>> Thank you. No hash function will eliminate collisions though. If
>>> somebody wants to eliminate side-effects of hash collisions on recently
>>> added objects, they would have to implement a different anchor
>>> allocation algorithm that resolves hash collisions through chains (for
>>> example). I doubt that would be easy though!
>>>
>>>
>>
>> Yes, I realized that as I read the code further. I will try to give it
>> a shot but I agree that this will be non-trivial.
>> Currently sharding data across multiple cache_dirs will reduce
>> collisions, correct?
>>
>>>> Is the key guaranteed to be a null terminated string?
>>>
>>> No. The key is a 128-bit opaque buffer, essentially.
>>>
>>>
>>>> I intend to use std::tr1::hash
>>>
>>> Please keep in mind that the key is an MD5 sum already. See
>>> store_key_md5.cc. I am not a hashing expert by any means, but the MD5
>>> sum should not really need much further hashing.
>>>
>>> It is possible that the k[0]+k[1] sum in the code below should be
>>> removed in favor of either k[1] or k[0] alone, but again, this may not
>>> have any significant effect (except for any specific case where the two
>>> given URLs used to collide).
>>>
>>
>> I tried k[0] ^ k[1] which seemed to provide fewer collisions (67
>> collisions in 39129 GETs).
>> Haven't quantified and compared in detail though.
>>
>>
>>>
>>>> <code>
>>>> sfileno
>>>> Ipc::StoreMap::anchorIndexByKey(const cache_key *const key) const
>>>> {
>>>>     const uint64_t *const k = reinterpret_cast<const uint64_t *>(key);
>>>>     // TODO: use a better hash function
>>>>     return (k[0] + k[1]) % shared->limit;
>>>> }
>>>> </code>
>>>
>>>
>>> HTH,
>>>
>>> Alex.
>>>
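For anyone who wants to experiment with the quoted anchorIndexByKey() outside Squid, here is a hedged standalone sketch. The types are simplified stand-ins (sfileno and cache_key are Squid's), memcpy replaces the original reinterpret_cast to avoid alignment assumptions, and the XOR fold is the variant tried above:

```cpp
#include <cstdint>
#include <cstring>

typedef unsigned char cache_key; // stand-in for Squid's cache_key

// Fold the 128-bit MD5 store key into an anchor index in [0, limit).
// memcpy avoids the alignment assumptions of the original
// reinterpret_cast<const uint64_t *>(key).
static int64_t anchorIndexByKey(const cache_key *key, int64_t limit)
{
    uint64_t k[2];
    std::memcpy(k, key, sizeof(k));
    return (k[0] ^ k[1]) % limit; // XOR variant tried earlier in the thread
}
```

Since the key is already an MD5 sum, both halves are effectively uniform; XOR (or either half alone) mainly avoids the carry-driven correlation of the k[0]+k[1] sum, and no fold can push collisions below the birthday bound.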




