Re: [PATCH] Limit the size of the new delta_base_cache

On Mon, 19 Mar 2007, Linus Torvalds wrote:
> 
> Which totally throws your argument out of the window. It's simply not true 
> any more: the cache will *not* be more effective the larger the objects 
> are, because you are ignoring the fact that adding a large object will 
> *remove* many small ones.

And btw, if you are adding a delta-base object that is bigger than 10%-20% 
(those particular numbers taken out of where the sun don't shine, but they 
are reasonable) of your cache size, you are pretty much *guaranteed* to be 
removing many small ones.

Why? 

The whole *point* of the delta-base cache is that it wants to avoid the 
O(n**2) costs of the long delta-chains. So a delta-base object on its own 
is almost not interesting for the cache: the cache comes into its own only 
if you have a *chain* of delta-base objects.

So rather than thinking about "one big object", you need to think about "a 
chain of objects", and realize that if one of them is big, then the others 
will be too (otherwise they'd not have generated a delta-chain in the 
first place).
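
For concreteness, here is a toy cost model of what the cache buys you on
a chain (this is not git's code; it just counts delta applications, and
it assumes object i on the chain is stored as a delta against object
i-1):

#include <stdio.h>

/*
 * Toy cost model, not git's code: object i on a delta chain is stored
 * as a delta against object i-1, so expanding it needs object i-1
 * expanded first.  "Cost" simply counts delta applications when every
 * object on the chain gets read once.
 */
static long cost_without_cache(int depth)
{
        long cost = 0;
        int i;

        /* each read re-expands the whole chain below it: 1+2+...+depth */
        for (i = 1; i <= depth; i++)
                cost += i;
        return cost;
}

static long cost_with_cache(int depth)
{
        /* each base is found already expanded: one application per object */
        return depth;
}

int main(void)
{
        int depth = 10;         /* the chain depth limit mentioned below */

        printf("depth %d chain: %ld applications uncached, %ld cached\n",
               depth, cost_without_cache(depth), cost_with_cache(depth));
        return 0;
}

Reading every object on a depth-10 chain costs 55 delta applications
without the cache and 10 with it.  That quadratic-versus-linear
difference is the O(n**2) cost the cache is there to avoid, and a lone
delta base with no chain behind it sees none of the benefit.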

So at an object size of 10-20% of the total allowed cache size, and a 
total cache _index_ of 256 entries, if you start adding big objects, you 
can pretty much be guaranteed that either

 (a) the delta-base cache won't be effective for the big object anyway, 
     since there's just one or two of them (and the cache size limit isn't 
     triggered)
OR
 (b) you start probing the cache size limit, and you'll be throwing out 
     many small objects (rough numbers in the sketch below).
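
To put rough numbers on case (b), here is a toy model of a byte-bounded
cache that throws out its oldest entries until a new object fits (again
not git's actual delta_base_cache code; the 16 MB limit, the 256-entry
index and the object sizes are all made-up illustration values):

#include <stdio.h>
#include <string.h>

/*
 * Toy model of a byte-bounded object cache, not git's actual
 * delta_base_cache.  The 16 MB limit, the 256-entry index and the
 * object sizes used below are illustrative assumptions only.
 * Oldest entries are thrown out until a new object fits.
 */
#define CACHE_LIMIT (16 * 1024 * 1024)
#define MAX_ENTRIES 256

static size_t entry[MAX_ENTRIES];       /* cached object sizes, oldest first */
static int nr;
static size_t total;

static int cache_add(size_t size)
{
        int evicted = 0;

        while (nr && total + size > CACHE_LIMIT) {
                total -= entry[0];
                nr--;
                memmove(entry, entry + 1, nr * sizeof(entry[0]));
                evicted++;
        }
        if (nr < MAX_ENTRIES) {
                entry[nr++] = size;
                total += size;
        }
        return evicted;
}

int main(void)
{
        int i, evicted;

        /* fill the cache with small (100 kB) delta bases ... */
        for (i = 0; i < 150; i++)
                cache_add(100 * 1024);

        /* ... then add one object that is just under 20% of the limit */
        evicted = cache_add(3 * 1024 * 1024);
        printf("one 3 MB insert evicted %d small entries\n", evicted);
        return 0;
}

With those made-up numbers the single 3 MB insert (just under 20% of
the limit) evicts 17 of the 100 kB bases in one go, and a chain of such
big objects repeats that at every step.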

Finally, since the delta-base cache is likely not as useful for blobs
anyway (you don't tend to use tons of big related blobs, and if you do,
the real cost is likely the xdl diff generation between them rather
than the object creation itself!), you're also likely optimizing the
wrong case, since big delta-base objects are almost certainly going to
be blobs.

And no, I don't have any "hard numbers" to back this up, but if you want 
another argument, realize that delta chains are limited to a depth of ten, 
and if a *single* chain can overflow the delta-base cache, then the cache 
ends up being 100% *in*effective. And do the math: if the object size is 
10-20% of the total allowed cache size, how many objects in the chain are 
you going to fit in the cache?
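
Spelling that arithmetic out with the figures already used above:

    object size       ~  10% .. 20% of the cache limit
    objects that fit  =  limit / object size  =  10 down to 5
    chain depth       =  10

So at 10% per object a single chain already fills the whole cache, and
at 20% only half of it fits: by the time you have walked the chain
once, the bases you would want to hit on the next access have already
been thrown out again.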

So there are multiple different reasons to think that big objects are 
actually *bad* for caching when you have a total size limit, not good.

		Linus
