Re: [PATCH] Limit the size of the new delta_base_cache

On Mon, 19 Mar 2007, Linus Torvalds wrote:

> 
> 
> On Mon, 19 Mar 2007, Shawn O. Pearce wrote:
> >
> > Round two, based on comments on IRC from Junio:
> 
> One more change: please don't even *add* objects to the cache that are 
> bigger than 10-20% of the cache limit!
> 
> If you start adding big objects, and you have some cache limit, that will 
> make the cache seriously less useful by causing eviction of potentially 
> much more interesting objects, and also obviously causing the cache code 
> itself to spend much more time picking objects (since it's enough to have 
> just a few big objects to make the cache eviction decide it needs to 
> evict).
> 
> Limiting by size is also effective since anything that is more than a 
> megabyte is likely to be a blob anyway, and thus much less useful for 
> caching in the first place. So there are just tons of reasons to say 
> "don't even add it in the first place" if you decide to go with any limit 
> at all.
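
For concreteness, that size gate would amount to something like the
following untested sketch (the names, the 16MB budget and the flat
20% figure are all made up for illustration; this is not git's
actual code):

#include <stdio.h>
#include <stddef.h>

static size_t cache_limit = 16 * 1024 * 1024;   /* total byte budget */

/*
 * Refuse to even consider objects bigger than 20% of the budget,
 * so one big blob cannot flush many small, hot delta bases.
 */
static int worth_caching(size_t objsize)
{
        return objsize <= cache_limit / 5;
}

int main(void)
{
        printf("cache a 1MB object? %d\n", worth_caching(1024 * 1024));
        printf("cache an 8MB object? %d\n", worth_caching(8 * 1024 * 1024));
        return 0;
}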

On the other hand......

Either you parse blobs or you don't.  If you only do logs with path 
limiting then you won't add blobs to the cache anyway.

If you do end up adding blobs to the cache, that means you have blob 
deltas to resolve, and even that operation should benefit from the cache 
regardless of the object size.

In fact, the bigger the object, the more effective the cache becomes, 
because you certainly don't want a complete breakdown in performance 
just because a blob happened to cross the 20% threshold.

And because we usually walk objects from newest to oldest, and because 
deltas are usually oriented in the same direction, we only need to tweak 
the current eviction loop a bit so that, on average, the oldest objects 
are evicted first; that way the current base will still be there for the 
next delta depth the next time around.  Given that the hash is derived 
from the object's offset, that means starting the loop at the next entry 
index instead of zero, which should do the trick pretty well.
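
Roughly like this untested sketch (the slot count, sizes and structures 
are all made up; it only illustrates starting the victim scan at the 
next index instead of zero):

#include <stdlib.h>
#include <stddef.h>

#define CACHE_SLOTS 256                 /* illustrative slot count */
#define SLOT_SIZE   (128 * 1024)        /* illustrative object size */

struct cache_slot {
        void *data;
        size_t size;
};

static struct cache_slot cache[CACHE_SLOTS];
static size_t cache_used;
static size_t cache_limit = 16 * 1024 * 1024;

static void evict_until_under_limit(unsigned just_filled)
{
        unsigned i = just_filled;

        /*
         * Start hunting for victims at the slot *after* the one we
         * just filled, wrapping around, instead of restarting at
         * slot 0 every time.  Since slots are picked from a hash of
         * the pack offset and we mostly walk history newest to
         * oldest, the slots behind the current one tend to hold the
         * oldest bases, so on average they get evicted first.
         */
        while (cache_used > cache_limit) {
                i = (i + 1) % CACHE_SLOTS;
                if (!cache[i].data)
                        continue;
                cache_used -= cache[i].size;
                free(cache[i].data);
                cache[i].data = NULL;
                cache[i].size = 0;
        }
}

int main(void)
{
        unsigned i;

        /* Overfill the cache: 256 x 128KB = 32MB against 16MB. */
        for (i = 0; i < CACHE_SLOTS; i++) {
                cache[i].data = malloc(SLOT_SIZE);
                if (!cache[i].data)
                        continue;       /* ignore OOM in this toy */
                cache[i].size = SLOT_SIZE;
                cache_used += cache[i].size;
        }
        evict_until_under_limit(42);    /* pretend slot 42 was just filled */
        return 0;
}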

Of course, if you end up in a state where you have to prune the cache 
continuously, you'll spend more cycles picking the object to evict, but 
that is still likely to be much less work than re-entering the O(n!) 
behavior with deflate that we had without the cache, which would be even 
worse here since big objects are exactly what's involved.

So I wouldn't add any rule of that sort unless caching big objects is 
actually proven to be bad.


Nicolas