On Sat, 17 Mar 2007, Junio C Hamano wrote:
>
> This largely would depend on the project, but if a blob that is
> cached is 20kB each, a 1024-entry cache would grow to 20MB. We
> may need to introduce early eviction of cached objects with a
> total cache size limit, configurable per repository.

One thing that I considered was to limit the delta-base cache to just
tree entries. Those tend to be the really performance-sensitive ones:
by the time you actually unpack a blob entry, you're going to do
something with that *single* entry anyway (like compare it to another
blob), and the cost of unpacking it is unlikely to be all that
noticeable.

That said, it was just simpler to do it unconditionally, and it
obviously *works* fine regardless of the object type, so limiting it
to trees would be a bit sad.

And since the intensive tree operations tend to happen in a separate
phase (i.e. the commit simplification phase) from the blob operations
(say, doing "git log -p <pathspec>"), I suspect that cache locality
would still remain good.

So I didn't do anything along the lines of "only cache for case
Xyzzy". But yes, especially if a project has big blobs, it might make
sense to limit the cache by the total size of the cached entries in
some way.

		Linus
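
To make the size-limiting idea at the end concrete, here is a minimal
sketch of what early eviction with a total-size cap might look like.
Everything in it (the names, the LRU list, the 16MB default) is
illustrative only, under the assumption that the cache owns the
buffers it holds; it is not git's actual sha1_file.c code:

/*
 * Minimal sketch of a delta-base cache with a total-size cap and
 * LRU eviction.  All names and numbers here are illustrative; this
 * is not git's actual implementation.
 */
#include <stdlib.h>

struct cache_entry {
	struct cache_entry *prev, *next;	/* doubly-linked LRU list */
	void *data;				/* unpacked object contents */
	unsigned long size;			/* byte size of data */
};

/* Empty circular list head; hottest entries live at the front. */
static struct cache_entry lru = { &lru, &lru, NULL, 0 };
static unsigned long cache_used;		/* total bytes cached */
static unsigned long cache_limit = 16 * 1024 * 1024; /* per-repo knob */

static void lru_detach(struct cache_entry *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

static void lru_attach_front(struct cache_entry *e)
{
	e->prev = &lru;
	e->next = lru.next;
	lru.next->prev = e;
	lru.next = e;
}

/*
 * Take ownership of a freshly unpacked object and cache it,
 * evicting the coldest entries first if the cap would be exceeded.
 */
static void cache_add(void *data, unsigned long size)
{
	struct cache_entry *e;

	while (cache_used + size > cache_limit && lru.prev != &lru) {
		struct cache_entry *cold = lru.prev;	/* coldest entry */
		lru_detach(cold);
		cache_used -= cold->size;
		free(cold->data);
		free(cold);
	}

	/*
	 * Note: an object bigger than the cap still gets cached here
	 * once the list drains; a real implementation would probably
	 * refuse to cache it at all.
	 */
	e = malloc(sizeof(*e));
	e->data = data;
	e->size = size;
	lru_attach_front(e);
	cache_used += size;
}

A lookup helper would move a hit back to the front of the list, so
repeatedly-walked tree bases stay resident while one-shot blobs age
out on their own.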
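
The tree-only restriction discussed above (which the code deliberately
does not do, caching unconditionally instead) would then just be a type
check in front of that helper. OBJ_TREE here stands for whatever type
tag the unpack path assigns to trees, and the caller is hypothetical:

	/* Hypothetical caller, right after unpacking a delta base: */
	if (type == OBJ_TREE)
		cache_add(data, size);	/* trees: likely re-walked soon */
	/* blobs and other types would go straight to the consumer,
	 * uncached, since they are typically used exactly once */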