On Sat, 17 Mar 2007, Linus Torvalds wrote:
>
> (a) it probably doesn't really matter a lot (but I don't have the
> numbers)

Well, to some degree I obviously *do* have the numbers. I have numbers
showing that we used to re-generate the object data over five *hundred*
times per object in some cases, and that I got the average delta-base
re-generation down from 20x to 1.1-1.3x depending on cache size.

In contrast, the "use the delta-base cache also for non-delta queries"
approach fairly obviously cannot touch those kinds of numbers. We might
avoid a *few* object generation cases, but we're not looking at factors
of 20 for any kind of sane case.

So I do think that a higher-level caching approach can work too, but
it's going to be more effective in other areas:

 - get rid of some ugly hacks (like the "save_commit_buffer" thing I
   mentioned)

 - possibly help some insane loads (eg cases where we really *do* end
   up seeing the same object over and over again, perhaps simply
   because some idiotic automated commit system ends up switching
   between a few states back and forth).

I really think the "insane loads" thing is unlikely, but I could
construct some crazy usage scenario where a cache of objects in general
(and not just delta bases) would work. I don't think it's a very
realistic case, but who knows - people sometimes do really stupid
things.

		Linus
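For context, here is a minimal sketch of the kind of delta-base cache
being discussed: a small table of already-inflated objects keyed by
object hash, so that expanding many deltas against the same base does
not re-generate that base every time. This is an illustration only,
not git's actual implementation; the names (cache_lookup, cache_insert,
CACHE_SLOTS) and the direct-mapped, overwrite-on-collision design are
assumptions made up for the example.

/*
 * Minimal sketch of a delta-base cache -- NOT git's actual
 * implementation.  Keep inflated base objects around so that the
 * next delta against the same base doesn't re-generate it.
 */
#include <stdlib.h>
#include <string.h>

#define HASH_LEN 20		/* SHA-1 */
#define CACHE_SLOTS 256		/* assumed cache size */

struct cache_entry {
	unsigned char hash[HASH_LEN];
	void *data;		/* inflated object data */
	unsigned long size;
	int valid;
};

static struct cache_entry cache[CACHE_SLOTS];

/* Slot index from the first bytes of the (already random) hash. */
static unsigned int slot_of(const unsigned char *hash)
{
	return ((hash[0] << 8) | hash[1]) % CACHE_SLOTS;
}

/* Return the cached object data, or NULL on a miss. */
void *cache_lookup(const unsigned char *hash, unsigned long *size)
{
	struct cache_entry *e = &cache[slot_of(hash)];

	if (!e->valid || memcmp(e->hash, hash, HASH_LEN))
		return NULL;
	*size = e->size;
	return e->data;
}

/* Insert a copy of an inflated object; on collision, evict the old one. */
void cache_insert(const unsigned char *hash, const void *data,
		  unsigned long size)
{
	struct cache_entry *e = &cache[slot_of(hash)];

	if (e->valid)
		free(e->data);
	e->data = malloc(size);
	if (!e->data) {
		e->valid = 0;	/* allocation failed, drop the entry */
		return;
	}
	memcpy(e->data, data, size);
	memcpy(e->hash, hash, HASH_LEN);
	e->size = size;
	e->valid = 1;
}

A production cache would typically also bound the total memory pinned
by cached bases and prefer LRU-style eviction over blind overwrite; the
direct-mapped table above is just the simplest structure that shows the
hit/miss behavior the thread is arguing about.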