On Tue, 3 Apr 2007, Shawn O. Pearce wrote:

> Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> > On Tue, 3 Apr 2007, Shawn O. Pearce wrote:
> > > If it's the missing-object lookup that is expensive, maybe we should
> > > try to optimize that. We do it enough already in other parts of
> > > the code...
> >
> > Well, for all other cases it's really the "object found" case that is
> > worth optimizing for, so I think optimizing for "no object" is actually
> > wrong, unless it also speeds up (or at least doesn't make it worse) the
> > "real" normal case.
>
> Right. But maybe we shouldn't be scanning for packfiles every
> time we don't find a loose object. Especially if the caller is in
> a context where we actually *expect* to not find said object, like
> half of the time... say in git-add/update-index. ;-)

First, I truly believe we should have a 64-bit pack index, and fewer, larger packs rather than many small packs.

Which leaves us with the actual pack index lookup itself. At that point the cost of finding an existing object and the cost of finding that a given object doesn't exist are about the same, aren't they? Optimizing that lookup is going to benefit both cases.

Nicolas