Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> Actually, even in the normal workflow it will do all the extra unnecessary
> work, if only because of the lookup costs of *not* finding the entry.
>
> Lookie here:
>
>  - git index-pack of the *git* pack-file in the v2.6/linux directory (zero
>    overlap of objects)
>
>    With --paranoid:
>
>        2.75user 0.37system 0:03.13elapsed 99%CPU
>        0major+5583minor pagefaults
>
>    Without --paranoid:
>
>        2.55user 0.12system 0:02.68elapsed 99%CPU
>        0major+2957minor pagefaults
>
> See? That's the *normal* workflow. Zero objects found. 7% CPU overhead
> from just the unnecessary work, and almost twice as much memory used. Just
> from the index file lookup etc. for a decent-sized project.

OK, but what about that case with unpack-objects? Didn't we do all of this
work there too, to check whether the object already exists?

During update-index, write-tree and commit-tree, don't we also do a fair
amount of work (per object, anyway) to check for an object that doesn't
exist yet?

So even with --paranoid (aka what we have now), index-pack should still be
faster than unpack-objects for any sizeable transfer, and is just as
"safe".

If it's the missing-object lookup that is expensive, maybe we should try to
optimize that; we do it enough already in other parts of the code. A rough
sketch of that lookup follows below.

-- Shawn.
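
For illustration, here is a minimal sketch of the lookup being discussed,
assuming a sorted table of 20-byte SHA-1 entries fronted by a 256-entry
cumulative fan-out, which is roughly the shape of a pack .idx search. The
function name find_in_index() and the toy data in main() are invented for
this sketch; this is not git's actual code.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HASH_LEN 20  /* SHA-1 */

/*
 * fanout[b] holds the number of entries whose first hash byte is <= b,
 * so all entries starting with byte b live in [fanout[b-1], fanout[b]).
 */
static int find_in_index(const unsigned char *sha1,
                         const unsigned char *entries, /* sorted, nr * HASH_LEN */
                         const uint32_t *fanout)
{
	uint32_t lo = sha1[0] ? fanout[sha1[0] - 1] : 0;
	uint32_t hi = fanout[sha1[0]];

	while (lo < hi) {
		uint32_t mi = lo + (hi - lo) / 2;
		int cmp = memcmp(entries + (size_t)mi * HASH_LEN, sha1, HASH_LEN);
		if (!cmp)
			return (int)mi;  /* object already present */
		if (cmp < 0)
			lo = mi + 1;
		else
			hi = mi;
	}
	return -1;  /* not found: the common case in a normal transfer */
}

int main(void)
{
	/* Two toy "objects": an all-zero hash, and one starting with 0xab. */
	unsigned char entries[2 * HASH_LEN] = { 0 };
	unsigned char probe[HASH_LEN] = { 0 };
	uint32_t fanout[256];
	int b;

	entries[HASH_LEN] = 0xab;
	for (b = 0; b < 256; b++)
		fanout[b] = (b >= 0xab) ? 2 : 1;

	probe[0] = 0xab;
	printf("hit at slot %d\n", find_in_index(probe, entries, fanout));
	probe[0] = 0x42;
	printf("miss: %d\n", find_in_index(probe, entries, fanout));
	return 0;
}

Note that the miss case still pays the full price: a probe that finds
nothing does the fan-out load plus O(log n) memcmp() calls against a mapped
index, which plausibly lines up with the extra minor page faults in the
numbers Linus quoted above.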