On Thu, 2010-07-08 at 07:28 -0400, Jeff King wrote:
> On Tue, Jul 06, 2010 at 12:53:36PM -0400, tytso@xxxxxxx wrote:
>
> Whatever we do with the optimization, I do agree with your suggestion at
> least for "git commit" to avoid making such commits. Rejecting them
> during fetches and pushes would be nice, too, but should probably just
> be a warning at first, in case you have to pull from somebody with an
> older git...
[snip]
> Yeah. I think the real question is what the default for that parameter
> should be: pessimistic but always correct, optimistic but possibly
> incorrect in the face of skew, or auto-tuned per-repository.
>
> -Peff

I think these two go hand-in-hand, and together they would resolve most
of my issues with the optimization: auto-tune, starting pessimistically,
and switch to the more optimistic setting only after something like gc
has verified that the repository is actually free of skew. On fetch from
an older git (which I expect to happen frequently; I add remotes much
more often than I do a straight "clone"), print a warning and re-tune to
a value that accounts for the newly fetched skewed data.

My only other objection is more wishy-washy and/or lazy: currently a
commit does not need to know anything at all about the objects it
references in order to be a valid object, but saying "a commit's
timestamp must be equal to or greater than its parents'" means that a
tool (and by "tool" I mean a wretched abuse of cat-file and sed, which
is sometimes just faster to throw together than filter-branch) needs to
be more aware of what it is doing. Yes, it is a horrible abuse, but I
was always under the impression that low-level abuse of the system is
something git supports, by virtue of having such a simple object model.
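To make the abuse concrete, here is roughly the sort of throwaway
pipeline I have in mind; a sketch only, with the replacement timestamp
invented for illustration:

    # Dump the raw commit object behind HEAD.
    git cat-file commit HEAD |
    # Stamp in an arbitrary committer time (1278600000 is made up).
    sed 's/^\(committer .*>\) [0-9]* \([-+][0-9]*\)$/\1 1278600000 \2/' |
    # Write the edited object back and print the new sha1.
    git hash-object -t commit -w --stdin

...and then point a branch at the resulting sha1 with git update-ref.
Under the proposed rule, even a hack like that has to go inspect the
parent's committer time first instead of blindly pasting in whatever
timestamp it likes.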
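And for the auto-tune half above: the gc-time detection pass I am
imagining could be as dumb as the following (again only a sketch, and a
slow one; presumably the real check would live in C next to the
traversal code):

    # Flag any commit whose committer date precedes one of its parents'.
    # A clean run would mean the optimistic cutoff is safe for this repo.
    git rev-list --all --parents |
    while read commit parents; do
        ct=$(git show -s --format=%ct $commit)
        for p in $parents; do
            if [ "$ct" -lt "$(git show -s --format=%ct $p)" ]; then
                echo "skew: $commit predates its parent $p"
            fi
        done
    done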