On Mon, Jul 05, 2010 at 08:27:23AM -0400, Jeff King wrote:
> As you probably guessed from the specificity of the number, I wrote a
> short program to actually traverse and find the worst skew. It takes
> about 5 seconds to run (unsurprisingly, since it is doing the same full
> traversal that we end up doing in the above numbers). So we could
> "autoskew" by setting up the configuration on clone, and then
> periodically updating it as part of "git gc".
>
> That is perhaps over-engineering (and would add a few seconds to a
> clone), but I like that it would Just Work without the user doing
> anything.

As time progresses, the clock skew breakage should be less likely to
be hit by a typical developer, right? That is, unless you are
specifically referencing one of the commits which were skewed, two
years from now the chances of someone (who isn't doing code
archeology) getting hit by a problem should be small, right?

This seems to be definitely the case with "git tag --contains"; would
it be true for git name-rev and the other places that depend on
(roughly) increasing commit times?

If so, I could imagine the automagic scheme choosing a default that
only finds the worst skew in the past N months. This would speed
things up for users of repositories that have skews in the distant
past, at the cost of introducing potentially confusing edge cases for
people doing code archeology.

I'm not sure this is a good tradeoff, but given how rarely, in
practice, most developers go back in time more than, say, 12-24
months, maybe it's worth doing. What do you think?

						- Ted
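P.S. For concreteness, here is a minimal sketch (mine, not Jeff's
actual program) of the kind of traversal being discussed. It walks
all commits parents-first, tracks the newest committer date among
each commit's ancestors, and reports the worst skew, i.e. how far any
commit's date lags behind some ancestor's. An optional argument
limits the measurement to skews observed at commits from the past N
months, which is one plausible reading of the default I floated
above. The rev-list plumbing is real; the script name, the
per-ancestor definition of skew, and the 30-days-per-month
approximation are my own assumptions.

  #!/usr/bin/env python3
  # find-skew.py (hypothetical name): report the worst commit-date
  # skew in the current repository, optionally limited to skews seen
  # at commits from the past N months.
  import subprocess
  import sys
  import time

  months = int(sys.argv[1]) if len(sys.argv) > 1 else None
  cutoff = time.time() - months * 30 * 86400 if months else None

  # One line per commit, parents before children:
  #   "<sha> <committer-ts> <parent shas...>"
  out = subprocess.run(
      ["git", "rev-list", "--all", "--topo-order", "--reverse",
       "--format=%H %ct %P"],
      capture_output=True, text=True, check=True).stdout

  ts = {}       # sha -> committer timestamp
  max_anc = {}  # sha -> newest committer timestamp among ancestors
  worst = 0

  for line in out.splitlines():
      if line.startswith("commit "):
          continue  # rev-list prints a header before each format line
      sha, stamp, *parents = line.split()
      t = int(stamp)
      ts[sha] = t
      newest = 0
      for p in parents:
          # --topo-order --reverse means parents were already seen;
          # .get() just guards against grafts/shallow history.
          newest = max(newest, ts.get(p, 0), max_anc.get(p, 0))
      max_anc[sha] = newest
      # Skew at this commit: how much newer its newest ancestor is.
      if newest - t > worst and (cutoff is None or t >= cutoff):
          worst = newest - t

  print(f"worst skew: {worst} seconds")

Run with no argument it scans the whole history (the same full
traversal, so presumably about as long as the five seconds Jeff
measured); "find-skew.py 24" would count only skews observed in the
last two years.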