On Tue, Jul 06, 2010 at 04:31:43PM +0100, Will Palmer wrote:
> Is it wrong to expect that git perform poorly in the edge-cases (hugely
> skewed timestamps), but that it perform /correctly/ in all cases?
>
> Clearly, marking already-traversed histories was the right thing to do,
> and if I read correctly, made a good improvement on its own. But you
> seem to have crossed a line at some point between "optimization" and
> "potentially giving the wrong answer because it's faster"

When "it's faster" means somewhere between 100 and 1000 times faster, I
think we have to look at things a bit more closely. That's the
difference between a command being usable and not usable.

We would be much better off if our tools enforced the invariant that
committer times are always increasing. If, from the beginning, "git
commit" had refused to create new commits whose committer time was
earlier than that of their parent commit(s), and git-receive-pack had
refused to accept packs containing non-monotonically increasing commits,
or commits dated in the future according to its system clock, then these
optimizations would be completely valid. But we didn't, and we do have
skew in some repositories.

So the question is what to do going forward. One solution might be to
enforce this constraint from now on, with varying levels of strictness.
For completely new repositories, this becomes a no-brainer. For older
repositories, Jeff's idea of a tunable parameter, so that results are
correct given a maximum clock skew (which can be determined), would
allow us to have correctness _and_ performance.

					- Ted
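To make the proposed enforcement concrete, here is a minimal sketch of the check described above, written in Python rather than as a patch to git itself. The function name and signature are hypothetical; git ships no such check, and a real implementation would live in commit.c and receive-pack:

```python
import time

def rejects_commit(commit_time, parent_times, now=None):
    """Hypothetical enforcement sketch: reject a commit whose committer
    time precedes any parent's (non-monotonic history), or whose
    committer time lies in the future relative to the local clock.

    commit_time and parent_times are Unix timestamps; `now` defaults to
    the system clock but can be injected for testing."""
    if now is None:
        now = time.time()
    # Non-monotonic: child claims to be older than one of its parents.
    if any(commit_time < p for p in parent_times):
        return True
    # Future-dated relative to this machine's clock.
    return commit_time > now
```

Had "git commit" and git-receive-pack applied a check of this shape from day one, a date-ordered traversal could safely stop as soon as it walked past the cutoff timestamp.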
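Jeff's tunable-skew idea can be illustrated with a toy reachability walk. This is a sketch only, assuming a history represented as plain dicts (commit name to parents, commit name to committer timestamp), not git's actual data structures: the walk visits commits in decreasing date order and prunes anything older than the target's timestamp minus an allowed skew. With a skew of zero you get the fast but unsafe cutoff; a skew at least as large as the repository's worst clock error restores correctness at some cost in commits walked:

```python
import heapq

def reachable(start, target, parents, timestamp, max_skew=0):
    """Date-ordered walk from `start` toward older commits, pruning any
    commit whose committer time is below timestamp[target] - max_skew.

    parents: dict mapping commit -> tuple of parent commits
    timestamp: dict mapping commit -> committer time (Unix seconds)
    max_skew: maximum clock skew (seconds) the walk tolerates."""
    cutoff = timestamp[target] - max_skew
    seen = {start}
    # Max-heap on timestamp via negated keys: newest commit pops first.
    heap = [(-timestamp[start], start)]
    while heap:
        _, commit = heapq.heappop(heap)
        if commit == target:
            return True
        for parent in parents.get(commit, ()):
            if parent not in seen and timestamp[parent] >= cutoff:
                seen.add(parent)
                heapq.heappush(heap, (-timestamp[parent], parent))
    return False
```

A skewed history shows the trade-off: if an intermediate commit is misdated far in the past, the zero-skew walk prunes it and wrongly reports the target unreachable, while a sufficiently large max_skew gives the right answer.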