On 11/3/07, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Fri, 2 Nov 2007, Linus Torvalds wrote:
> >
> > The bad news is that it doesn't work well in this simplistic form, because
> > there is O(n**2) behaviour when replays *do* happen, i.e. we end up having
> > replays within replays [..]
>
> Gaah. The more I look at this, the more I think the topo sort should be
> done at the visualization side.
>
> It's really quite cheap to do the topo sort, *and* it's really quite cheap
> to do the tests that trigger the topo sort, but what's expensive is to
> re-output all the data again!
>
> The silly thing is, I think I've come up with an "almost optimal"
> solution, but it's so ugly that I'm a bit ashamed of it.
>
> That almost optimal solution is simply:
>  - get the first <n> (say: 100) commits, and topo-sort just them. Feed it
>    to the visualizer.
>  - the visualizer will now have enough to work with in order to show the
>    starting screen and set the cursor to the hourglass or whatever the
>    "wait for it" thing is.
>  - get the rest of the commits at our normal leisurely pace (whether it
>    is one second or 17).
>  - output the total number of commits (so that the visualizer can re-size
>    the slider and/or allocate some big array just once), topo-sort it all,
>    and output the full thing.
>
> It's disgusting. But it avoids the unnecessary data transfer - except for
> just the first 100 commits that get sent twice. And it gets what *I* look
> for, namely that "immediate" feel to the initial pop-up of the history.

It's not disgusting, it's human-perception oriented!

All this stuff is not needed to get the SHAs out faster, but to make the
user *think* they arrive faster. It's for strictly human consumption, so I
would say your "ugly" solution is the best one for me.

A bunch of revisions, just enough to let the user's eyes re-focus on the
new stuff (and a few hundred milliseconds have already elapsed by then),
while the real work goes on hidden in the background.

It's also easy on the client GUI side: simply discard everything and
reload as soon as the _correct_ data arrives.

So the new option could become:

git log --fast-output 100 500 --topo-order <...whatever...>

where git log outputs the first 100 commits as soon as it can and feeds
them to the visualizer. If the _normal_ commits are still not ready after
500 ms have elapsed, git log spits out another 100-commit chunk, and so on
at 500 ms intervals, until the good commits are ready. Then it outputs the
full thing.

It is very user-perception oriented, but hey, so is a GUI!

Marco

P.S.: A little optimization for small repositories would be for git log to
*wait* at most 500 ms before outputting the first 100-commit chunk, so
that for small repos (a few thousand revisions), or with a warmed-up
cache, the commits it outputs are already the good ones and no fakes are
needed at all!
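
A minimal C sketch of that producer loop, under the assumptions above
(--fast-output is a hypothetical option, the chunk size and interval are
just the numbers from the example, and the commit walking is faked with a
short sleep, so only the chunking/timing logic is illustrated):

    /*
     * Sketch of the proposed --fast-output behaviour: emit provisional
     * chunks of CHUNK_SIZE commits every CHUNK_MS milliseconds until the
     * full topo-sorted list is ready, then emit the real thing.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define CHUNK_SIZE 100
    #define CHUNK_MS   500

    static long long now_ms(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000LL + tv.tv_usec / 1000LL;
    }

    int main(void)
    {
        int total = 1000;            /* pretend the repo has 1000 commits */
        int loaded = 0;              /* commits walked so far */
        int flushed = 0;             /* commits already sent as "fakes" */
        long long last_flush = now_ms();

        while (loaded < total) {
            usleep(2000);            /* fake: walking one commit takes ~2 ms */
            loaded++;

            /*
             * Not done yet: if CHUNK_MS have elapsed and a new chunk's
             * worth of commits is available, flush it unsorted so the
             * visualizer has something to show.
             */
            if (now_ms() - last_flush >= CHUNK_MS &&
                loaded - flushed >= CHUNK_SIZE) {
                printf("provisional chunk: commits %d..%d\n",
                       flushed, flushed + CHUNK_SIZE - 1);
                fflush(stdout);
                flushed += CHUNK_SIZE;
                last_flush = now_ms();
            }
        }

        /*
         * The real output: total count first (so the GUI can size its
         * slider once), then the whole topo-sorted list.  The GUI simply
         * throws the provisional chunks away at this point.
         */
        printf("total: %d\n", total);
        printf("final topo-sorted list: commits 0..%d\n", total - 1);
        return 0;
    }

Because the first provisional chunk is only flushed once 500 ms have
passed, a small repository or a warmed-up cache finishes the walk before
the timer ever fires and no fakes are emitted, which is the optimization
in the P.S. The GUI side stays trivial: it discards all provisional
chunks the moment the total count and the final topo-sorted list arrive.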