Christian Couder wrote:
>>
>> I think this shows that the "skip ratio" heuristics based on the distance
>> in the "goodness scale" space does not help in avoiding commits that are
>> close in topological space. There may be cases where the version with the
>> patch gives fewer rounds, especially when the history is very linear, but
>> I was mostly interested in histories of at least thousands of commits,
>> which I think is what we should optimize for, not a toy linear history of
>> 100 commits.
>
> I get the same results as you, and I think that in these test cases "git
> bisect" was not stuck with having only untestable commits with the
> highest "goodness" values. So in these cases the original behavior does
> quite well, and that's why the updated behavior can't do better.

It's not entirely clear to me that this is any better than simply picking a
commit at random from the list of plausible commits -- in other words,
eliminate the commits we can totally rule out, and then just pick a random
commit from what remains.

This is not *quite* as crazy as it sounds; it has the advantage of being an
extremely simple algorithm which shouldn't have any pathological behaviors.
The average information gain from testing a randomly picked commit is
1/(2 ln 2) =~ 0.7213 bits, which corresponds to an increase of about 39% in
the number of bisection rounds (and thus the total bisect time) over a pure
binary search.

	-hpa

--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.
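
The 1/(2 ln 2) figure above can be sanity-checked numerically. The sketch
below (my own illustration, not part of git) integrates the binary entropy
H(x) = -x log2(x) - (1-x) log2(1-x) over a uniformly random split point x in
(0, 1), which is the expected information gained per test when the tested
commit is chosen at random from the plausible range:

```python
import math

def entropy(x):
    """Binary entropy H(x) in bits: information gained by a test
    that splits the remaining candidate range at fraction x."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

# Expected gain for a uniformly random split: integrate H(x) over
# (0, 1) with a midpoint rule, and compare to the closed form
# 1 / (2 ln 2).
n = 100_000
avg_gain = sum(entropy((i + 0.5) / n) for i in range(n)) / n
closed_form = 1.0 / (2.0 * math.log(2.0))

print(round(avg_gain, 4))     # 0.7213
print(round(closed_form, 4))  # 0.7213

# Rounds needed scale as 1/gain, so the slowdown relative to a
# perfect binary split (1 bit per round) is:
print(round((1.0 / closed_form - 1.0) * 100.0))  # 39 (percent)
```

The closed form follows from the integral of -x ln x over (0, 1) being 1/4,
so the expected entropy is 2 * (1/4) / ln 2 = 1/(2 ln 2).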