On 03/19/2012 04:30 PM, Jeff King wrote:
> On Mon, Mar 19, 2012 at 03:45:31PM +0100, Andreas Ericsson wrote:
>
>> On 03/18/2012 10:29 PM, darxus@xxxxxxxxxxxxxxx wrote:
>>> I'd like to be able to tell git only that I know the latest commit
>>> is bad, and have it go find a good commit, then do the bisecting.
>>> Maybe something like the opposite of a binary search: start with
>>> the last commit, then second to last, then 4th to last, 8th to
>>> last, etc., till it finds a good commit.
>>
>> Assuming the good commit is the 13th from HEAD, you'd get the same
>> number of attempts by just specifying a commit 100 revisions in the
>> past and doing the already-implemented binary search as you would
>> by trying 4 commits at a time to get at the good one.
>>
>> Binary search is a "divide and conquer" algorithm (running in
>> O(log n) time), so it handles extremely large datasets very
>> efficiently.
>
> Yeah. The OP's suggestion is to search backwards, increasing the
> stride exponentially. That would end up finding a good commit in
> O(lg n), though not with any great accuracy (e.g., for an old bug,
> you'd end up considering the whole first half of history as a single
> stride). Since bisection would then narrow the result in O(lg n), I
> think asymptotically you are not any better off than you would be
> just arbitrarily checking the root commit[1], and then starting the
> bisection from there.
>
> But both schemes run into a problem where old commits are often not
> very testable. For example, when I am bisecting in git.git, I will
> run into something like this:
>
>   1. Some feature is introduced in v1.7.0.
>
>   2. A bug in the feature is introduced in v1.7.2.
>
>   3. Somebody notices and reports the bug in v1.7.5.
>
> There is no point in testing anything prior to v1.7.0, as your test
> cannot succeed before the feature existed. And worse, it will
> actively break a bisection.
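[For the record, the scheme under discussion — exponential stride
backwards to find *some* good commit, then ordinary bisection — could be
sketched roughly like this. This is a hypothetical illustration over a
plain array of commits; `history` and `is_bad` are made-up names, and
git provides no such combined command:]

```python
# Sketch of the OP's proposal: walk backwards from HEAD with an
# exponentially growing stride until a good commit is found, then
# binary-search the remaining range. Both phases are O(log n) tests,
# so the total is still O(log n) -- no asymptotic win over bisecting
# the whole history. (Hypothetical names; not a git interface.)

def find_first_bad(history, is_bad):
    """history[0] is the root commit, history[-1] is HEAD.
    Assumes HEAD is bad and history is 'clean': every good commit
    precedes every bad one (no bug that comes and goes)."""
    n = len(history)

    # Phase 1: exponential stride backwards to find some good commit.
    stride = 1
    good = None
    while stride < n:
        idx = n - 1 - stride
        if not is_bad(history[idx]):
            good = idx
            break
        stride *= 2
    if good is None:
        good = -1  # never found a good commit; bisect the whole range

    # Phase 2: ordinary bisection between the good and bad endpoints.
    lo, hi = good + 1, n - 1  # invariant: history[hi] is bad
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(history[mid]):
            hi = mid
        else:
            lo = mid + 1
    return lo  # index of the first bad commit

# Toy example: 30000 "commits", bug introduced at index 21345.
history = list(range(30000))
print(find_first_bad(history, lambda c: c >= 21345))  # -> 21345
```

[Note that phase 1 lands on an arbitrary good commit — for an old bug it
can overshoot by half the history, exactly the accuracy problem Jeff
describes above.]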
Not "break", as such, but it's naturally left to the user to discover
when the feature the bug lives in first existed. Usually that leaves a
short-ish window. It's sort of beside the point, though.

Using git as the experiment (again), we're looking at fewer than 30,000
revisions and 289 non-rc tags. With only 30k revisions, you'll do
*worse* testing 15 tags sequentially than you would by just letting the
bisection machinery get on with it, using the full history as the base
for the bisection.

Of course, if the project is large (as in "huge tree"), checking out
each version becomes increasingly expensive the further apart the
versions are. But the time it takes the bisection to diminish the scope
is generally a lot shorter than the time it takes me to remember which
tag to try next and then type the command to check it out. So overall,
I've found it much more convenient to just give a range I know is
sufficiently large and then bisect manually until I get into the
ballpark of the right range.

Automatic bisection is a different beast, naturally, since writing a
test script that handles all the corner cases (feature not added,
feature added but a different bug found, feature added and the right
bug found, etc.) can be cumbersome. But that doesn't always apply, and
darxus didn't mention it. He only mentioned "let's test 4 revisions
back in history so I can find the good commit!", and I pointed out that
it's ridiculous to do so regardless of whether one has a hunch about
where the breakage is, since it will (almost) always be far faster to
just double the scanned range and let git get on with it, even if that
means doing a manual bisect first to find when the feature was
introduced and then an automated one to find when the bug was
introduced.

> Pre-v1.7.0 versions will appear buggy, but it is in fact a
> _different_ bug than the one you are searching for (the bug is that
> the feature isn't there yet).
> This has been discussed many times on the list, but the short of it
> is that you will not get sensible bisection results if you have
> multiple bugs (or a bug that comes and goes throughout history).
>
> So bisect really needs some input from the user to find a sensible
> boundary. And finding that boundary (if the user doesn't already
> know it) is generally a manual thing, because it is usually easy for
> a human to recognize that the failure modes for points (1) and (3)
> above are different, but hard to write a script that correctly tests
> for it.
>
> IOW, my procedure for a bug like the above is usually to walk
> backwards along major tagged versions, manually interpreting the
> results. When I try v1.6.0 and my test blows up (because the feature
> isn't implemented), I recognize it, dig a little with "git log" to
> find where it was implemented, and only then write a script for
> automated bisection.

That means you've tested 81 tags (discarding rc tags between v1.7.6 and
v1.6.0). Compared to binary search, that many tests would correspond to
a history holding 2^80 (1208925819614629174706176) revisions.
Discarding maint releases, we're down to 14 tags, and you gain (at
most) one step of the bisection at the expense of more typing. Truly a
toss-up.

What I was getting at is that trying to be more efficient than O(log n)
is hard and usually requires a really good educated guess to succeed.
Picking a random number of jumps to go backwards certainly isn't the
right way to do it, especially since the problems you mentioned
(feature missing) will still exist with such a solution.

I managed to use a lot of text to get to that final paragraph. Sorry.

--
Andreas Ericsson                   andreas.ericsson@xxxxxx
OP5 AB                             www.op5.se
Tel: +46 8-230225                  Fax: +46 8-230231

Considering the successes of the wars on alcohol, poverty, drugs and
terror, I think we should give some serious thought to declaring war
on peace.