This feature was requested 8 years ago and briefly discussed: https://public-inbox.org/git/20120318212957.GS1219@xxxxxxxxxxxxxxx/

TL;DR: Before doing git bisect, I want to use exponential search to automatically find a good commit, in logarithmic time.

Scenario

* I have a bug in HEAD.
* I strongly suspect it was introduced some time ago, but I don't know when exactly.
* I have an automated test that will find the bug, if the test can run properly.
* Most of the commits in the repository are not testable, i.e. the test doesn't run properly (e.g. because the feature it tests wasn't introduced yet, because of refactoring, etc.).
* I have no idea what a good commit might be, because I don't know where the first /testable/ good commit is.

This sounds like a standard application for git bisect: no matter how big the repo, binary search should find the first bad commit in logarithmic time.

Failed attempt

The zeroth idea might be to find a good commit by hand: reading changelogs, trying some commits, whatever. In some situations this is not feasible, and such situations occur frequently for me, for example with undocumented features, unversioned rolling releases, or incidental complexity that leaves older commits untestable.

The first idea that comes to mind - it was recommended 8 years ago, and I've tried it a few times already - is to simply mark the root commit as good. (There might be several roots, but that's a puzzle you typically only have to solve once per repo.) This sounds great in theory, because binary search should get through the good old commits in logarithmic time.

The problem with this approach is that if most older commits are untestable, I have to "git bisect skip" them. That basically kills the logarithmic performance, because skipping doesn't do binary search; it picks nearby commits more or less at random. Just yesterday I killed a bisect run that had been going for hours because it kept skipping and never found an actual good commit.

You might say that instead of skipping old commits, one should mark them as good. That's problematic, because I might accidentally mark as good a commit that was untestable *and* bad. Given that bisect has no undo functionality, that can quickly ruin the whole search. Distinguishing "untestable but good" from "untestable and bad" automatically is really hard, and I shouldn't have to do it.

Long story short: starting from the root commit typically isn't feasible. I've tried it.

Proposal: Exponential search

Instead of going from the root commit, what I do manually before starting git bisect is this:

    git checkout HEAD~10
    ./test.sh              # Says: "Bug is present"
    git checkout HEAD~20
    ./test.sh              # Says: "Bug is still present"
    git checkout HEAD~40
    ./test.sh              # Says: "Bug is still present"
    [...]                  # Proceed exponentially
    git checkout HEAD~640
    ./test.sh              # Says: "Bug is GONE!"
    git bisect good

This technique is known as exponential search (https://en.wikipedia.org/wiki/Exponential_search), and it works very well in practice: I find a good commit long before I enter the "untestable but good" region.

But doing this by hand is tedious. In this example I needed to run the script 8 times manually, it can easily be more, and compiling and running the test may take a while each time. That's fine for a one-off search, but too tedious for regular use. Yes, I could wrap this up in a shell script (see the sketch below), but I suspect there are caveats I haven't thought of when the history isn't linear. Maybe someone has even already written one, and I'm unaware of it.
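For concreteness, here is a minimal sketch of such a wrapper, under two assumptions of my own choosing: probes walk first-parent history via HEAD~N, and ./test.sh follows the "git bisect run" exit-code convention (0 = good, 125 = untestable, anything else = bad):

    #!/bin/sh
    # Exponential search for a good commit, then hand off to git bisect.
    # Assumes ./test.sh exits 0 if the bug is absent, 125 if the commit
    # is untestable (the same convention 'git bisect run' uses for skip),
    # and anything else if the bug is present.
    start=$(git rev-parse HEAD)
    bad=$start
    step=10
    while :; do
        # HEAD~N follows first-parent links, which avoids wandering
        # into side branches on merge-heavy histories.
        candidate=$(git rev-parse --verify "$start~$step" 2>/dev/null) || {
            echo "ran out of history before finding a good commit" >&2
            exit 1
        }
        git checkout --quiet "$candidate"
        ./test.sh
        case $? in
            0)   good=$candidate; break ;;   # bug gone: found a good commit
            125) ;;                          # untestable: keep doubling
            *)   bad=$candidate ;;           # bug present: newest known-bad probe
        esac
        step=$((step * 2))
    done
    git bisect start "$bad" "$good"
    git bisect run ./test.sh

One nice property of reusing the exit-code convention is that the same test script drives both the exponential probing and the subsequent "git bisect run".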
But it feels like this could be a proper git bisect feature, and a very useful one.
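To illustrate, one possible shape for such a built-in (the option name is purely hypothetical; nothing like it exists in git today):

    # hypothetical syntax, not an existing git option
    git bisect start --find-good=./test.sh HEAD

git would then probe backwards exponentially by running the script, mark the first commit where it exits 0 as good, and continue as a normal bisect session.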