On 2008-04-09 10:07:25 +0200, Matthieu Moy wrote:
> Karl Hasselström <kha@xxxxxxxxxxx> writes:
>
> > Adding parallelism to a binary search scales very badly -- I'd say
> > about logarithmically, but I haven't thought hard about it. If
> > it's possible to use the extra cores to speed up the build+test
> > cycle, that's vastly preferable.
>
> Probably logarithmically with the number of cores. But for
> reasonable machines, this number is relatively low, so the log is
> not so costly. For a binary search, using just 2 cores, you can try
> the next in the list in case of a "git bisect good", for example, and
> if the hypothesis is true, you've just gained a factor of 2 (assuming
> it happens 50% of the time, that should be a 50% speedup). Similarly,
> you should get a factor of 2 with 3 cores.

Yeah. But to get a factor of 3 you need 7 cores, and for a factor of 4
you need 15; it goes downhill from there. If your build+test cycle is
parallelizable at all, I don't think you'll find those numbers hard to
beat.

(There's also the fact that testing several revisions at once assumes
that the whole build+test cycle is automated, or at least most of it.
Otherwise you need more people as well as more cores.)

--
Karl Hasselström, kha@xxxxxxxxxxx
www.treskal.com/kalle
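The arithmetic behind those core counts can be sketched as a quick back-of-the-envelope calculation. This is only an illustration of the reasoning in the thread (the helper name is made up, nothing here is part of git): testing k revisions per round splits the remaining range into k+1 parts, so a bisect over N commits takes about log_{k+1}(N) rounds instead of log_2(N), for a speedup of log_2(k+1) -- hence k = 2^factor - 1 cores for a given speedup factor.

```python
def cores_for_speedup(factor):
    """Cores needed for a `factor`-fold reduction in bisect rounds.

    With k cores testing k revisions per round, the range splits
    into k+1 parts, so the speedup over serial binary search is
    log2(k + 1).  Solving log2(k + 1) = factor gives k = 2**factor - 1.
    """
    return 2 ** factor - 1

for f in (2, 3, 4):
    print(f"factor {f}: {cores_for_speedup(f)} cores")
# factor 2 needs 3 cores, factor 3 needs 7, factor 4 needs 15 --
# matching the numbers quoted above.
```

The exponential growth in cores per unit of speedup is exactly why parallelizing the build+test cycle itself (which often scales near-linearly) beats parallelizing the search.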