2009/3/16 Steven Tweed <orthochronous@xxxxxxxxx>:
> On Sun, Mar 15, 2009 at 7:16 PM, Ealdwulf Wuffinga
> <ealdwulf@xxxxxxxxxxxxxx> wrote:
>> On Fri, Mar 13, 2009 at 3:19 PM, Steven Tweed <orthochronous@xxxxxxxxx> wrote:
>> It is not obvious how to perform this algorithm incrementally, because
>> of the need to marginalise out the fault rate. As I understand it,
>> marginalisation has to be done after you have incorporated all your
>> information into the model, which means we can't use the usual
>> Bayesian updating.
>
> I had a look over the weekend, and got a bit sidetracked on one of
> your assumptions. You seem to be assuming that the bug is such that a
> single positive observation of the symptom at a position i in the
> linear history _does not_ completely rule out the guilty commit
> occurring after that point. I would have thought the generally more
> applicable assumption is that, given that you don't usually have a
> bug-ridden system where more than one bug causes the same symptom
> _within the history of interest_, a single observation of the symptom
> does totally rule out the bug being after that point (whilst, due to
> intermittency, not having observed the symptom before that point
> doesn't completely rule out the guilty commit being earlier, although
> it should increase the likelihood estimate of the bug being later).

I think it's reasonable to expect false positives as well as false
negatives. For example, you're looking for the commit that slows down
the frame rate, but on one of the good commits the hard disk hits a
bad sector, takes a bit longer to retrieve the data, and so you get a
false positive. It's a bit contrived, but I'm sure you can think of a
better example.

John
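
P.S. For concreteness, here's a rough sketch (in Python, with made-up
fixed false-positive/false-negative rates, so it sidesteps the
marginalisation over the unknown fault rate that Ealdwulf mentions) of
why a nonzero false-positive rate means one positive observation can't
rule out later commits:

# Rough sketch, not anyone's actual code: posterior over which commit
# first introduced the bug, when the test can both false-positive and
# false-negative.  The rates here are made-up constants.

def update(posterior, tested, symptom_seen, false_pos=0.05, false_neg=0.3):
    """One Bayesian update after testing commit `tested`.

    posterior[k] = P(commit k is the first bad commit); the tested
    commit exhibits the bug iff k <= tested.
    """
    new = []
    for k, p in enumerate(posterior):
        bad = k <= tested
        if symptom_seen:
            like = (1.0 - false_neg) if bad else false_pos
        else:
            like = false_neg if bad else (1.0 - false_pos)
        new.append(p * like)
    total = sum(new)
    return [x / total for x in new]

# Uniform prior over 10 commits; the symptom shows up when testing commit 4.
post = update([0.1] * 10, tested=4, symptom_seen=True)
print(post)
# Commits 5..9 keep a small but nonzero probability because false_pos > 0;
# with false_pos = 0 they drop to exactly zero, which recovers Steven's
# assumption that one positive observation rules out everything later.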