On Fri, 26 Jul 2013 11:51:00 +0200 Daniel Vetter <daniel@xxxxxxxx> wrote:
> Hi all,
>
> So Ben & I had a bit of a private discussion, and one thing I've explained
> in a bit more detail is what kind of review I do as maintainer. I figured
> this is generally useful. We've also discussed that for developers without
> their own lab it would be nice if QA could test random branches on their
> set of machines. But imo that'll take quite a while; there's lots of other
> stuff to improve in QA land first. Anyway, here it is:
>
> Now an explanation for why this freaked me out, which is essentially an
> explanation of what I do when I do maintainer reviews:
>
> Probably the most important question I ask myself when reading a patch is
> "if a regression would bisect to this, and the bisect is the only useful
> piece of evidence, would I stand a chance to understand it?". Your patch
> is big, has the appearance of doing a few unrelated things and could very
> well hide a bug which would take me an awful lot of time to spot. So imo
> the answer for your patch is a clear "no".

This is definitely a good point. Big patches are both hard to review and
hard to debug, so they should be kept as simple as possible (but no
simpler!).

> I've merged a few such patches in the past where I've had a similar hunch
> and regretted it almost always. I've also sometimes split up the patch
> while applying, but that approach doesn't scale any more with our rather
> big team.
>
> The second thing I try to figure out is whether the patch author is indeed
> the local expert on the topic at hand. With our team size and patch flow I
> don't stand a chance if I try to understand everything down to the last
> detail. Instead I try to assess this through the proxy of convincing
> myself that the patch submitter understands the stuff much better than I
> do. I tend to check that by asking random questions, proposing alternative
> approaches and also by rating code/patch clarity. The obj_set_color
> double-loop very much gave me the impression that you didn't have a clear
> idea about how exactly this should work, so that hunk triggered this
> maintainer hunch.

This is the part I think is unfair (see below), at least when the proposed
alternatives aren't clearly defined.

> I admit that this is all rather fluffy and very much an inexact science,
> but these are the only tools I have as a maintainer. The alternative of
> doing shit myself or checking everything myself in depth just doesn't
> scale.

I'm glad you brought this up, but I see a contradiction here: if you're
just asking random questions to convince yourself the author knows what
they're doing, but at the same time you're not checking everything yourself
in depth, you have no way to know whether your questions are being dealt
with properly.

I think the way out of that contradiction is to trust reviewers, especially
in specific areas. There's a downside in that the design will be a little
less coherent (i.e. matching the vision of a single person), but as you
said, that doesn't scale.

So I'd suggest a few rules to help:

1) every patch gets at least two Reviewed-bys
2) one of those Reviewed-bys should be from a domain expert, e.g.:
     DP - Todd, Jani
     GEM - Chris, Daniel
     $PLATFORM - $PLATFORM owner
     HDMI - Paulo
     PSR/FBC - Rodrigo/Shobhit
     * - Daniel (you get to be a wildcard)
   etc.
3) reviews aren't allowed to consist solely of bikeshed/coding-style change
   requests; if there's nothing substantial, the merge shouldn't be blocked
   (modulo egregious violations like Hungarian notation)
4) review comments should be concrete and actionable; they shouldn't leave
   the author hanging with vague hints about problems the reviewer has
   spotted, hunting for easter eggs
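To make 1) and 2) concrete: a patch touching, say, HDMI would then only go
in once it carries two tags, one of them from the domain expert listed
above (Paulo in that case). Purely as an illustration (names and addresses
below are placeholders, not real tags):

    Reviewed-by: Another Developer <another.developer@example.com>
    Reviewed-by: HDMI Domain Expert <hdmi.expert@example.com>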
For the most part I think we adhere to this, though reviews from the domain
experts are done more on an ad-hoc basis these days...

Thoughts?

-- 
Jesse Barnes, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx