Re: Maintainer-review fluff (was: Re: [PATCH 01/12] drm/i915: plumb VM into object operations)

On Fri, Jul 26, 2013 at 09:59:42AM -0700, Jesse Barnes wrote:
> On Fri, 26 Jul 2013 11:51:00 +0200
> Daniel Vetter <daniel@xxxxxxxx> wrote:
> 
> > Hi all,
> > 
> > So Ben and I had a bit of a private discussion, and one thing I've
> > explained in a bit more detail is what kind of review I do as
> > maintainer. I figured this is generally useful. We've also discussed
> > that for developers without their own lab it would be nice if QA could
> > test random branches on their set of machines. But imo that'll take
> > quite a while; there's lots of other stuff to improve in QA land first.
> > Anyway, here it is:
> > 
> > Now an explanation of why this freaked me out, which is essentially an
> > explanation of what I do when doing maintainer review:
> > 
> > Probably the most important question I ask myself when reading a patch
> > is "if a regression would bisect to this, and the bisect is the only
> > useful piece of evidence, would I stand a chance of understanding it?".
> > Your patch is big, appears to do a few unrelated things and could very
> > well hide a bug which would take me an awful lot of time to spot. So
> > imo the answer for your patch is a clear "no".
> 
> This is definitely a good point.  Big patches are both hard to review
> and hard to debug, so they should be kept as simple as possible (but no
> simpler!).
> 
> > I've merged a few such patches in the past where I've had a similar
> > hunch and almost always regretted it. I've also sometimes split up the
> > patch while applying, but that approach doesn't scale any more with
> > our rather big team.
> > 
> > The second thing I try to figure out is whether the patch author is
> > now indeed the local expert on the topic at hand. With our team size
> > and patch flow I don't stand a chance if I try to understand everything
> > to the last detail. Instead I try to assess this through the proxy of
> > convincing myself that the patch submitter understands the topic much
> > better than I do. I tend to check that by asking random questions,
> > proposing alternative approaches and also by rating code/patch clarity.
> > The obj_set_color double-loop very much gave me the impression that you
> > didn't have a clear idea about how exactly this should work, so that
> > hunk triggered this maintainer hunch.
> 
> This is the part I think is unfair (see below) when proposed
> alternatives aren't clearly defined.

Ben has meanwhile split up the patches and imo they now look great (so
they fully address the first concern). I've read through them this
morning and dumped a few (imo actionable) quick comments on irc. For the
example here my request is to squash a double-loop over vma lists (which
will also rip out a function call indirection as a bonus).
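
To make that concrete, a minimal sketch of the kind of squash meant here
(the list walk uses the stock list_for_each_entry macro; the field and
helper names are illustrative assumptions, not the code from the actual
series):

	/* Before (sketch): two walks over the object's vma list, with
	 * the per-vma work hidden behind hypothetical helpers. */
	list_for_each_entry(vma, &obj->vma_list, vma_link)
		prepare_vma_color(vma, color);
	list_for_each_entry(vma, &obj->vma_list, vma_link)
		apply_vma_color(vma, color);

	/* After (sketch): a single walk with the work inlined, so the
	 * whole operation is visible in one place and one indirect call
	 * per vma disappears. */
	list_for_each_entry(vma, &obj->vma_list, vma_link)
		vma->node.color = color;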

> > I admit that this is all rather fluffy and very much an inexact
> > science, but these are the only tools I have as a maintainer. The
> > alternative of doing shit myself or checking everything myself
> > in-depth just doesn't scale.
> 
> I'm glad you brought this up, but I see a contradiction here:  if
> you're just asking random questions to convince yourself the author
> knows what they're doing, but simultaneously you're not checking
> everything yourself in-depth, you'll have no way to know whether your
> questions are being dealt with properly.

Well if the reply is unsure or inconsistent then I tend to dig in. E.g.
with Paulo's pc8+ stuff I've asked a few questions about interactions
with gmbus/edid reading/gem execbuf and he replied that he didn't know.
His 2nd patch version was still a bit thin on details in that area, so
I've sat down, read through the code and made a concrete & actionable
list of corner cases I think we should exercise.

> I think the way out of that contradiction is to trust reviewers,
> especially in specific areas.
 
Imo I've already started with that; there are lots of patches where I
only do a very cursory read when merging since I trust $AUTHOR and
$REVIEWER to get it right.

> There's a downside in that the design will be a little less coherent
> (i.e. matching the vision of a single person), but as you said, that
> doesn't scale.

I think overall we can still achieve good consistency in the design, so
that's a part where I try to chip in. But with a larger team it's clear
that consistency in the little details will fizzle out more. Otoh doing
such cleanups after big reworks (heck, I've been rather inconsistent in
all the refactoring in the modeset code myself) sounds like good
material to drag newbies into our codebase.
 
> So I'd suggest a couple of rules to help:
>   1) every patch gets at least two reviewed-bys

We have a hard time handling our current review load in a timely manner
already; I don't expect this to scale if we do it formally. But ...

>   2) one of those reviewed-bys should be from a domain expert, e.g.:
>      DP - Todd, Jani
>      GEM - Chris, Daniel
>      $PLATFORM - $PLATFORM owner
>      HDMI - Paulo
>      PSR/FBC - Rodrigo/Shobhit
>      * - Daniel (you get to be a wildcard)
>      etc.

... this is something I've already started to take into account. E.g.
when I ask someone less experienced with a given topic to do a
fish-out-of-water review I'll also poke domain experts to ack it. And if
there's a concern it obviously overrules an r-b tag from someone else.

>   3) reviews aren't allowed to contain solely bikeshed/codingstyle
>      change requests, if there's nothing substantial merge shouldn't be
>      blocked (modulo egregious violations like Hungarian notation)

I think we're doing fairly well here. Occasionally I rant about review
myself, but often that's just the schlep of digging the patch out again
and refining it; most often the reviewer is right, which obviously makes
it worse ;-)

We have a few cases where discussions tend to loop forever. Sometimes I
step in, but often I feel like I shouldn't be the one to make the call;
e.g. the audio discussions around the hsw power well often drag out, but
imo that's a topic where Paulo should make the calls.

Occasionally though I block a patch on bikeshed topics simply because I
think the improved consistency is worth it. One example is the gen
checks: our code should match 0-based C array semantics and our usual
writing style, where genX+ is inclusive and pre-genX is exclusive (see
the sketch below).
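
A minimal sketch of that convention, assuming the INTEL_INFO()->gen
accessor of that era (the setup_* helpers are hypothetical):

	/* "gen6+" is inclusive: gen6 and everything newer. */
	if (INTEL_INFO(dev)->gen >= 6)
		setup_new_style_hw(dev);

	/* "pre-gen6" is exclusive: everything strictly older than gen6. */
	if (INTEL_INFO(dev)->gen < 6)
		setup_legacy_hw(dev);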

>   4) review comments should be concrete and actionable, and ideally not
>      leave the author hanging with hints about problems the reviewer
>      has spotted, leaving the author looking for easter eggs

Where's the fun in that? I think the right way to look at easter egg
hunting is that the clear & actionable task from the reviewer is to go
easter egg hunting ;-)

More seriously though, asking "what happens if?" questions is an
important part of review imo, and sometimes those tend to be an easter
egg hunt for both reviewer and patch author.

> For the most part I think we adhere to this, though reviews from the
> domain experts are done more on an ad-hoc basis these days...
> 
> Thoughts?

Generally I think our overall process is
a) a mess (as in not really formalized much) and
b) working surprisingly well.

So I think fine-tuning of individual parts and having an occasional
process discussion should be good enough to keep going.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch



