The nice thing with kicking off a process discussion before disappearing
into vacation is that I've had a long time to come up with some
well-sharpened opinions. And what better way to start than with a good
old-fashioned flamewar ;-)

On Tue, Jul 30, 2013 at 09:50:21AM +1000, Dave Airlie wrote:
> >> > I do agree that QA is really important for a fast-paced process, but
> >> > it's also not the only piece needed to get something in. Review (both
> >> > of the patch itself and of the test coverage) catches a lot of
> >> > issues, and in many cases not the same ones as QA would. Especially
> >> > if the test coverage of a new feature is less than stellar, which imo
> >> > is still the case for gem due to the tons of finicky corner cases.
> >>
> >> Just my 2c worth on this topic, since I like the current process, and
> >> I believe making it too formal is probably going to make things suck
> >> too much.
> >>
> >> I'd rather Daniel was slowing you guys down up front more. I don't
> >> give a crap about Intel project management or personal managers
> >> relying on getting features merged by a given date; I do care that
> >> when you engineers merge something you generally get transferred 100%
> >> onto something else and don't react strongly enough to issues in older
> >> code you have created, whether those have lain dormant since the
> >> patches were merged or are regressions since they were merged. So I
> >> believe slowing down the merging of features gives QA or other random
> >> devs a better chance of finding the misc regressions while you are
> >> still focused on the code, and of hitting the long-term bugs that you
> >> guys rarely get resourced to fix unless I threaten to stop pulling
> >> stuff.
> >>
> >> So whatever Daniel says goes as far as I'm concerned. If I even
> >> suspect he's given in to some internal Intel pressure to merge some
> >> feature, I'm going to stop pulling from him faster than I stopped
> >> pulling from the previous maintainers :-). So yeah, engineers should
> >> be prepared to back up what they post even if Daniel is wrong, but on
> >> the other hand they need to demonstrate they understand the code they
> >> are pushing, and sometimes with ppgtt and contexts I'm not sure anyone
> >> really understands how the hw works, let alone the sw :-P
> >
> > Some of this is driven by me, because I have one main goal in mind in
> > getting our code upstream: I want high quality kernel support for our
> > products upstream and released, in an official Linus release, before
> > the product ships. That gives OSVs and other downstream consumers of
> > the code a chance to get the bits and be ready when products start
> > rolling out.

Imo "unpredictable upstream" vs. "high quality kernel support in
upstream" is a false dichotomy. Afaics the "unpredictability" exists
_because_ I am not willing to compromise on decent quality. I still claim
that upstreaming is a fairly predictable thing (within some bounds of how
well a given task can be estimated up front without doing some research
or prototyping), and that the real blocker here is our mediocre project
tracking. I've thought a bit about this (and read a few thought-provoking
books on the matter) over vacation, and I fear I'll only get to
demonstrate it by running the estimation show myself for a while. But atm
I'm by far not frustrated enough with the current state of affairs to
sign up for that - still chewing on that maintainer thing ;-)

> Your main goal is however different than mine: my main goal is to not
> regress the code that is already upstream and to have bugs in it fixed.
> Slowing down new platform merges seems to do that a lot better than
> merging stuff :-)
>
> I realise you guys pay lip service to my goals at times, but I often
> get the feeling that you'd rather merge HSW support and run away to
> the next platform than spend a lot of time fixing reported bugs in
> Ironlake/Sandybridge/Ivybridge *cough RC6 after suspend/resume*.
>
> It would be nice to be proven wrong once in a while, with someone
> actually assigned a bug fix in preference to adding new features for
> new platforms.

Well, that team is 50% Chris & me, with other people (many from the
community ...) rounding things off. That is quite a bit better than a
year ago (and yep, we blow up stuff, too) but still not great. And it's
imo also true that Intel as a company doesn't care one bit once the hw
has shipped.

My approach here has been to be a royal jerk about test coverage for new
features and to block stuff if a regression isn't tackled in time. People
scream all around, but it seems to work, and imo we're getting to a
"fairly decent regression handling" point.

I also try to push for enabling features across platforms (if the hw
should work the same way) in the name of increased test coverage. That
one seems to be less effective (e.g. fbc for hsw only ...).

Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch