On Wed, 2010-12-01 at 16:55 -0500, Doug Ledford wrote:
> The comparison is 100% fair because it points out the fundamental
> problem with the current policy: if you don't have a paid staff of
> testers to make sure testing is done in a timely fashion, then you have
> absolutely no business gating updates on a testing staff that doesn't
> exist. It's nice in theory to think we can force testing of updates
> prior to their release, but if the testing staff simply isn't there,
> then you aren't improving the product, you're just stopping progress.

The gating is not on 'a testing staff'. The gating is on *testing*.

I want to say again that I'm not particularly wedded to the current policy and I don't mind at all if it changes, but I think we need to be careful of the mindset that says 'we can't enforce any standards in Fedora because it's a volunteer project, so we must just accept whatever people are willing to give us'.

Even though packaging in Fedora is a volunteer process, we still have fairly rigorous packaging guidelines and a review process. We don't just accept any package someone turns up and submits; i.e., we're enforcing standards of quality, despite this being an entirely volunteer effort with no-one compelled to show up and provide packages of a particular quality. The concept of a policy requiring updates to be tested before they're issued is really no different.

I think one point where we've fallen over is that it wasn't sufficiently well discussed or communicated in advance that this testing wasn't just going to 'get done' by some independent group, with no-one else having to worry about it, but would require a lot of people to chip in, in the same way that there isn't a separate independent group doing package reviews; it's just all maintainers chipping in when they can. I think perhaps those who supported and voted for the policy kind of assumed this would happen, and many others weren't actually aware of it.

I do think that for update testing to work well going forward we need to engage more groups with it and make it clear it's not something that some separate QA group is just going to do for everyone so that no-one else has to worry about it.

We can get, and already have got, some enthusiastic people to sign up to run updates-testing and provide testing feedback for the packages they use anyway, but the idea of a hardcore group of dedicated testers who will go out of their way to install, configure and test software they wouldn't usually use is not one that's likely to fly, I don't think.

When software is packaged it's reasonable to expect that someone, somewhere, uses it; if no-one does, it probably shouldn't be packaged. We need to find those people and engage them in the testing process, and it seems to me that the maintainers of packages are as well placed as anyone to help find and engage their users in this.

In many cases it's even easier than that; a lot of packages are maintained by more than one person. It's not only perfectly okay but more or less *what we want to happen* for co-maintainers to sign up as proven testers and test each other's updates. There's a bunch of people in the anaconda group, for instance; it's perfectly fine for you all to sign up as proven testers and test each other's code. The testing doesn't have to come from some impartial outside body; all we need is a sanity check. I don't really see any reason why *everyone* who's a packager shouldn't also have signed up to be a proven tester by now.
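Just to make the practical side concrete, here's roughly what 'running updates-testing and leaving feedback' amounts to day to day. This is only a sketch assuming a yum-based install, not an official procedure:

    # opt the system in to packages still in testing
    yum --enablerepo=updates-testing update
    # then use the packages as normal and leave karma on the
    # corresponding updates in Bodhi:
    # https://admin.fedoraproject.org/updates

If the update works for you, leave positive karma; if it breaks, leave negative karma and file a bug.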
I'd like to ask if anyone has the perception that it's a hard process to get involved in, or if they got the impression that they *shouldn't* get engaged in it, or something like that. Maybe we can improve the presentation to make it clear that this really ought to be a very broad-based process.

-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Fedora Talk: adamwill AT fedoraproject DOT org
http://www.happyassassin.net