> -----Original Message-----
> From: fedora-devel-list-admin@xxxxxxxxxx
> [mailto:fedora-devel-list-admin@xxxxxxxxxx] On Behalf Of Toshio
> Sent: Friday, February 27, 2004 12:02 AM
> To: fedora-devel-list@xxxxxxxxxx
> Subject: Fedora.us QA (was: Re: Prelink success story :))
>
> I think the need to get feedback _on_a_QA_ is the heart of the problem
> right now.  QA'ers need mentoring to develop their skills.  Jef, how
> would you feel if some bug day we had experienced QA'ers team with new
> QA'ers to nitpick packages?  This could be a good first or second time
> QA experience: irc with a more experienced QA'er and pick apart the good
> and bad of a package.  Or we could assemble a team of experienced QA'ers
> to do second reviews once a new user did the first one.  This could be
> useful for a more advanced QA'er.  Kinda have a short apprenticeship
> followed by a test-what-you-know phase.

I like this idea in principle.  I'm not sure whether expecting people to
meet up on IRC is realistic, but having an experienced reviewer mentor a
new reviewer's work would help ensure the review is done right, and it
gives the new reviewer feedback so they can see that something is
happening and that they're making a valuable contribution.  Obviously
they'll learn a lot from it too.

> What I would like to see (for both QA and packager) is an RPM Best
> Practices Handbook.  Accumulated wisdom about how to do things well,
> organized by use cases, with in-depth justifications available as side
> passages for those who are curious why %{buildroot} and
> ${RPM_BUILD_ROOT} have such strong proponents on each side :-)

This is a good idea too, although I think it would require a central
collection authority to really get up to speed.  Maybe Red Hat will hire
someone to do this sometime?  Perhaps the wiki could be coerced into
holding some of this info in the form of a vastly glorified QA checklist?
(The %{buildroot} vs. ${RPM_BUILD_ROOT} question is exactly the kind of
thing it should settle; there's a quick illustration of that one a few
paragraphs down.)

> I wanted to write a fedora-qatemplate script that could generate a
> template complete with checklist.  But I still haven't figured out how
> the template should be structured to address all the shortcomings I
> listed above.  (My best thought so far has been to abandon the CLI and
> instead have an interactive GUI script with checklist that outputs
> certain security related boilerplate and whatever checklist items fail.)

A tool to help generate QA reviews certainly wouldn't hurt, although it
seems like the showstoppers could be covered in a list short enough to be
easily cut'n'pasted.

> As QA stands (with all volunteer QA'ers) we can't depend on a newbie
> doing the first review and a seasoned vet doing the second review to
> figure out what the newbie left out.  Besides, the seasoned vet is
> likely to do the same things the newbie did to make sure those things
> really do check out, so having the newbie _just_ run through a checklist
> isn't that useful.  What is useful is if the new QA'er has the sense of
> responsibility to run through the checklist and the curiosity to test
> whatever looks broken, fragile, or otherwise might need improvement.

What's to prevent a seasoned vet from reviewing newbie-reviewed packages?
It seems to me that that's the idea behind the two-reviews policy.
Somewhere I read that those "trusted" by the project are given authority
to approve QA on their own anyway.  Maybe that should be amended so that
they are the only ones who can actually APPROVE a package, so that all
reviews NOT made by a "trusted member" would need a second approval
before going live.  I suppose this depends on how open the project is to
granting people "committer privileges" or "trusted status".
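(As an aside, for anyone who hasn't run into the %{buildroot} vs.
${RPM_BUILD_ROOT} argument Toshio mentions: both spellings refer to the
same directory once BuildRoot: is declared in the spec.  %{buildroot} is
the RPM macro and $RPM_BUILD_ROOT is the environment variable that
rpmbuild exports into the build scripts, so the argument is purely about
which spelling specs should standardize on.  A made-up %install fragment,
just to show the two forms side by side:

    %install
    rm -rf %{buildroot}                       # macro form
    make install DESTDIR=$RPM_BUILD_ROOT      # environment-variable form

Both lines end up under the same build root; a handbook entry could
simply pick one and explain why.)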
> > The non-showstoppers should be on a second list of "stuff to watch for".
> > This could be far more detailed than the actual QA checklist, and as a
> > newbie gets deeper into packaging lore, they would likely begin filling
> > out more of it.
>
> In my mind I divide things into showstopper and stylistic.  With the
> possible exception of the present wording of RPM_BUILD_ROOT, I think
> everything on the list is a showstopper.  And as Michael lists, there
> are many more that aren't on the list (because they don't occur often
> enough?  Because there isn't consensus yet?  Because no one wanted to
> turn the Checklist into a pre-flight manual with seventy pages,
> sixty-nine of which don't apply to this particular package?)

I like that distinction.  However, I don't think unowned directories,
incorrect file permissions, or a lack of macros in paths qualify as
showstoppers.  Why is fedora.us / Extras held to a higher standard of
quality than Red Hat itself?  There is plenty of time to improve packages
later, and getting stuck on perfection before passing QA is going to
prevent anyone from posting or reviewing packages.  As time goes by we'll
have automated tools to check for this stuff and will be able to improve
it incrementally.

> > 1. Does the package follow the Fedora Package Naming Guidelines?
> > This is pretty darn complicated for a newbie QA'er.  They should be
> > allowed to opt out.
>
> My problem with opting out was stated above: what if two QA'ers review
> the package and both opt out?

OK, point well made.  However, if there's a checklist, at least you'll
know when somebody opted out.  Changing the name later is painful, so it
should be right the first time, and a newbie can probably be expected to
devote 15 minutes to figuring out the naming conventions.  However, if we
aim for allowing someone to contribute meaningfully in one hour, for
instance, that leaves 45 minutes for them to get up to speed on
everything else.  I believe that if we get the time required to get up to
speed down under an hour, there will be a VAST influx of QA information
and new blood into the project.  Just think of it: "How to help the
Fedora Project in 1 hour!"

> > 3. Are the pre- and post(un)install scripts correct?
> > ...Again, concrete steps would be better. ...
>
> I think this one exceeds the step-by-step.  Way too many things could
> happen in the scriptlets to say definitively when it's done (more
> entries for a best practices book, OTOH...)

Personally I think this should be folded into "does it install".  QA
needs to check that the package installs and seems to work.  Expecting
the QA'er to analyze the scriptlets in detail is prohibitive and
nebulous; if there's a problem, somebody will notice it and file a bug
report eventually.  It's important to check the scriptlets for grossly
abusive things like "rm -rf /", but beyond that the test should be no
more complex than whether or not the package appears to work on the
target distro.  The rest is the packager's job.

> > 5. Are there no missing BuildRequires?
>
> I agree that this can be basically impossible as it now stands.  I first
> tried fedora-rmdevelrpms but that made my machine close to unusable as a
> development environment (to QA or program.. Hmmmm...)

A fully functional, preconfigured mach build included in fedora.us would
essentially solve this particular problem.  Then this step could simply
be called "Does it build?" on the QA checklist and be left at that.
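Until then, the closest stand-in I know of is a plain rebuild on a box
that has only the declared build dependencies installed.  Roughly
something like this (the package name is made up for the example):

    # What the spec claims it needs; for a src.rpm the listed
    # requirements are the BuildRequires.
    rpm -qp --requires foo-1.0-0.fdr.1.src.rpm

    # Does it actually build?  On a minimal box, a failure here usually
    # points at a missing BuildRequires.
    rpmbuild --rebuild foo-1.0-0.fdr.1.src.rpm

A preconfigured mach chroot would just make "a box with only the declared
build dependencies" cheap to set up.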
> > 1. Relax on the whole GPG thing. ....
> > When they [a new QA'er] first start, their input is going
> > to be suspect anyway, so why slow down the process
>
> I might be a little attached to public key cryptography, but I think
> it's an important added protection.  Bugzilla accounts can be traced to
> creation dates and email addresses.  GPG keys add third party
> signatures.  If crackers ever start trying to package compromised files
> and get them into Extras, we will have more information to go on to
> try to figure out if we need to do more background checking on the
> packager/QA'ers of a package.  OTOH, that might be completely emotional
> and the additional security might not be that great.

I don't think there's anything wrong with GPG, and I agree it adds a real
measure of security.  However, I do think stressing its importance to new
QA'ers is time wasted.  Once they get more involved they'll see the value
and want to figure it out on their own.  Again, think of reducing the
barrier to entry.

> [snippage]
> > I think it's imperative that packages make it through the QA process.
> > It doesn't do any good if packages never make it into the repo.
>
> Speaking as someone who responds when someone points out a packaging
> error but has never had a package make it through the queue: Sometimes
> you've got to accept it.  I only package things I need.  If it doesn't
> make it out the door, then not enough other people found it equally
> useful.  *shrug*
>
> That said, the most frustrating packages are not the ones that sit there
> unreviewed, but the ones that sit there half reviewed (or with an older
> version reviewed but not a newer one.)  It implies interest, but not
> enough among active QA'ers to make the push over the hump.

I disagree.  I don't think the problem is lack of interest, but a barrier
to entry that is too high.  QA as it stands is a nebulous beast.  There
are a few gurus who know what fedora.us expects, and if they take an
interest in your package it will get QA'd.  Otherwise, the other 5000
users such as myself might make it as far as the QA list and download
your package, but are HIGHLY unlikely to invest the 8 hours I did just to
see possible failure.  This process has got to get streamlined enough
that we can reach critical mass.  I'm not positive of this, but it seems
as if the number of submitted but un-QA'd packages is increasing, not
decreasing.  That's a sign that the QA system is not doing its job.

QA is never going to be attractive to "super-star coders", and I'm of the
opinion that most of those doing it right now fall into that category.
These are the same kind of people who manage to maintain repositories of
300+ packages single-handedly.  They generally don't like checklists, and
see them for the oversimplification they are.  It's GREAT to have these
people involved with the project, but for fedora / fedora.us / Extras to
reach its potential it needs to attract input from people who don't make
packages full-time but instead want to use them.

Instead, QA is going to be attractive to some poor sysadmin who needs
amavis (or one of the other 300+ packages sitting in the QA queue), wants
to know it will work well with Fedora, and wants it to upgrade cleanly in
the future.  The same guy who right now points his yum.conf at atrpms,
dagrpms and freshrpms because THEY have packages that he can download
NOW, and that usually work.  He may not know much about rpm, but if he's
got a simple how-to-start guide and a checklist to fill out, he'll be
happy to spend an hour or two auditing the spec file to the best of his
ability (on company time) so that he can stop maintaining his own RPM or
worrying about future incompatibilities in other people's repos.
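To make "an hour or two auditing the spec file" concrete, the first pass
I have in mind is nothing more exotic than this (the package name and
version are hypothetical, and this is obviously far short of the full
fedora.us checklist):

    PKG=amavis-1.0-0.fdr.1                # hypothetical name-version-release

    rpm --checksig ${PKG}.src.rpm         # submitter's signature intact?
    rpm -qpi ${PKG}.i386.rpm              # summary, license, URL look sane?
    rpm -qpl ${PKG}.i386.rpm              # files land in sensible places?
    rpm -qp --scripts ${PKG}.i386.rpm     # anything abusive in the scriptlets?
    rpm -Uvh --test ${PKG}.i386.rpm       # does it resolve and install? (run as root)

If he can run through something like that, paste the output into bugzilla
and tick off a checklist, he has contributed something real without ever
having to read the rpm documentation cover to cover.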
The fact that fedora.us is a single repository with multiple people
responsible for it will be reason enough for him to devote a little of
his time to QA rather than relying on the other repos, provided the
process is straightforward and he can expect results.  In addition, I'm
of the opinion that a lot of the independent packagers would start
submitting packages to fedora.us / Extras IF they felt those packages
would be approved in a timely manner.  Right now, QA is WAY too
complicated and fraught with pseudo-policies that just turn people away
and confuse matters.

Just my $0.02.

--erik