Re: [Test-Announce] Proven tester status

Thanks for the stats and information.  

How big is the gap in testing?

Are a significant number of package releases walked back because they turned out to be broken after passing the minimum time in QA and being published? I.e., percentage-wise or by some other metric, or, in the absence of that, a gut assessment.

Are certain areas in more need of focus than others due to criticality and a lack of testing/testers? If so, what areas are those?

I'm trying to get a handle on things around here so I can understand where I can be most effective in helping out. I also want to look into automated QA testing via this link: https://fedoraproject.org/wiki/AutoQA

Thanks again.



On 02/15/2012 10:16 PM, Adam Williamson wrote:
On Wed, 2012-02-15 at 21:35 -0500, Vincent L. wrote:
On 02/13/2012 09:30 PM, Bruno Wolff III wrote:
Note that statistics are still gathered and that future changes might depend
on whether or not proventesters do a better job than average of correctly
tagging builds as good or bad.
Probably stating the obvious, and I am new around here, but the biggest
challenge I see is that testing is not well defined. Certainly for the
core items, standard regression tests or checklists of what should be
validated do not seem to be present [or at least I can't find any].
This naturally leads to inconsistent approaches to testing from tester
to tester.

There are a lot of packages, and likely a lack of staffing/volunteers
to develop and maintain test plans. However, as in most commercial
release management, having these things would help ensure each tester
validates things in a similar fashion and would improve release
quality.
Yes, this is broadly the problem.

We have a system in place that allows you to create a test plan for a
package and have it show up in the update request. See it in action at
https://admin.fedoraproject.org/updates/FEDORA-2012-1766/dracut-016-1.fc17 (note the links to test cases), and details on how to actually set up the test cases to make this work are at https://fedoraproject.org/wiki/QA:SOP_package_test_plan_creation. We don't have test plans for many packages, really because of the resource issue. Jon Stanley did suggest he might work on this as his 'board advocacy' task.
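(As an aside: because the test cases are ordinary wiki pages collected in a
category, you can also list them programmatically via the standard MediaWiki
API. Here's a minimal sketch; it assumes the wiki's API lives at the usual
/w/api.php path and that the category follows the 'Category:Package <name>
test cases' naming convention described on the SOP page, so verify both
against that page before relying on it.

import json
from urllib.request import urlopen
from urllib.parse import urlencode

# MediaWiki API endpoint for the Fedora wiki (assumed standard /w/api.php path).
API = "https://fedoraproject.org/w/api.php"

def list_test_cases(package):
    # Assumes the "Category:Package <name> test cases" convention from the SOP.
    params = urlencode({
        "action": "query",
        "list": "categorymembers",
        "cmtitle": "Category:Package %s test cases" % package,
        "cmlimit": "50",
        "format": "json",
    })
    with urlopen("%s?%s" % (API, params)) as resp:
        data = json.load(resp)
    return [m["title"] for m in data["query"]["categorymembers"]]

for title in list_test_cases("dracut"):
    print(title)

Run against the dracut example above, this should print the test case pages
linked from that update, assuming the category naming convention holds.)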

May I ask, ballpark, how many "proventesters" there are versus how many
standard-status testers participate at any given time?
We can say with precision how many proven testers there are, because
there's an associated FAS group - there are 90 members of the
'proventesters' group in FAS. Active non-proven testers are a bit harder
to count, but Luke can generate Bodhi statistics. There's one fairly
'famous' set from 2010 here:

https://lists.fedoraproject.org/pipermail/devel/2010-June/137413.html

There's a less famous report from March 2011, which you can get some
numbers from, here:

http://lmacken.fedorapeople.org/bodhi-metrics-20110330

At the time of the 2011 report there seems to have been a roughly 1:10
proventester/regular-tester ratio for F15 and F14, though the numbers
are slightly unclear.
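For what it's worth, recomputing that kind of ratio from a newer dump is
simple arithmetic. Here's a rough sketch; the input format (a CSV of tester
name, feedback count, and a proventester flag) is purely hypothetical, not
the actual layout of Luke's Bodhi reports.

import csv

def tester_ratio(path):
    # Expected columns (an assumption, not Bodhi's real format):
    #   name, feedback_count, is_proventester (yes/no)
    proven = regular = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["feedback_count"]) == 0:
                continue  # skip accounts that gave no feedback at all
            if row["is_proventester"].strip().lower() == "yes":
                proven += 1
            else:
                regular += 1
    return proven, regular

proven, regular = tester_ratio("bodhi-testers.csv")
print("proventesters: %d, regular: %d, ratio 1:%.1f"
      % (proven, regular, regular / proven))

Taken with the 90 proventesters in FAS, the rough 1:10 ratio above would put
active regular testers somewhere in the hundreds.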

