On 02/13/2012 09:30 PM, Bruno Wolff III wrote:
Note that statistics are still gathered and that future changes might depend on whether or not proventesters do a better job than average of correctly tagging builds as good or bad.
Probably stating the obvious, and I am new around here, but the biggest challenge I see is that testing is not well defined. Even for the core items, there do not seem to be standard regression suites or checklists of what should be validated [or at least I can't find any]. This naturally leads to inconsistent approaches to testing from tester to tester.
There are a lot of packages, and likely not enough staff or volunteers to develop and maintain test plans for them. However, as in most commercial release management, having these things would help ensure that each tester validates a package in a similar fashion, and would improve overall release quality.
May I ask, roughly, how many proventesters there are versus how many standard-status testers participate at any given time?
--
test mailing list
test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe: https://admin.fedoraproject.org/mailman/listinfo/test