On Tue, 2010-01-12 at 23:26 +0000, Adam Williamson wrote:
> On Tue, 2010-01-12 at 18:32 +0800, Li Ming wrote:
>
> > Yes, currently the cases are not enough for a new page, but if you can
> > add/extend more cases, I think it is worth a new page. And the
>
> Well, it's a set of tests to validate the release, hence it has tests to
> check all existing release criteria. We can't really just invent new
> tests out of thin air if they're not things we actually need to test to
> validate the release :)
>
> > classification of test results is different: the existing installation
> > matrix is i386/x86_64, but your cases are different sessions.
>
> These tests are not likely to vary according to architecture, in my
> opinion.
>
> > Putting them together seems a bit incompatible?
>
> Not necessarily, we can have multiple tables on one page.
>
> > and also the classification of criteria is different.

How we store the results will be determined by how frequently each test
run happens. Presently, install test runs are scheduled on specific
dates [1] during the release. If we plan to also include additional test
plans during each of these test runs, the applicable templates can be
pulled into the test run wiki page for that milestone.

 * 3.  Pre-Alpha Rawhide Acceptance Test Plan #1        Thu 2010-01-21
 * 9.  Pre-Alpha Rawhide Acceptance Test Plan #2        Thu 2010-01-28
 * 10. Pre-Alpha Rawhide Acceptance Test Plan #3        Thu 2010-02-04
 * 12. Test Alpha 'Test Compose' (boot media testing)   Thu 2010-02-11
 * 14. Test Alpha Candidate                             Thu 2010-02-18
 * 29. Pre-Beta Rawhide Acceptance Test Plan            Wed 2010-03-10
 * 30. Test Beta 'Test Compose' (boot media testing)    Thu 2010-03-18
 * 32. Test Beta Candidate                              Thu 2010-03-25
       Thu 2010-04-01
 * 39. Pre-RC Rawhide Install Test Plan                 Wed 2010-04-14
 * 46. Test 'Final' Test Compose (boot media testing)   Thu 2010-04-22
 * 49. Test 'Final' RC                                  Thu 2010-04-29

My suggestion: let's go with different wiki "templates" for each focus
area (desktop, install, other). When the time comes to create a test run
page, each of the applicable focus areas will be substituted [2] into
that single page. Each test run wiki page would consist of ...

{{subst:Fedora 13 Test Run Template}}
{{subst:Fedora 13 Milestone Checklist Template}}
{{subst:Fedora 13 Install Results Template}}
{{subst:Fedora 13 Desktop Results Template}}
<insert additional templates here>

> In fact, the classification of the installation tests should now be
> adjusted to this system - classifying tests by release stage rather than
> the arbitrary 'tier' concept - since we now have proper per-release
> release criteria.

I hear your point, but "arbitrary" isn't the most accurate way to
describe the years of accumulated test knowledge that contributed to the
current method for prioritizing install testing [3].

> > So I suggest create a new matrix and add more
> > cases to it in the future

I have a ticket assigned to address that point (see
https://fedorahosted.org/fedora-qa/ticket/35). I'm still wrestling with
a decent way to visualize this change and implement it in a repeatable
manner in the wiki. I've got some thoughts; I just need to put pen to
paper so folks can review.

Thanks,
James

[1] http://poelstra.fedorapeople.org/schedules/f-13/f-13-quality-tasks.html
[2] http://en.wikipedia.org/wiki/Help:Substitution
[3] https://fedoraproject.org/wiki/QA:Fedora_13_Install_Test_Plan#Test_Priority
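As a rough illustration of the substitution idea above: MediaWiki's
{{subst:}} inlines a template's wikitext into the page at save time,
rather than transcluding it on every view. This is a minimal sketch of
that composition step; the template bodies below are made up for
illustration and are not the real Fedora 13 templates.

```python
# Sketch of {{subst:...}} composition: each template's stored wikitext
# is inlined into the page source. Template bodies are hypothetical.

templates = {
    "Fedora 13 Test Run Template": "== Fedora 13 Test Run ==\n",
    "Fedora 13 Install Results Template": "=== Install Results ===\n",
}

def substitute(page_source: str) -> str:
    """Replace each {{subst:Name}} marker with that template's text."""
    out = page_source
    for name, body in templates.items():
        out = out.replace("{{subst:%s}}" % name, body)
    return out

page = (
    "{{subst:Fedora 13 Test Run Template}}"
    "{{subst:Fedora 13 Install Results Template}}"
)
print(substitute(page))
```

The point of substitution here is that once the milestone's test run
page is created, later edits to the templates don't alter that page's
recorded results.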
--
test mailing list
test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe: https://admin.fedoraproject.org/mailman/listinfo/test