Just a few extra thoughts on the subject ...

On Fri, 2010-12-03 at 07:30 -0800, Adam Williamson wrote:
> On Fri, 2010-12-03 at 16:06 +0100, xcieja wrote:
> > Hi,
> > yes, you are right, there are tests, but in my opinion they are in a
> > few different places under different categories.
>
> That wasn't what I meant: I meant we already use the Wiki for the
> purposes you identified as an advantage of a TCMS (listing the tests
> that need to be performed in relation to some specific process, and
> whether they have already been performed, by whom, and with which
> result).
>
> > I think we could organise them better, i.e. create a test category
> > and put all of them there instead of in many places.
>
> We sure could, but we don't necessarily need a TCMS to do this. :) Note
> that we do try to keep them all within one Wiki namespace and we do use
> Wiki categories to organize some test cases.
>
> > Moreover, I just took a brief look and I see there are around 100
> > test cases in total (please correct me if I am wrong). I think that
> > for such a project/system it is not enough at all.
> >
> > We have a big community; let's assume everyone from QA creates one
> > test, and we will have quite a huge number of tests and obviously
> > more faults detected before the main release, fewer corrections
> > after = better stability and usability -> better overall opinion.
>
> Sure, we can always do with more test cases.

More test cases/plans would certainly change the conversation a bit. I think we all want to increase the value that the Fedora QA team can offer to the project. One way to increase our value is by improving our test coverage by way of test documentation (procedures, plans and cases). There are plenty of other ways ... but we can save those for other threads.

I've always been hesitant to add tests for the sake of adding tests. Test plans/cases are just like software.
If the tests aren't addressing a priority issue, they won't be used as much and, like unused software, will suffer from bit rot. The best test cases/plans are the ones that are frequently used and referenced and have maintainer buy-in. Meaning, if the tests fail, the maintainer cares. I want to grow the library of tests we maintain and run, but hopefully grow in a manner and at a pace that we, as a community, can sustain.

With the test plans that Adam points to, I'm pretty confident in our ability to develop, discuss/debate and execute desktop and installation tests as a community. We've ironed out the kinks in the workflow, increased community engagement and developed good test plans as a result. My impression is that we are ready for additional test areas.

That's what's exciting to me about the proventesters effort. As you can tell from recent (and old) devel@ list threads, testing proposed updates is important work that's needed, requested by package maintainers, and sorely under-documented. I don't worry as much that tests written for frequent proventester use will go stale, given that updates testing has had long-standing exposure in the project. Also, given the huge number of components in Fedora, there is room for just about every contributor to participate and carve out a niche. But which tests do we prioritize first, where do we write the tests, where do we review and discuss them, how do we run them, etc.? (More on this later.)

For me, these are two separate (but related) efforts. A TCMS is a tool designed to address specific workflow/tracking needs. We also need to determine how best to sustainably expand the test coverage we can offer to the project.

We have a wiki-based "TCMS" now. It has met our needs for the current set of organized test efforts. It's not perfect, but the return on the investment has been huge. The questions I'd like to see answered in ticket#152 are (1) whether the wiki can continue to scale as our test management needs grow, and (2) which aspects of our wiki-based TCMS are good/bad.
Thanks,
James
--
test mailing list
test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe: https://admin.fedoraproject.org/mailman/listinfo/test