Re: [Fedora QA] #152: Test Cases Management

On Fri, 2010-12-03 at 23:33 +0100, Karol Cieśla wrote:
> Hi,
> I agree with both of you, so rather than just sending e-mails I would
> like to initiate something that could improve the current situation.
> 
> So the steps I see now look as follows:
> Stage I
> 1) someone sketches out/outlines/describes the functionality of the
> wiki/TCMS we would like to have for the test cases (if possible also
> in graphical form), i.e. tables, pages, buttons

Hurry and I discussed this topic yesterday on IRC.  I suspect she'll
take a similar approach, but the general idea will be to first identify
what our QA community's needs are when it comes to test
documentation/results: for example, what works well, what isn't working
well, and where we want to improve our current test management.

> 2) discuss such a form with the whole community, pros and cons

As always, I expect the process will be transparent and there will be
plenty of opportunity for anyone interested to get involved and help
move things forward.

> 3) reach out to the people maintaining the wiki and ask whether they
> can create such a form with the help of the wiki/TCMS, and whether it
> can sustain a larger number of test cases (i.e. up to 300)

If the analysis shows that investing in a wiki-based solution is the
most sustainable path forward, there are several options available
(http://www.mediawiki.org/wiki/Extension:Semantic_Forms and possibly
http://www.mediawiki.org/wiki/Extension:StructuredInput).  My
impression of Semantic Forms was that while it may scratch the wiki
form-input itch for us, it's certainly not without a startup cost, and
I'm still unclear whether it's the right tool for the job.  It's a fun
experiment, but without knowing where our current gaps are, and where
we want to improve, it's hard to make a decision to invest in
semantic-mediawiki over another actively maintained upstream project.
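
For the sake of illustration only, a Semantic Forms form page for
entering a test case could look roughly like the sketch below.  The
"QA Test Case" template and the field names are made up for this
example; the actual layout should fall out of the needs analysis, not
out of this sketch.

  <!-- hypothetical page: Form:QA_Test_Case -->
  {{{for template|QA Test Case}}}  <!-- fills a wiki template of the same (made-up) name -->
  '''Description:'''      {{{field|description|input type=textarea}}}
  '''Setup:'''            {{{field|setup|input type=textarea}}}
  '''Steps to run:'''     {{{field|steps|input type=textarea}}}
  '''Expected results:''' {{{field|expected_results|input type=textarea}}}
  {{{end template}}}
  {{{standard input|save}}} {{{standard input|preview}}}

Contributors would then get a guided edit form when creating or editing
a test case page, rather than hand-editing raw wikitext.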

> Stage II
> 
> Ponder, as a whole community, the process of
> adding/creating/deleting/marking/prioritizing test cases.

While it would be an interesting discussion, I don't know if we'd be
able to move forward with actionable work after a general community
ponder session.  I'd fear that it would revolve too much around ponies
[1].  I'm inclined to put the emphasis on reviewing what we're doing
now, understanding why we're doing it, documenting the pros and cons,
and prioritizing the MUSTHAVE features.

> P.S. I can try to prepare a draft for point 1 by the middle of next
> week; it will be something simple but it will reflect the idea.

Additional hands to help move the discussion forward are always welcome!

Thanks,
James

> On 2010-12-03 17:36, James Laska wrote: 
> > Just a few extra thoughts on the subject ...
> > 
> > On Fri, 2010-12-03 at 07:30 -0800, Adam Williamson wrote:
> > > On Fri, 2010-12-03 at 16:06 +0100, xcieja wrote:
> > > > Hi,
> > > > yes, you are right, there are tests, but in my opinion they are in a
> > > > few different places under different categories.
> > > That wasn't what I meant: I meant we already use the Wiki for the
> > > purposes you identified as an advantage of a TCMS (listing the tests
> > > that need to be performed in relation to some specific process, and
> > > whether they have already been performed, by whom, and with which
> > > result).
> > > 
> > > > I think we could organise them better, i.e. create a test category and
> > > > put all of them there instead of in many places.
> > > We sure could, but we don't necessarily need a TCMS to do this. :) Note
> > > that we do try to keep them all within one Wiki namespace and we do use
> > > Wiki categories to organize some test cases.
> > > 
> > > > Moreover, I have just taken a brief look and I see there are around 100
> > > > test cases in total (please correct me if I am wrong).  I think that for
> > > > such a project/system that is not enough at all.
> > > > 
> > > > We have a big community; let's assume everyone from QA creates one test,
> > > > then we will have quite a large number of tests and obviously more faults
> > > > detected before the main release and fewer corrections afterwards = better
> > > > stability and usability -> better overall opinion.
> > > Sure, we can always do with more test cases.
> > More test cases/plans would certainly change the conversation a bit.  I
> > think we all want to increase the value that the Fedora QA team can
> > offer to the project.  One way to increase our value is by improving our
> > test coverage by way of test documentation (procedures, plans and
> > cases).  There are plenty of other ways ... but we can save those for
> > other threads.
> > 
> > I've always been hesitant to add tests for the sake of adding tests.
> > Test plans/cases are just like software.  If the tests aren't addressing
> > a priority issue, they won't be used as much and, like unused software,
> > will suffer from bit rot.  The best test cases/plans are the ones that
> > are frequently used and referenced and that have maintainer buy-in,
> > meaning that if the tests fail, the maintainer cares.  I want to grow
> > the library of tests we maintain and run, but hopefully at a pace, and
> > in a manner, that we, as a community, can sustain.
> > 
> > With the test plans that Adam points to, I'm pretty confident in our
> > ability to develop, discuss/debate and execute desktop and installation
> > tests as a community.  We've ironed out the kinks in the workflow,
> > increased community engagement and developed good test plans as a
> > result.  My impression is we are ready for additional test areas.
> > 
> > That's what's exciting to me about the proventesters effort.  As you can
> > tell from recent (and old) devel@ list threads, testing proposed updates
> > is important work that's needed, requested by package maintainers, and
> > quite under-documented.  I don't worry as much that tests written for
> > frequent proventester use will go stale, given the long-standing
> > exposure this work has in the project.  Also, given the huge number of
> > components in Fedora, there is room for just about every contributor to
> > participate and carve out a niche.  But which tests do we prioritize
> > first, where do we write the tests, where do we review and discuss
> > them, how do we run them, etc. (more on this later).
> > 
> > For me, these are two separate (but related) efforts.  A TCMS is a tool
> > designed to address specific workflow/tracking needs.  We also need to
> > determine how best to sustainably expand the test coverage we can
> > offer to the project.  We have a wiki-based "TCMS" now.  It has met our
> > needs for the current set of organized test efforts.  It's not perfect,
> > but the return on the investment has been huge.  The questions I'd like
> > to see answered in ticket #152 are (1) whether the wiki can continue to
> > scale as our test management needs grow, and (2) which aspects of our
> > wiki-based TCMS are good and which are bad.
> > 
> > Thanks,
> > James

[1] http://i-want-a-pony.com/


