Release validation NG: very early thoughts

Hey, folks!

So I put a kinda mystery item on the meeting agenda for Monday. I
thought I'd throw my initial thoughts on the topic out to the list so
we can bat it around a bit before the meeting.

I would like us to build a better system for release validation testing
than the wiki. This has been kinda-sorta on the agenda for a long time,
but we've never quite gotten anywhere with it.

We did look into using Moztrap - https://github.com/mozilla/moztrap -
for a while, and got quite far into a PoC, but a) we kinda moved on to
other things and b) while it was the best option we'd found thus far,
it still wasn't entirely a 'slam dunk'. It has not seen a commit since
December 2015.

The problems I'd like to solve here are:

1, 2, 3, 4, 5. Reporting validation test results sucks for humans
6. Maintaining the wiki system is something only I know how to do
7. Using wiki pages for storage is slow, fragile and makes analysis hard
8. Using wiki pages for storage is difficult to integrate with other
things (ResultsDB)

I've kinda wanted to start working on this for a while, but improving
openQA and adding more tests has always seemed a bit more of a win. But
we're now at a point where openQA is getting quite mature, and openQA
stuff is actually starting to *overlap* with this a bit; part of this
project would involve just no longer asking humans to test things that
openQA, Taskotron and Autocloud test well, leaving human effort for the
things they don't cover.

In a way what I'd *really* like to build is a super-amazing QA Front
End thing where all our test results - human-generated and from all
automation systems - live in ResultsDB, and there's a great webapp that
allows you to view, analyze and submit those results. But that's a
pretty big project. I think it makes sense to start with one area and
work up.

I want to start with human release validation because it's a) important
and b) kind of a bad experience at present. For updates testing, Bodhi
works decently and there's fedora-easy-karma for efficient multiple
report submission. For Test Days there's the testdays webapp, which
isn't the greatest thing ever but still beats the pants off editing the
wiki by hand (or using relval report-results). So it's kind of the
obvious place to start.

My very early thoughts on what this would involve are:

1. Do the work to allow human validation results to be submitted to
ResultsDB (this basically involves coming up with an appropriate schema
for test cases, test instances and results data; there's a rough sketch
of what that might look like after this list)

2. Build a webapp that lets you submit results much more easily,
happily and reliably than either editing the wiki or using relval

3. Build out display/analysis of the results in the webapp, replacing
'reading the wiki pages' and also stuff like relval user-stats and
testcase-stats
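
To make #1 a little more concrete, here's a very rough sketch (in
Python, using requests) of what storing one human validation result in
ResultsDB might look like. The endpoint path, field names and the
compose/image values are all illustrative guesses on my part, not a
proposed schema:

import requests

# Placeholder URL - not a real ResultsDB instance
RESULTSDB_URL = "https://resultsdb.example.org/api/v2.0/results"

result = {
    # test case identity, namespaced like the wiki test case pages
    "testcase": {"name": "QA:Testcase_Boot_default_install"},
    # PASSED / FAILED / NEEDS_INSPECTION etc.
    "outcome": "PASSED",
    # free-form data tying the result to a compose, image and tester;
    # these keys and values are made up for illustration
    "data": {
        "compose": "Fedora-25-20161115.0",
        "image": "Server dvd x86_64",
        "milestone": "Beta",
        "tester": "adamwill",
        "source": "manual",
    },
}

resp = requests.post(RESULTSDB_URL, json=result, timeout=30)
resp.raise_for_status()
# assuming the API echoes the stored result back as JSON
print("stored result id:", resp.json().get("id"))

There's an existing Python client for ResultsDB we could probably reuse
for the HTTP bit; the point here is just the shape of the data.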

Consolidating openQA, Autocloud etc. results on the display/analysis
side would be a stretch goal, but the schema for *storing* the results
should be set up such that we can do this if we want to - i.e. it
should be possible to figure out 'all these results from different
sources are part of the validation testing for this specific compose'.
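
As a sketch of what that could enable, here's roughly what a 'show me
everything for this compose' query might look like, again in Python. It
assumes every source stored a common 'compose' key and a 'source' key
as in the sketch above, and that ResultsDB can filter on stored data
keys server-side (if it can't, the same grouping can be done
client-side):

from collections import Counter

import requests

# Same placeholder URL as the earlier sketch
RESULTSDB_URL = "https://resultsdb.example.org/api/v2.0/results"

def summarize_compose(compose_id):
    """Tally outcomes per source (manual, openqa, autocloud...) for one compose."""
    counts = {}
    url = RESULTSDB_URL
    # Assumption: stored data keys work as filter params, and the response
    # looks like {"data": [...], "next": <url or null>}. If not, pull
    # everything and filter on the client side instead.
    params = {"compose": compose_id, "limit": 100}
    while url:
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        for res in page.get("data", []):
            source = res.get("data", {}).get("source", "unknown")
            # some APIs hand extra-data values back as lists; normalize
            if isinstance(source, list):
                source = source[0] if source else "unknown"
            counts.setdefault(source, Counter())[res["outcome"]] += 1
        url = page.get("next")  # follow the pagination link if there is one
        params = None           # the next link already carries the query
    return counts

if __name__ == "__main__":
    for source, outcomes in summarize_compose("Fedora-25-20161115.0").items():
        print(source, dict(outcomes))

If we get the storage side right, the display/analysis webapp in #3 is
basically a prettier version of this loop.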

So basically I'd love to hear the following from you lovely people:

1. What do you think of the idea?
2. Do you have any improvements/refinements/enhancements on my plan?
3. Do you want to help out? :)

Thanks everyone!
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net