Re: QA testing tool for stable images.

On Thu, 2017-05-25 at 15:53 -0300, Julio Faracco wrote:
> Hi Adam,
> 
> Thanks for your fast response.
> In my opinion the Fedora Team has an excellent model of Open Source QA.
> Bodhi is an example of collaborative testing.
> 
> My question is: How do you test the images (ISOs)?
> Or better... how do you track the test results of the images?
> Do you use any kind of special system to track the results, like Bodhi?
> 
> Release 26, for example: https://fedoraproject.org/wiki/Releases/26/Schedule

Aha, I see. Then: yes, we have a system.

> Let me describe my case. We have a similar schedule here (remember that I
> said our Linux is RHEL-based). When we reach the GA point we usually
> distribute the ISOs to users/testers and they install the image on their
> machines, so we can cover several hardware configurations.
> 
> The problem is: we usually track all the checkpoints and results in a
> document file. This is horrible and error-prone. So, I love the Fedora
> infrastructure, but I don't know how you validate the stability of the
> ISOs. I have to admit that I tried to study the Ubuntu QA model, but it
> does not fit our case.

You can read about our validation process here:
https://fedoraproject.org/wiki/QA:Release_validation_test_plan

Basically, every so often a compose (either a nightly compose or a
specially built 'candidate compose') is 'nominated' for validation
testing, which means a set of wiki pages we usually call 'matrices'
is created for it. The following URLs always redirect to the
'current' validation pages:

https://fedoraproject.org/wiki/Test_Results:Current_Installation_Test
https://fedoraproject.org/wiki/Test_Results:Current_Desktop_Test
https://fedoraproject.org/wiki/Test_Results:Current_Server_Test
https://fedoraproject.org/wiki/Test_Results:Current_Base_Test
https://fedoraproject.org/wiki/Test_Results:Current_Cloud_Test
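
If you want to see programmatically where those redirects point at any
given moment, something like this works. It's just a sketch using the
mwclient library (my illustrative choice here, not part of the QA
tooling itself - the project's own library, python-wikitcms, is
mentioned below):

    # Minimal sketch: resolve one of the 'Current' redirects with
    # mwclient to find which compose's matrix is live right now.
    # mwclient is an illustrative choice, not the official tooling.
    import mwclient

    site = mwclient.Site('fedoraproject.org')  # wiki API lives under /w/
    page = site.pages['Test_Results:Current_Installation_Test']
    target = page.redirects_to()  # the Page the redirect points at, or None
    print(target.name if target else page.name)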

The matrix pages contain tables, where each row holds the results for
one specific test. We run the tests and enter our results in the wiki
pages. Ultimately, when we come to release the milestones (Alpha,
wiki pages. Ultimately, when we come to release the milestones (Alpha,
Beta and Final), we examine the results for the RC compose we're
considering releasing, and make sure all the required tests for it have
been run and any blocker bugs they discovered have been addressed.
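
Purely as an illustration (this isn't the real release tooling, just
the logic of that last step sketched in Python):

    # Illustrative only: the go/no-go check described above, reduced
    # to a predicate. 'results' maps test name -> outcome ('pass',
    # 'fail', ...) for the compose under consideration.
    def release_ready(required_tests, results, open_blockers):
        missing = [t for t in required_tests if t not in results]
        # Every required test must have a result, and every blocker
        # bug found during testing must already be addressed.
        return not missing and not open_blockers

    # e.g. release_ready(['QA:Testcase_boot_default_install'],
    #                    {'QA:Testcase_boot_default_install': 'pass'}, [])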

There are various bits of tooling around this system. The 'validation
event' creation process (creating the wiki pages, sending out an
announcement mail) is entirely automated. A library called
'python-wikitcms' and a CLI tool called 'relval' let you interact with
the system in various ways (check existing results, enter new results)
without directly editing wiki pages. And there's a tool called
testcase_stats which provides a 'longitudinal' (over-time) view of the
results:

https://www.happyassassin.net/testcase_stats/26/

that makes it easier to figure out not just which tests have been run
for the most recent event, but how often each test has been run
throughout the current release cycle, and to find tests which really
need running.
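
As a quick taste of the library, checking the current event from
Python might look roughly like this - a sketch from memory, so treat
the attribute names as assumptions and check the wikitcms docs:

    # Hedged sketch of python-wikitcms; 'current_event' and '.version'
    # are written from memory and should be verified against the docs.
    from wikitcms.wiki import Wiki

    site = Wiki('fedoraproject.org')
    event = site.current_event    # the currently-nominated validation event
    print(event.version)          # e.g. '26 Branched 20170525.n.0'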

These days, some of the validation tests are also automated, mainly in
our openQA system: https://openqa.fedoraproject.org/ . There is a
system in place that forwards openQA's test results into the wiki: any
result from 'coconut' with a bot icon next to it was taken from openQA
testing.
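
openQA also exposes a public REST API, so you can pull the raw job
results directly; a minimal sketch (the 'limit' parameter is an
assumption on my part - check the openQA API docs):

    # Minimal sketch: list a few recent jobs from Fedora's public
    # openQA instance via its REST API.
    import requests

    resp = requests.get('https://openqa.fedoraproject.org/api/v1/jobs',
                        params={'limit': 5}, timeout=30)
    for job in resp.json()['jobs']:
        print(job['id'], job['name'], job['result'])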

Hope that helps!
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net