Re: In A World Where...TCs don't exist?

On Fri, 2016-01-29 at 09:26 -0500, Kamil Paral wrote:

> Here's a question. Are we going to "nominate" only those composes in
> which a substantial component changed (e.g. anaconda or systemd),
> similarly to what we do now in Rawhide, or are we going to nominate
> each new compose (i.e. one or more per day)?

That's definitely something to consider, yeah. Either way, the logic is
quite easy to tweak.

>  The first approach seems simpler for humans, but I can't imagine how
> we'd make it work for e.g. the Desktop matrices - there are so many
> components in there that we would probably end up nominating every
> day anyway.

Well, I intentionally never tried to extend the list of 'significant
packages' to every single one which could *possibly* cause anaconda's
behaviour to change, and I wouldn't suggest it would make sense to do
that for GNOME either. Really it just seemed like a neat way of
regulating the flow of nominated composes. Note the mechanism is a bit
more complex than you mentioned; there are a pair of time constraints:
it *always* waits at least three days between nominations, and if two
weeks go by without a 'significant' package change, it'll go ahead and
nominate anyway (that may have kicked in once :>).
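
To make that concrete, the throttle amounts to something like this (a
rough sketch only; the function name and its arguments are invented
for illustration, this isn't the real nightlies code):

    from datetime import timedelta

    MIN_GAP = timedelta(days=3)   # never nominate more often than this
    MAX_GAP = timedelta(days=14)  # nominate anyway after this long

    def should_nominate(compose_date, last_nominated, significant_change):
        """Decide whether to nominate a new compose for validation."""
        gap = compose_date - last_nominated
        if gap < MIN_GAP:
            return False          # always wait at least three days
        if significant_change:
            return True           # e.g. anaconda or systemd changed
        return gap >= MAX_GAP     # two weeks with no change: go anyway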

>  The second approach means we would let automation do its job and
> humans would have to rely mainly on testcase_stats to see which test
> cases were recently tested and which were not, and test according to
> that. I think the second approach is something we should aim for in
> the future, but I'm not sure we're there yet. It will certainly
> require some larger changes in testcase_stats to make sure it
> correctly represents everything (now that we'd rely solely on it),
> e.g. not squashing different test environments together into a single
> result, etc.

This is broadly my take, yeah. Honestly, I think it might be time to go
back into the test framework jungle, though we might actually wind up
in the dreaded 'build our own' position this time. I've been vaguely
thinking about a system that consolidates automated and manual test
results in resultsdb: something that would submit results from
autocloud and openQA to resultsdb, plus some kind of client (webapp or
whatever) for submitting manual test results and displaying the
combined results from automated systems and manual testers alike.
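
The submission side could be pretty thin. Very roughly, and assuming
resultsdb's JSON REST API (the instance URL below is made up, and the
exact field names would need checking against the API docs):

    import requests

    RESULTSDB = "https://resultsdb.example.org/api/v2.0"  # hypothetical

    def submit_result(testcase, outcome, item, source, ref_url=None):
        """Push one result (automated or manual) into resultsdb."""
        payload = {
            "testcase": testcase,  # e.g. "QA:Testcase_base_startup"
            "outcome": outcome,    # PASSED / FAILED / NEEDS_INSPECTION
            "ref_url": ref_url,    # openQA job, wiki page, etc.
            "data": {
                "item": item,      # what was tested, e.g. a compose ID
                "source": source,  # "openqa", "autocloud", "manual"...
            },
        }
        resp = requests.post(RESULTSDB + "/results", json=payload)
        resp.raise_for_status()
        return resp.json()

Storing the environment alongside each result (the extra 'data'
fields) would also give testcase_stats what it needs to stop squashing
environments together.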

In my mind this system doesn't actually store or display test cases;
they stay in the wiki. Each test case has a permanent ID and a
changeable URL, so we can rename test cases when we need to. The new
bits would simply link out to the wiki where appropriate.
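
That indirection is easy to model; as a toy illustration (the IDs and
the in-memory mapping here are invented, a real system would keep this
in its database):

    # Results always reference the permanent ID; only the display layer
    # resolves it to whatever the wiki page is currently called.
    TESTCASE_URLS = {
        "tc-0001": "https://fedoraproject.org/wiki/QA:Testcase_base_startup",
        "tc-0002": "https://fedoraproject.org/wiki/QA:Testcase_desktop_browser",
    }

    def testcase_link(testcase_id):
        """Resolve a permanent test case ID to its current wiki URL."""
        return TESTCASE_URLS[testcase_id]

Renaming a test case then just means updating the URL on the wiki
side; every stored result keeps pointing at the same ID.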

It's still just a concept for now, but that's kinda where my mind's
going...WDYT? Do you see more mileage in extending testcase_stats?
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net




