Re: Requiring package test instructions (was: Re: Too fast karma on Bodhi updates)

On Wed, 2016-07-13 at 11:08 +0530, Siddhesh Poyarekar wrote:
> On Tue, Jul 12, 2016 at 10:26:20PM -0700, Adam Williamson wrote:
> > 
> > It would not be 'a lot of work', it would be a gigantic, totally
> > unsustainable burden. I honestly think you're shooting *way* too high
> > here. Even with all the recent volunteers, we have like a couple dozen
> 
> I agree it is a massive task, which is why it hasn't gotten off the
> ground for glibc over the last year.  However I remain optimistic that
> someone someday will do at least a fraction of the automation :)

FWIW, as someone who is working on this, I don't think we can
realistically aim to do distribution-level automated testing with per-
package granularity. We actually have all the bits in place to do
something like that if we wanted to - I could have some kind of PoC
using existing openQA tests in a week or so - but I just kinda don't
think it's the way to go.

I think a more profitable angle at the distribution level is to define
what it is we actually think a distribution should do, and test whether
updates change *those* things. That's an appropriate and manageable
level of automated testing that we can actually achieve.

It doesn't necessarily plug into the current Bodhi design precisely,
but that's not a particularly good reason to pick one approach over
another.

Of course, we don't necessarily *have* to pick one thing or the other;
we can certainly provide all the appropriate hooks for packages to do
automated update testing - this is something folks are already looking
at - and there's no reason to stand in the way of maintainers / teams
who want to implement package-level automated tests for distribution
updates.

But from the perspective of the Fedora QA team, I don't think the best
thing we could do with our time is, you know, draw up a big list of
packages and start working down it, writing automated tests for one
package at a time. If we started doing that we might make some sort of
vaguely noticeable dent by, oh, say, 2026 or so. ;)

Personally I think it's more useful to do, well, the kinds of things
we're doing: *generic* package tests like the ones taskotron runs at
present (there are more we could do there), and automating the
requirements that are already encoded in the release criteria, which is
what we've been doing with openQA. If we can complete that task - or
get further along with it - so we can run a subset of openQA tests for
every update and say 'OK, well installing this update doesn't break the
high-level functions we define as "what the distribution should do"' -
I think that's interesting.
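
The gating flow described above can be sketched roughly as follows. This
is a toy illustration only - the function and check names are
hypothetical and are not actual Bodhi, openQA, or taskotron interfaces;
the point is just "run a fixed set of generic distribution-level checks
for every update, and pass the update only if none of them regress":

```python
# Hypothetical sketch: gate an update on a fixed set of *generic*
# distribution-level checks, rather than per-package tests.
# All names here are invented for illustration.

def gate_update(update_id, checks):
    """Run every generic check against the update.

    Returns (passed, results): passed is True only if every check
    succeeded; results maps check name -> bool outcome.
    """
    results = {name: check(update_id) for name, check in checks.items()}
    return all(results.values()), results

# Dummy stand-ins for real distribution-level tests (e.g. "a base
# install still works", "desktop login still works") -- here they just
# return booleans so the sketch is self-contained.
checks = {
    "base_install_ok": lambda update: True,
    "desktop_login_ok": lambda update: True,
    "update_installable": lambda update: update != "FEDORA-BROKEN",
}

passed, results = gate_update("FEDORA-2016-abc123", checks)
print(passed)   # True: no generic check regressed

passed2, _ = gate_update("FEDORA-BROKEN", checks)
print(passed2)  # False: the update fails a generic check
```

In a real deployment the lambdas would be replaced by actual test runs
(e.g. openQA jobs), but the gating decision itself stays this simple:
the set of checks is fixed per distribution, not per package.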

Just my thoughts, though.
-- 

Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
--
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxxx
https://lists.fedoraproject.org/admin/lists/devel@xxxxxxxxxxxxxxxxxxxxxxx



