Re: [Request To Review] Testcases for dnf modular

On Tue, 2017-11-14 at 13:25 -0800, Adam Williamson wrote:
> On Tue, 2017-11-14 at 05:04 +0000, Nick Coghlan wrote:
> > > Dear Sumantro,
> > > 
> > > Did you talk to the DNF QE team? I suggest you coordinate your work with Karel Srot
> > > from the DNF QE team. There is a tool to test DNF, and I know Karel already has a good
> > > set of tests covering the modular side of DNF [1], just missing some user stories.
> > 
> > We're also trying to keep track of the places where automated testing
> > happens as part of the Fedora module developer's guide at
> > https://docs.pagure.org/modularity/development/building-modules/testing.html
> > 
> > I think the key item we have there that hasn't been mentioned in the
> > thread yet is your own automated tests at
> > https://github.com/fedora-modularity/compose-tests
> > 
> > Cheers,
> > Nick.
> > 
> > P.S. It's entirely plausible that we're missing some aspects of the
> > automated testing setup for the modular server builds, so issue
> > reports and PRs to correct omissions at https://pagure.io/modularity
> > are most welcome.
> 
> Note that there is a specific purpose to these test cases: they are for
> release validation testing. Release validation testing is about
> ensuring critical functionality works *as experienced by end users*.
> That is, for release validation testing, we want to verify that the
> most common and critical module interactions performed by admins of
> actual Fedora Server installations work, exactly as those
> admins would perform them. These tests need to run from clean installs
> of 'real' composes, and involve using the tools as real-world users
> will use them. So any tests that involve more artificial environments
> or interactions are not as useful for release validation purposes.
> Tests that test "too much" are also not as useful, because we don't
> immediately know whether a failure is something critical to the release
> process.
> 
> So, we can certainly look at existing tests / tools and see if this
> testing can share resources with them, but it needs to take the above
> considerations into account.

So let's take a concrete example: the ci-dnf-stack tests Irina linked to:
https://github.com/rpm-software-management/ci-dnf-stack/tree/modularity

These are great as DNF CI tests, for sure. But they're not as useful
for release validation.

Typically, tests at this level want to isolate themselves from external
environments. So this test suite contains its own packages and
repositories, and the test process uses those. This means it can run in
a contained environment and the results will not be affected by changes
to packages or repositories outside the test process. That is exactly
what you *want* for this kind of CI testing: you're trying to test that
dnf / librepo / hawkey / etc. behave as intended, and you don't want
external factors changing the results between runs.
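
As a rough sketch of the isolation involved (all paths and package
names here are invented, and this is illustrative Python rather than
the suite's actual behave code):

    # Illustrative only: a contained CI-style check. The test repo path
    # and package name are invented; ci-dnf-stack itself is driven by
    # behave feature files, not a script like this.
    import subprocess

    TEST_REPO = "/tmp/ci-test-repo"  # holds only the suite's own test RPMs

    def dnf(*args):
        # Disable every configured repo and use only the bundled test
        # repo, so external package/repo changes cannot affect the result.
        cmd = ["dnf", "-y",
               "--disablerepo=*",
               "--repofrompath=testrepo," + TEST_REPO,
               "--setopt=testrepo.gpgcheck=0"] + list(args)
        return subprocess.run(cmd, capture_output=True, text=True)

    # Regenerate metadata for the bundled packages, then exercise dnf.
    subprocess.run(["createrepo_c", TEST_REPO], check=True)
    result = dnf("install", "test-pkg")
    assert result.returncode == 0, result.stderr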

However, for distribution-level validation testing, we absolutely
*don't* want that. We want to test the actual DNF configuration,
repositories and modules we ship with the composes, and we *want to
know* if something is broken in those. Just testing that DNF behaves as
expected with this set of prefabbed test packages / repositories /
modules is not as useful.
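
To make the contrast concrete, a validation-style check would run on a
clean install of a real compose and deliberately leave the shipped
configuration alone. A minimal sketch, assuming DNF's modularity
commands are available ("nodejs:8" is a hypothetical module/stream, not
a required one):

    # Illustrative only: a release-validation-style check on a clean
    # install of a real compose. No --disablerepo / --repofrompath
    # tricks: the shipped repos, modules and DNF configuration are
    # exactly what's under test.
    import subprocess

    def dnf(*args):
        return subprocess.run(["dnf", "-y"] + list(args),
                              capture_output=True, text=True)

    # The compose's own module metadata must be usable as shipped...
    listing = dnf("module", "list")
    assert listing.returncode == 0, listing.stderr

    # ...and a stream a real admin would use must install from the real
    # repos. "nodejs:8" is hypothetical; a real test would pick a module
    # the compose actually ships.
    result = dnf("module", "install", "nodejs:8")
    assert result.returncode == 0, result.stderr
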
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
_______________________________________________
test mailing list -- test@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to test-leave@xxxxxxxxxxxxxxxxxxxxxxx



