Running kickstart-tests in openQA: should we?

Hey folks! My silly weekend project is now at the point where I figure
we should either decide what it's good for, or stop wasting time on it.

So I wrote a thing which runs the anaconda 'kickstart-tests' in
openQA. You can find it on the kickstart-tests branch of openqa_fedora:

https://bitbucket.org/rajcze/openqa_fedora/branch/kickstart-tests

## What does it actually do?

Very roughly, it (there's a code sketch of this flow after the list):

1. takes the kickstart-tests .ks.in and .sh files
2. produces .ks files (which you are expected to upload to some web server)
3. produces one openQA 'test suite' (recipe) for each kickstart-test
4. merges the 'test suites' into some existing file (which you then load to an openQA instance)
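
To make that concrete, here's a heavily simplified sketch of the flow
in Python. The substitution markers and the settings layout are made up
for illustration (they're not the real converter code), and it skips
the .sh parsing entirely:

import glob
import os

def convert(checkout, ks_url, http_url, nfs_url, outdir, templates):
    """Produce .ks files in outdir; add one test suite per test to templates."""
    suites = []
    for ksin in glob.glob(os.path.join(checkout, "*.ks.in")):
        name = os.path.basename(ksin)[:-len(".ks.in")]
        # steps 1 and 2: substitute repo URLs into the .ks.in to get a final .ks
        with open(ksin) as infile:
            text = infile.read()
        text = text.replace("@KSTEST_HTTP_URL@", http_url)
        text = text.replace("@KSTEST_NFS_URL@", nfs_url)
        with open(os.path.join(outdir, name + ".ks"), "w") as outfile:
            outfile.write(text)
        # step 3: one openQA 'test suite' per test, booting the installer
        # with inst.ks= pointing at the uploaded kickstart
        suites.append({
            "name": "kstest_" + name,
            "settings": [
                {"key": "GRUB",
                 "value": "inst.ks=%s/%s.ks" % (ks_url, name)},
            ],
        })
    # step 4: merge the new suites into the existing templates data
    templates["TestSuites"].extend(suites)
    return templates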

## What doesn't it do?

1. Produce the driver disk image, or install.img (you can build the
   driver disk and place it in openQA's 'hdds' location; the
   install.img test is not handled)
2. Produce the test repos (you are expected to do that, make them
   available, and feed the locations to the converter)
3. Work with a few tests (all probably fixable, but at this point I
   don't want to invest more time without a few decisions)
4. Give you an openQA instance or actually run the tests (you're
   expected to bring your own instance and trigger the tests)

## What does it look like?

Kinda like this:

https://openqa.happyassassin.net/tests/overview?version=24&groupid=1&build=Fedora-24-20160322.4-KSTESTS&distri=fedora

Quick notes there: the driverdisk test hard fails because I haven't
bothered to generate the image and send it to my openQA server yet. The
ntp and raid-1 failures are genuine: those tests check things that have
changed in anaconda since F24 Alpha, so the failures are perfectly
correct. 'container' is now on the 'tests skipped because I know they
don't work' list, but it shows up in the UI because I ran and cancelled
it once before that. You can click the green dots to see the detailed
report for each test.

## What does it 'replace'?

It takes the place of run_kickstart_tests.sh, run_one_ks.sh and
kstest-runner, pretty much, by using openQA to do all the work of
spinning up VMs, running installs, and gathering results.

## How do I use it?

Roughly:

# grab the converter (kickstart-tests branch) and the tests themselves
git clone https://bitbucket.org/rajcze/openqa_fedora.git
cd openqa_fedora
git checkout kickstart-tests
git clone https://github.com/rhinstaller/kickstart-tests
# generate the final .ks files and the addon packages, and publish them
./kstest-converter kickstarts kickstart-tests http://www/ks http://www/http (NFS_REPO_URL) ks/
scp ks/*.ks www:/var/www/html/ks/
PYTHONPATH=kickstart-tests/lib kickstart-tests/scripts/make-addon-pkgs.py
scp -pr http www:/var/www/html/
# convert the existing openQA templates to JSON and merge in the test suites
./perl2json templates templates.json
./kstest-converter templates kickstart-tests/ (KS_URL) templates.json merged.json
scp merged.json openqa:/tmp
# on the openQA server: check out the tests, fetch the ISO, load the
# templates, then trigger the test run
ssh openqa
cd /var/lib/openqa/share/tests
git clone https://bitbucket.org/rajcze/openqa_fedora.git
cd openqa_fedora
git checkout kickstart-tests
cd /var/lib/openqa/share/factory/iso
wget https://dl.fedoraproject.org/pub/alt/stage/24_Alpha-1.7/Server/x86_64/iso/Fedora-Server-netinst-x86_64-24_Alpha-1.7.iso
/usr/share/openqa/script/load_templates /tmp/merged.json --clean
/usr/share/openqa/script/client isos post ISO=Fedora-Server-netinst-x86_64-24_Alpha-1.7.iso DISTRI=fedora VERSION=24 FLAVOR=kstests ARCH=x86_64 BUILD=Fedora-24-20160322.4-KSTESTS

I've assumed you've got a dumb web server called 'www' with a document
root of /var/www/html, and an openQA server called 'openqa'.

In the kstest-converter kickstarts command, the first arg is a URL
where the generated kickstart files will be accessible; the second is a
URL where the HTTP test repo will be available; the third is a URL
where the NFS test repo will be available (though in fact that's one of
the tests I have disabled ATM, so you can just type whatever you like,
it won't be used). You can optionally pass --http to override the
default HTTP base repo URL (which for kstest-converter is the official
mirrorlist URL), and --ftp to override the default FTP base repo URL
(which is that Texas mirror, same as kickstart-tests' default).
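
For reference, the kickstarts interface boils down to something like
this argparse sketch (the argument names are invented for illustration,
this is not the actual converter code):

import argparse

parser = argparse.ArgumentParser(prog="kstest-converter")
subs = parser.add_subparsers(dest="command")

# the 'kickstarts' subcommand as described above
ks = subs.add_parser("kickstarts", help="generate final .ks files")
ks.add_argument("checkout", help="path to a kickstart-tests checkout")
ks.add_argument("ks_url", help="URL where the generated .ks files will be accessible")
ks.add_argument("http_url", help="URL where the HTTP test repo will be available")
ks.add_argument("nfs_url", help="URL for the NFS test repo (currently unused)")
ks.add_argument("outdir", help="directory to write the .ks files to")
ks.add_argument("--http", help="override the default HTTP base repo URL "
                "(default: the official mirrorlist URL)")
ks.add_argument("--ftp", help="override the default FTP base repo URL "
                "(default: the Texas mirror, as in kickstart-tests)")

args = parser.parse_args()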

The openQA commands are standard ones for loading the templates file
and kicking off tests on a given ISO; in a real deployment that stuff
would be done automatically. For now I've made the kickstart tests
their own 'flavor', but that's easily changeable. I've also been using
a BUILD value with -KSTESTS on the end just so it's easy to identify
the tests in the openQA UI.
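
As an example of what 'automatically' might look like: assuming the
python openqa_client library is available on the triggering host, the
isos post above could be scripted roughly like so:

# rough sketch of automated triggering via python-openqa_client;
# assumes API keys are set up in the usual client config
from openqa_client.client import OpenQA_Client

client = OpenQA_Client(server="openqa")
client.openqa_request("POST", "isos", params={
    "ISO": "Fedora-Server-netinst-x86_64-24_Alpha-1.7.iso",
    "DISTRI": "fedora",
    "VERSION": "24",
    "FLAVOR": "kstests",
    "ARCH": "x86_64",
    "BUILD": "Fedora-24-20160322.4-KSTESTS",
})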

## What are the possible benefits of this over the 'official' runner?

The most obvious benefit, I guess, is simply that doing this or
something like it would get anaconda out of the business of maintaining
a test runner written in shell script. Other than that...

Using openQA has some other potential benefits. You get a video of the
test run, which is nice. We have a fairly sophisticated openQA
scheduler code base available which could potentially be used to
trigger the tests in all sorts of situations, though I don't know how
this compares to the use of Jenkins with the official runner. openQA
provides various capabilities for getting information out of both
passed and failed tests; for now I don't have this hooked up as well as
I could (uploading anaconda logs on failure doesn't work because, the
way Fedora's openQA tests are currently set up, the test running during
a kickstart install is not considered an 'anaconda' test). But it's
pretty easy to fix that.

The way I implemented the openQA integration, it actually *boots the
installed system* to check the test result - it doesn't just grab the
RESULT file out of the disk image like the official runner does. So
this approach actually tests that the installed system boots, which is,
you know, probably a good thing. (It also avoids some of the
hoop-jumping some of the tests have to do to get at the RESULT file,
because in the booted system it's always just /root/RESULT, except for
the one test where it's /home/RESULT.)
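
The check itself is trivial. The real thing is a Perl os-autoinst test
module, but the logic amounts to something like this sketch, where
run_command() stands in for running a command on the booted VM:

def check_result(run_command, result_path="/root/RESULT"):
    """Return True if the kickstart test reported success."""
    # the tests write SUCCESS to the RESULT file on a pass, and a
    # failure message otherwise
    content = run_command("cat " + result_path)
    return content.strip() == "SUCCESS"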

openQA also gives you a pretty good experience for storing and querying
the results, though again I don't know how that compares to the Jenkins
setup.

I don't know how good the scalability story is with the 'official'
runner; I can see that it runs multiple tests at a time and can even
spread them across machines with parallel, but I dunno how robust or
scalable that is. openQA is getting relatively good at scalability;
it's tied to a single controller / web UI box, but you can keep scaling
up the actual test runners by just adding more and more 'worker host'
boxes. Taskotron may be even better at this, though.

openQA has fedmsg integration, which is nice.

## How well do the tests plug in to openQA?

I'd say 'reasonably well'. I had the basic logic up and running in a
couple of hours, and it's all been detail work since then. I take quite
a few shortcuts in parsing the .sh files to create the test suites, and
there are quite a few sloppy assumptions in there, but that could all
be firmed up.

There are a few interesting points of difference. The most obvious is
the way the official runner can set up kinda transient assets for each
test: spinning up an HTTP server to serve the test repo just for the
length of the test execution, for instance. openQA's design doesn't
*really* provide an easy way to do that. It'd be possible, but it might
require upstream patches to openQA, and it might not be the best idea.
So in my design, the idea is that you just make the test repositories
*permanently* available somehow. If we want to run the tests on the
official Fedora deployment of openQA, it's easy to do that and similar
prep bits in the ansible plays. Similarly, where the official runner
produces the 'final' kickstart files per-test in the prepare() step,
this approach produces 'final' kickstarts one time and makes them
permanently available. Again, the idea is that this step can be
ansiblized, so to update the tests we'd just run the ansible plays.

## What could we possibly do with this?

There's a few options.

1) We could keep the Jenkins + official runner setup that anaconda is
using for testing at the level of the anaconda project, and just use
this to run the kickstart tests at the Fedora QA distribution testing
level: whenever we (QA) run the openQA tests for whatever reason, the
kickstart tests would get run too, but it would stay a QA thing.

This is kinda the easiest to do short term (especially the easiest for
anaconda folks, as you pretty much have to do nothing), but medium and
long term it might be a bit more error-prone than the other approaches.
I suspect the way it would work out, the anaconda team would really
only care about keeping the tests running in the 'official' runner, and
we (QA) would have to spend quite a bit of time keeping the converter
in line with the official runner's behaviour.

We'd also probably have to get a bit more hardware from somewhere for
this case, because suddenly adding another ~50 tests to the set of
tests run in the current QA instance will make its test runs quite a
lot longer; if we get another box or two we can add more workers to
compensate for this.

2) We could try to have QA act as a 'service provider' for anaconda to
run the tests in openQA: basically in this scenario we'd give you the
necessary credentials to trigger openQA tests in an openQA instance
that's owned/maintained by QA. There would be a lot of details to
figure out in this case.

3) We could set anaconda up with its own openQA instance for running
the tests, and we'd just share tooling stuff like the ansible plays and
the openQA scheduling code (however much turns out to be in common).
This would mean someone on the anaconda team being basically familiar
with the care and feeding of an openQA instance, which isn't that hard. You'd
also need the hardware to run it on.

In scenarios 2) and 3), I guess we'd burn down the current
kickstart-tests runner, and we could start tweaking the way the tests
are implemented (in terms of how prep and asset creation are specified
and actually carried out) to fit in more nicely with openQA's design.
In scenario 1), kickstart-tests would be pretty much unmodified and
we'd have to keep working on the converter to keep its behaviour in
line with the official stuff. We *could* invent some kind of
intermediate format for specifying test requirements, which would be
interpreted differently by the openQA and 'native' runners, but I
suspect that would be a lot of engineering.

## What other choices are there besides using openQA?

1) Forget the whole damn thing and just keep on with our separate
approaches. I'm sending this email now because I'm still at the point
where I'd be fine with this decision; I've only burned 3 days on this
work and I learned a lot about the kickstart-tests while doing it.

2) Keep the separate testing approaches, but possibly look at
harmonizing test image generation and test triggering and maybe result
storage somehow.

3) Look at running the kickstart-tests in Taskotron (possibly along
with Beaker?) instead of openQA. tflink is quite interested in this, I
think, and I *may* give it a shot. No promises.

4) Wait for the possibly-inevitable introduction of some sort of
'Central CI'-ish Jenkins thing into Fedora and then see if maybe we
want to use that for running these tests.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
