Re: lvm2-testsuite stability

On 19. 06. 23 at 20:22, Scott Moser wrote:
Hi, thanks for your response.

Yep - some tests are failing

expected-fail  api/dbustest.sh

We do have them split into individual tests:
api/dbus_test_cache_lv_create.sh
api/dbus_test_log_file_option.sh
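For reference, the upstream suite can already select a single test via the T= filter described in its TESTING file. A hedged sketch (the exact make target may differ by version; here the command is only printed rather than executed):

```shell
# Select one test from the upstream lvm2 suite with the T= pattern
# (see the TESTING file in the source tree); 'check_local' is one of
# the targets in test/Makefile, adjust as needed for your tree.
cmd="make check_local T=api/dbustest.sh"
echo "$cmd"
```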

That is not available upstream, right?
I just saw the single 'dbustest.sh' in
[main/test](https://github.com/lvmteam/lvm2/tree/master/test/api).
Is there another branch I should be looking at?

Correct - that's a local 'mod' for some test machines - but I'd like to get it
merged upstream, although done in a different way.

I'd likely need access to the logs of such machines
(or you would need to provide some downloadable image of your QEMU machine
installation).

The gist at https://gist.github.com/smoser/3107dafec490c0f4d9bf9faf02327f04
describes how I am doing this.  It is a "standard" package build and autopkgtest
on Debian/Ubuntu.  The autopkgtest VM does not use LVM for the system,
so we don't have to worry about interaction with that.

I could provide a VM image if you were interested.


The tricky part with lvm2 is its dependency on proper 'udev' rule processing. Unfortunately, the Debian distro somewhat changes those rules in its package without deeper consultation with upstream, and there were a few more differences that upstream lvm2 doesn't consider valid modifications (though I haven't checked the recent state).

Do others run this test suite in automation and get reliable results?

Yes, our VM machines do give reliable results on properly configured boxes -
although, as said before, there are some 'failing' tests we know about.


Identifying in git the set of tests that are allowed to fail, and gating
pull requests on a successful pass, would be wonderful.  Without
some expected-working list, it is hard for me as a downstream user to
separate signal from noise.

There are no 'tests' allowed to fail.

There are either 'broken' tests or broken lvm2 code - but it's just not always easy to fix some bugs, and there are not enough hands to fix all issues quickly. So every failing test does represent a real problem from class a) or b) and should be fixed - it may just have a lower priority among other tasks.


Would upstream be open to pull requests that add test-suite runs
via GitHub Actions?  Is there some other preferred mechanism for such a thing?

The test suite is really well done. I was surprised how well it insulates
itself from the system and how easy it was to use.  Running it in a
distro would give the distro developer a *huge* boost in confidence when
attempting to integrate a new LVM release into the distro.

Basically, we are at a decision point: move either to GitHub or GitLab and add
these CI capabilities - but definitely, some extra hands here might be helpful.


We would need to think much harder about whether the tests should run with
any daemons or autoactivation on the system that could see and
interact with the devices generated during the test run (one of the
reasons the test machines need some local modification). We may provide
some Ansible-like testing script eventually.

Autopkgtest will:
  * start a new VM for each run of the tests,
  * install the packages listed as dependencies of the test, and
  * run the test "entrypoint" (debian/test/testsuite).
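For context, the autopkgtest entry point and its needs are declared in a DEP-8 control file. A rough, illustrative stanza (the test name matches the layout mentioned above; the Depends list is an assumption, not the actual packaging):

```
Tests: testsuite
Depends: lvm2, thin-provisioning-tools, make
Restrictions: needs-root, isolation-machine, allow-stderr
```

The `isolation-machine` restriction is what forces autopkgtest to use a full VM rather than a container, which matters here because the suite manipulates block devices.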

I think that I have debian/test/testsuite correctly shutting
down/masking the necessary system services before invoking the tests, as
suggested in TESTING.
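That masking step can be sketched roughly as below. The unit names are illustrative of a typical systemd-based install (check what TESTING actually lists for your distro); the commands are only printed here - drop the `echo` prefix and run as root to actually apply them:

```shell
# Mask the system-wide lvm2 units so the host daemons cannot react to
# devices the test suite creates during a run.  Unit names are examples
# from a typical systemd install; verify against your own system.
units="lvm2-monitor.service lvm2-lvmpolld.socket lvm2-lvmpolld.service"

for u in $units; do
    # Printed only; remove 'echo' (and run as root) to apply for real.
    echo systemctl mask --now "$u"
done
```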

I'm not sure what the state of the current udev rules is - these may impact some tests and possibly add some unexpected randomness.

Another aspect of our test suite is the 'try-out' of various 'race' moments,
which may eventually need further tuning on even faster hardware to hit the race. That might be harder to set up if the VMs are without 'ssh' access for a developer to enhance the testing (it might be somewhat annoying to try to fix this through individual git commits).

If you are willing to help, I can post a VM image somewhere. I suspect
you're not working with Debian or Ubuntu on a daily basis.  If you had
access to a Debian or Ubuntu system, it would probably be easiest to
just let autopkgtest do the running; autopkgtest provides
`--shell` and `--shell-fail` parameters to drop you into a root shell
after the tests.

For at least the initial diagnostics, it should be sufficient to just expose the results from the failing tests somewhere (basically the contents of the failing tests' subdirectory).
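A hypothetical invocation along those lines (the `.dsc` glob and image name are placeholders; a suitable image can be built with autopkgtest's image-building helpers, and the command is only printed here):

```shell
# Run the lvm2 autopkgtest in a QEMU VM; --shell-fail drops you into a
# root shell inside the VM whenever a test fails, so you can inspect the
# results directory by hand.  Filenames here are illustrative.
cmd="autopkgtest --shell-fail lvm2_*.dsc -- qemu autopkgtest.img"
echo "$cmd"
```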

My ultimate goal is to give a distro confidence that the lvm2
package they're integrating is working correctly.  I'm OK with skipping
tests that produce noisy results.  In this case, having *some*
reliable tests is a huge improvement.

We were kind of trying to get some 'strange' deviations of the Debian package fixed in the past - however, it seemed to lead nowhere... (Ideally, all the 'needed' changes should be set only via configure options, and there should be no need for any extra patches in the Debian distro....)

Also note that we have a Debian VM as part of our testing as well - although a very old version.

Zdenek

PS former Debian member ;)....

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



