On 25/07/19 18:39, Dan Rue wrote:
> To your point Paolo - reporting 'fail' because of a missing kernel
> feature is a generic problem we see across test suites, and causes tons
> of pain and misery for CI people. As a general rule, I'd avoid
> submodules, and even branches that track specific kernels. Rather, and I
> don't know if it's possible in this case, but the best way to manage it
> from both a test author and a test runner POV is to wrap the test in
> kernel feature checks, kernel version checks, kernel config checks, etc.
> Report 'skip' if the environment in which the test is running isn't
> sufficient to run the test. Then, you only have to maintain one version
> of the test suite, users can always use the latest, and critically: all
> failures are actual failures.

Note that kvm-unit-tests are not really testing new kernel features;
those are covered by tools/testing/selftests/kvm. For some of these
kvm-unit-tests there are CPU features that we can check from inside the
virtual machine, but those are easy to handle and they produce SKIP
results just fine.

The problematic ones are tests that cover emulation accuracy. These are
effectively bugfixes, so the failures you see _are_ actual failures. At
the same time, the bugs are usually inoffensive(*), while the fixes are
invasive and prone to cause conflicts in older Linux versions. The
combination means that backporting to stable is not feasible.

Passing the host kernel version would be really ugly because 1) the
tests can run on other hypervisors or emulators or even bare metal, and
of course the host kernel version has no bearing if you're using
userspace emulation; 2) there are thousands of tests that would be
littered with kernel version checks of little significance.

This is why I suggested a submodule: using a submodule effectively
ignores all tests that were added after a given Linus release, and thus
all the failures for which backports are just not going to happen.
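For reference, the skip-gate pattern Dan describes could be sketched
roughly like this; the version numbers, function names, and the exit-77
"skipped" convention (borrowed from automake-style harnesses) are
illustrative, not anything kvm-unit-tests actually does:

```shell
#!/bin/sh
# Sketch: gate a test on the running kernel version and report "skip"
# rather than "fail" when the environment cannot run the test.
# The helper names and version numbers below are hypothetical examples.

version_ge() {
    # True if dotted version $1 >= $2 (relies on GNU sort -V).
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

check_kernel() {
    required="$1"
    running="$2"
    if version_ge "$running" "$required"; then
        echo "run"
    else
        echo "skip: kernel $running older than $required"
    fi
}

check_kernel 4.19 5.2    # a new-enough kernel: prints "run"
check_kernel 4.19 4.14   # too old: prints a "skip: ..." line
# A real wrapper would then "exit 77" (or similar) on the skip path.
```

As the paragraphs above argue, this works for feature checks visible
from the guest, but scales poorly when thousands of tests would each
need host-kernel version checks of little significance.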
However, if Sean's idea of creating a linux-M.N branch in
kvm-unit-tests.git works for you, we can also do that as a stopgap
measure to ease your testing.

Thanks,

Paolo

(*) if they aren't, we *do* mark them for backport!