On Wed, Apr 17, 2019 at 12:10:54AM -0700, Nadav Amit wrote:
> > On Apr 16, 2019, at 11:57 PM, Andrew Jones <drjones@xxxxxxxxxx> wrote:
> > 
> > On Tue, Apr 16, 2019 at 10:18:11PM -0700, nadav.amit@xxxxxxxxx wrote:
> >> From: Nadav Amit <nadav.amit@xxxxxxxxx>
> >> 
> >> Currently, if a test is expected to fail, but surprisingly it passes,
> >> the test is considered as "failing".
> > 
> > It's not "failing", it's failing. If a test is expected to pass then
> > it shouldn't be getting reported with report_xfail().
> 
> I find this terminology confusing. For instance, there are some tests which
> are probabilistic (e.g., test_sti_nmi) - let's assume you expect one to fail
> and it passes, would you say that you encountered a failure?

When testing something probabilistic you should take enough samples that
you can check for the expected frequency of the event.

xfail means the test is expected to fail. If it doesn't fail, then the
software under test changed since the writing of the test. While it's
possible that something got fixed (turning an xfail into a pass), it's
also possible that the xfail started passing because something got even
more broken than before. The big, fat FAIL alerts testers to it.

> 
> > Why would one want to run old kvm-unit-tests on new kvm?
> 
> I can think of a couple of reasons, but I am not going to argue too much.
> 

I'm not arguing either. I'm curious. I can't really see why one would
want to test with old kvm-unit-tests unless bisecting a problem with new
kvm-unit-tests. And, if new kvm-unit-tests brings in some undesired
dependency, then we should discuss removing it, rather than avoiding the
new versions. (That's just a hypothetical if, because I can't think of
any new dependencies we've added.)

Thanks,
drew
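
[Editor's note: a minimal sketch of the xfail classification described
above. This is not the actual kvm-unit-tests lib/report.c code or the
real report_xfail() signature; report_result() and its arguments are
hypothetical, illustrating only the policy that an unexpected pass of an
xfail'd test is reported loudly as FAIL.]

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical helper, not the kvm-unit-tests API: classify a result
 * when the test was marked as expected-to-fail (xfail).  An xfail test
 * that suddenly passes means the software under test changed, so it is
 * reported as FAIL to make testers look at it.
 */
static void report_result(const char *name, bool xfail, bool pass)
{
	if (xfail && pass)
		printf("FAIL: %s (unexpected pass of an xfail test)\n", name);
	else if (xfail)
		printf("XFAIL: %s (expected failure)\n", name);
	else if (pass)
		printf("PASS: %s\n", name);
	else
		printf("FAIL: %s\n", name);
}

int main(void)
{
	/* An expected failure that starts passing gets the big, fat FAIL. */
	report_result("sti_nmi", true, true);
	return 0;
}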