On 17/04/19 09:10, Nadav Amit wrote:
>> It's not "failing", it's failing. If a test is expected to pass then
>> it shouldn't be getting reported with report_xfail().
>
> I find this terminology confusing. For instance, there are some tests which
> are probabilistic (e.g., test_sti_nmi) - let’s assume you expect one to fail
> and it passes, would you say that you encountered a failure?
>

Yes. :) Probabilistic tests should be changed so that the probability of
an incorrect result is very, very small.

XPASS is for known bugs or known virtualization holes, not for
probabilistic tests. Basically, all report_xfail() does is spare you
from having to invert the result of the test, so that you can write

	// i actually isn't zero
	report_xfail("i should be zero", true, i == 0);

instead of

	report("i should be zero, but isn't", i != 0);

XPASS tests are a pleasant kind of failure, but still a surprise that
should be inspected. There are several testsuite harnesses that fail on
XPASS (dejagnu and meson, for example), and others that succeed on XPASS
(for example "prove", the original TAP client). In other cases it is
configurable: PyTest ignores both XFAIL and XPASS results by default,
but it has an "xfail_strict" option that makes an XPASS cause the test
suite to fail.

Paolo
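
P.S. To make the point about probabilistic tests concrete, here is a
minimal sketch (not actual kvm-unit-tests code; sti_nmi_fired_once() is
a hypothetical wrapper around a single run of the probabilistic check):
retry the event often enough that a wrong verdict becomes vanishingly
unlikely, and then a plain report() is all that is needed.

	/* Sketch only; assumes libcflat.h for bool and report().
	 * Repeating the check makes the chance of missing the event
	 * 100 times in a row negligible, so the aggregate result can
	 * be reported unconditionally. */
	static void test_sti_nmi_robust(void)
	{
		bool fired = false;
		int i;

		for (i = 0; i < 100 && !fired; i++)
			fired = sti_nmi_fired_once();	/* hypothetical helper */

		report("NMI observed at least once", fired);
	}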