On Mon, Sep 26, 2022 at 3:29 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Mon, Sep 26, 2022, David Matlack wrote:
> > On Mon, Sep 26, 2022 at 2:45 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > I would much rather this be an assert, i.e. force the test to do TEST_REQUIRE(),
> > > even if that means duplicate code. One of the roles of TEST_REQUIRE() is to
> > > document test requirements.
> >
> > This gets difficult when you consider dirty_log_perf_test. Users can
> > use dirty_log_perf_test in nested mode by passing "-n". But
> > dirty_log_perf_test is an architecture-neutral test, so adding
> > TEST_REQUIRE() there would require an ifdef, and then it gets even more
> > complicated if we add support for AMD or nested on non-x86
> > architectures.
>
> But you're going to have that problem regardless, e.g. perf_test_setup_nested()
> already has
>
>	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
>
> so why not also have the EPT requirement there? IMO, it's far less confusing if
> the hard requirements are collected together, even if that one location isn't ideal.

Ack, can do.

>
> If someone wants to improve the memstress framework, a hook can be added for probing
> nested requirements. In other words, IMO this is a shortcoming of the memstress code,
> not a valid reason to shove the requirement down in common code.

Sorry, I forgot to ask this in my previous reply... Why do you prefer to
decouple test requirements from the test setup code? There is a
maintenance burden to such an approach, so I want to understand the
benefit. e.g. I foresee myself having to send patches in the future to
add TEST_REQUIRE(kvm_cpu_has_ept()), because someone adds a new VMX
test and forgets to test with ept=N.
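
For the "Ack, can do" above, what I have in mind is roughly the sketch
below. The prototype and surrounding context are approximated from the
x86 perf_test_util code and the EPT setup is elided, so treat it as
illustrative rather than exact:

	/*
	 * Sketch only: the real perf_test_setup_nested() does the nested
	 * EPT and vCPU setup as well; only the requirement checks are
	 * shown here, collected in one place.
	 */
	void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus,
				    struct kvm_vcpu *vcpus[])
	{
		/* Hard requirements for running the test in nested mode. */
		TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
		TEST_REQUIRE(kvm_cpu_has_ept());

		/* ... existing nested EPT construction and vCPU setup ... */
	}

That part is fine by me since it keeps both requirements next to the
code that depends on them; my question above is only about the general
preference for keeping requirements out of common setup code.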