On Thu, Dec 09, 2021, Aaron Lewis wrote:
> +static void vmx_exception_test_guest(void)
> +{
> +	handler old_gp = handle_exception(GP_VECTOR, vmx_exception_handler_gp);
> +	handler old_ud = handle_exception(UD_VECTOR, vmx_exception_handler_ud);
> +	handler old_de = handle_exception(DE_VECTOR, vmx_exception_handler_de);
> +	handler old_db = handle_exception(DB_VECTOR, vmx_exception_handler_db);
> +	handler old_bp = handle_exception(BP_VECTOR, vmx_exception_handler_bp);
> +	bool raised_vector = false;
> +	u64 old_cr0, old_rflags;
> +
> +	asm volatile (
> +		/* Return to L1 before starting the tests. */
> +		"vmcall\n\t"
> +
> +		/* #GP handled by L2*/
> +		"mov %[val], %%cr0\n\t"
> +		"vmx_exception_test_skip_gp:\n\t"
> +		"vmcall\n\t"
> +
> +		/* #GP handled by L1 */
> +		"mov %[val], %%cr0\n\t"

I would strongly prefer each of these be a standalone subtest, in the sense that each
test starts from a clean state, configures the environment as needed, then triggers
the exception and checks the results.  I absolutely detest the tests that string a
bunch of scenarios together; they inevitably accrue subtle dependencies between
scenarios and are generally difficult/annoying to debug.

Having a gigantic asm blob is also unnecessary.  #GP can be generated with a
non-canonical access purely in C.  Ditto for #AC, though that may or may not be more
readable.  #DE probably requires assembly to avoid compiler intervention.  #UD and
#BP should be short and sweet.

It should be fairly straightforward to create a framework to handle running each
test, a la the vmx_tests array.  E.g. something like the below (completely
untested).  This way there's no need to skip instructions, thus no need for exposing
a bunch of labels.  Each test is isolated, there's no code pairing between L0 and
L1/L2, and adding new tests or running a specific test is trivial.

static u8 vmx_exception_test_vector;

static void vmx_exception_handler(struct ex_regs *regs)
{
	report(regs->vector == vmx_exception_test_vector,
	       "Handling %s in L2's exception handler",
	       exception_mnemonic(vmx_exception_test_vector));
}

static void vmx_gp_test_guest(void)
{
	*(volatile u64 *)NONCANONICAL = 0;
}

static void handle_exception_in_l2(u8 vector)
{
	handler old_handler = handle_exception(vector, vmx_exception_handler);
	u32 old_eb = vmcs_read(EXC_BITMAP);

	vmx_exception_test_vector = vector;

	/* Clear the bit in the exception bitmap so L2 handles the fault. */
	vmcs_write(EXC_BITMAP, old_eb & ~(1u << vector));
	enter_guest();
	report(vmcs_read(EXI_REASON) == VMX_VMCALL,
	       "%s handled by L2", exception_mnemonic(vector));

	vmcs_write(EXC_BITMAP, old_eb);
	handle_exception(vector, old_handler);
}

static void handle_exception_in_l1(u32 vector)
{
	u32 old_eb = vmcs_read(EXC_BITMAP);

	vmx_exception_test_vector = 0xff;

	/* Set the bit in the exception bitmap so the fault exits to L1. */
	vmcs_write(EXC_BITMAP, old_eb | (1u << vector));
	enter_guest();
	report((vmcs_read(EXI_REASON) == VMX_EXC_NMI) &&
	       ((vmcs_read(EXI_INTR_INFO) & 0xff) == vector),
	       "%s handled by L1", exception_mnemonic(vector));

	vmcs_write(EXC_BITMAP, old_eb);
}

struct vmx_exception_test {
	u8 vector;
	void (*guest_code)(void);
};

static struct vmx_exception_test vmx_exception_tests[] = {
	{ GP_VECTOR, vmx_gp_test_guest },
};

static void vmx_exception_test(void)
{
	struct vmx_exception_test *t;
	int i;

	enter_guest();
	assert_exit_reason(VMX_VMCALL);
	skip_exit_insn();

	for (i = 0; i < ARRAY_SIZE(vmx_exception_tests); i++) {
		t = &vmx_exception_tests[i];

		test_set_guest(t->guest_code);

		handle_exception_in_l2(t->vector);
		handle_exception_in_l1(t->vector);
	}
}
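
For the other vectors called out above, the guest triggers could be equally tiny.
A rough, also-untested sketch (the vmx_{ud,bp,de}_test_guest names are made up for
illustration; #AC is omitted since it additionally needs CPL3 with CR0.AM and
RFLAGS.AC set):

static void vmx_ud_test_guest(void)
{
	/* UD2 architecturally guarantees a #UD. */
	asm volatile ("ud2");
}

static void vmx_bp_test_guest(void)
{
	/* INT3 generates a #BP trap. */
	asm volatile ("int3");
}

static void vmx_de_test_guest(void)
{
	/*
	 * Divide-by-zero is undefined behavior in C, so force the #DE from
	 * asm to keep the compiler from optimizing the fault away.
	 */
	asm volatile ("movl $1, %%eax\n\t"
		      "xorl %%edx, %%edx\n\t"
		      "xorl %%ecx, %%ecx\n\t"
		      "divl %%ecx\n\t"
		      : : : "eax", "ecx", "edx", "cc");
}

Each one then just becomes another entry in the table, e.g.:

static struct vmx_exception_test vmx_exception_tests[] = {
	{ GP_VECTOR, vmx_gp_test_guest },
	{ UD_VECTOR, vmx_ud_test_guest },
	{ BP_VECTOR, vmx_bp_test_guest },
	{ DE_VECTOR, vmx_de_test_guest },
};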