Re: [kvm-unit-tests PATCH v4 3/3] x86: Add test coverage for nested_vmx_reflect_vmexit() testing

On Fri, Jan 21, 2022 at 10:00 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Fri, Jan 21, 2022, Aaron Lewis wrote:
> > Add a framework and test cases to ensure exceptions that occur in L2 are
> > forwarded to the correct place by nested_vmx_reflect_vmexit().
> >
> > Add testing for exceptions: #GP, #UD, #DE, #DB, #BP, and #AC.
> >
> > Signed-off-by: Aaron Lewis <aaronlewis@xxxxxxxxxx>
> > Change-Id: I0196071571671f06165983b5055ed7382fa3e1fb
>
> Don't forget to strip the Change-Id before posting.

D'oh... Good catch.

>
> > ---
> >  x86/unittests.cfg |   9 +++-
> >  x86/vmx_tests.c   | 129 ++++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 137 insertions(+), 1 deletion(-)
> >
> > diff --git a/x86/unittests.cfg b/x86/unittests.cfg
> > index 9a70ba3..6ec7a98 100644
> > --- a/x86/unittests.cfg
> > +++ b/x86/unittests.cfg
> > @@ -288,7 +288,7 @@ arch = i386

> > +[vmx_exception_test]
> > +file = vmx.flat
> > +extra_params = -cpu max,+vmx -append vmx_exception_test
> > +arch = x86_64
> > +groups = vmx nested_exception
> > +timeout = 10
>
> Leave this out (for now), including it in the main "vmx" test is sufficient.
> I'm definitely in favor of splitting up the "vmx" behemoth, but it's probably
> best to do that in a separate commit/series so that we can waste time bikeshedding
> over how to organize things :-)
>

Why leave this out when vmx_pf_exception_test, vmx_pf_no_vpid_test,
vmx_pf_invvpid_test, and vmx_pf_vpid_test each have their own entry?
They seem similar to me.

> > +
> > +static uint64_t usermode_callback(void)
> > +{
> > +     /* Trigger an #AC by writing 8 bytes to a 4-byte aligned address. */
> > +     asm volatile(
> > +             "sub $0x10, %rsp\n\t"
> > +             "movq $0, 0x4(%rsp)\n\t"
> > +             "add $0x10, %rsp\n\t");
>
> Sorry, didn't look closely at this before.  This can simply be:
>
>         asm volatile("movq $0, 0x4(%rsp)\n\t");
>
> as the access is expected to fault.  Or if you want to be paranoid about not
> overwriting the stack:
>
>         asm volatile("movq $0, -0x4(%rsp)\n\t");
>
> It's probably also a good idea to call out that the stack is aligned on a 16-byte
> boundary.  If you were trying to guarantee alignment, then you would need to use
> AND instead of SUB.  E.g.
>
>         asm volatile("push  %rbp\n\t"
>                      "movq  %rsp, %rbp\n\t"
>                      "andq  $-0x10, %rsp\n\t"
>                      "movq  $0, -0x4(%rsp)\n\t"
>                      "movq  %rbp, %rsp\n\t"
>                      "popq  %rbp\n\t");
>
> But my vote would be to just add a comment, I would consider it a test bug if the
> stack isn't properly aligned.
>

I can improve the comment, and I agree that it would be a test bug if
the stack isn't properly aligned.

I'll switch to the more paranoid approach, since I'm not going to
modify RSP.

> > +
> > +     return 0;
> > +}
> > +


