Re: [PATCH 02/12] KVM: x86: Wake up a vCPU when kvm_check_nested_events fails

On Mon, May 24, 2021, Paolo Bonzini wrote:
> On 24/05/21 18:39, Jim Mattson wrote:
> > Without this patch, the accompanying selftest never wakes up from HLT
> > in L2. If you can get the selftest to work without this patch, feel
> > free to drop it.
> 
> Ok, that's a pretty good reason.  I'll try to debug it.

I don't think any debugging is necessary: the hack of unconditionally calling
kvm_check_nested_events() in kvm_vcpu_running() handles the case where a pending
event/exception causes nested VM-Exit, but doesn't handle the case where KVM
can't immediately service the nested VM-Exit.  Because the event is never
serviced (doesn't cause a VM-Exit) and doesn't show up as a pending event,
kvm_vcpu_has_events() and thus kvm_arch_vcpu_runnable() will never become true,
i.e. the vCPU gets stuck in L2 HLT.

Until Jim's selftest, that was limited to the "request immediate exit" case,
which meant that hitting the bug required a VM-Exiting event to collide with a
nested VM-Enter that also put L2 into HLT state, with no other pending wake event.

Jim's selftest adds a more direct path in the form of the -ENXIO returned when
the posted-interrupt (PI) descriptor hits a memslot hole.

The proper fix is what we discussed in the other thread: get kvm_vcpu_has_events()
to return true if there's a nested event pending.

If I'm right, this hack-a-fix should go to stable.  Eww...

> > On Mon, May 24, 2021 at 8:43 AM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> > > 
> > > On 21/05/21 01:03, Jim Mattson wrote:
> > > > At present, there are two reasons why kvm_check_nested_events may
> > > > return a non-zero value:
> > > > 
> > > > 1) we just emulated a shutdown VM-exit from L2 to L1.
> > > > 2) we need to perform an immediate VM-exit from vmcs02.
> > > > 
> > > > In either case, transition the vCPU to "running."
> > > > 
> > > > Signed-off-by: Jim Mattson <jmattson@xxxxxxxxxx>
> > > > Reviewed-by: Oliver Upton <oupton@xxxxxxxxxx>
> > > > ---
> > > >    arch/x86/kvm/x86.c | 4 ++--
> > > >    1 file changed, 2 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > > index d517460db413..d3fea8ea3628 100644
> > > > --- a/arch/x86/kvm/x86.c
> > > > +++ b/arch/x86/kvm/x86.c
> > > > @@ -9468,8 +9468,8 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
> > > > 
> > > >    static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
> > > >    {
> > > > -     if (is_guest_mode(vcpu))
> > > > -             kvm_check_nested_events(vcpu);
> > > > +     if (is_guest_mode(vcpu) && kvm_check_nested_events(vcpu))
> > > > +             return true;
> > > 
> > > That doesn't make the vCPU running.  You still need to go through
> > > vcpu_block, which would properly update the vCPU's mp_state.
> > > 
> > > What is this patch fixing?
> > > 
> > > Paolo
> > > 
> > > >        return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
> > > >                !vcpu->arch.apf.halted);
> > > > 
> > > 
> > 
> 