Re: [PATCH 2/8] KVM: x86: extend struct kvm_vcpu_pv_apf_data with token info

On Fri, May 15, 2020 at 09:18:07PM +0200, Paolo Bonzini wrote:
> On 15/05/20 20:46, Sean Christopherson wrote:
> > Why even bother using 'struct kvm_vcpu_pv_apf_data' for the #PF case?  VMX
> > only requires error_code[31:16]==0 and SVM doesn't vet it at all, i.e. we
> > can (ab)use the error code to indicate an async #PF by setting it to an
> > impossible value, e.g. 0xaaaa (a is for async!).  That particular error code
> > is even enforced by the SDM, which states:
> 
> Possibly, but it's water under the bridge now.

Why is that?  I thought we were redoing the entire thing because the current
ABI is unfixably broken?  In other words, since the guest needs to change,
why are we keeping any of the current async #PF pieces?  E.g. why keep using
#PF instead of usurping something like #NP?

> And the #PF mechanism also has the problem with NMIs that happen before the
> error code is read and page faults happening in the handler.

Hrm, I think there's no unfixable problem except for a pathological
#PF->NMI->#DB->#PF scenario.  But it is a problem :-(

FWIW, the error code isn't a problem, CR2 is the killer.  The issue Andy
originally pointed out is

  #PF: async page fault (KVM_PV_REASON_PAGE_NOT_PRESENT)
     NMI: arrives before CR2 or KVM_PV_REASON_PAGE_NOT_PRESENT is read
       #PF: normal page fault (NMI handler accesses user memory, e.g. perf)

With current async #PF, the problem is that CR2 and apf_reason are out of
sync, not that CR2 or the error code is lost.  E.g. the above could also
happen with a regular #PF on both ends, and obviously that works just fine.

In other words, the error code doesn't suffer the same problem because it's
pushed on the stack, not shoved into a static memory location.

CR2 is the real problem, even though it's saved by the NMI handler.  The
simple case where the NMI handler hits an async #PF before it can save CR2
is avoidable by KVM not injecting #PF if NMIs are blocked.  The pathological
case would be if there's a #DB at the beginning of the NMI handler; the IRET
from the #DB would unblock NMIs and then open up the guest to hitting an
async #PF on the NMI handler before CR2 is saved by the guest. :-(


