Re: [PATCH v4 1/9] KVM: x86/mmu: Bug the VM if KVM attempts to double count an NX huge page

On Wed, Sep 21, 2022, Sean Christopherson wrote:
> On Wed, Sep 21, 2022, Vitaly Kuznetsov wrote:
> > [  962.257992]  ept_fetch+0x504/0x5a0 [kvm]
> > [  962.261959]  ept_page_fault+0x2d7/0x300 [kvm]
> > [  962.287701]  kvm_mmu_page_fault+0x258/0x290 [kvm]
> > [  962.292451]  vmx_handle_exit+0xe/0x40 [kvm_intel]
> > [  962.297173]  vcpu_enter_guest+0x665/0xfc0 [kvm]
> > [  962.307580]  vcpu_run+0x33/0x250 [kvm]
> > [  962.311367]  kvm_arch_vcpu_ioctl_run+0xf7/0x460 [kvm]
> > [  962.316456]  kvm_vcpu_ioctl+0x271/0x670 [kvm]
> > [  962.320843]  __x64_sys_ioctl+0x87/0xc0
> > [  962.324602]  do_syscall_64+0x38/0x90
> > [  962.328192]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
> 
> Ugh, past me completely forgot the basics of shadow paging[*].  The shadow MMU
> can reuse existing shadow pages, whereas the TDP MMU always links in new pages.
> 
> I got turned around by the "doesn't exist" check, which only means "is there
> already a _SPTE_ here", not "is there an existing SP for the target gfn+role that
> can be used".
> 
> I'll drop the series from the queue, send a new pull request, and spin a v5
> targeting 6.2, which amusingly will look a lot like v1...

Huh.  I was expecting more churn, but dropping the offending patch and then
"reworking" the series yields a very trivial overall diff.  

Vitaly, can you easily re-test with the below, i.e. simply delete the KVM_BUG_ON()?
I'll still spin a v5, but assuming all is well I think this can go into 6.1 and
not get pushed out to 6.2.

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 54ee48a87f81..e6f19e605979 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -804,7 +804,15 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-       if (KVM_BUG_ON(!list_empty(&sp->possible_nx_huge_page_link), kvm))
+       /*
+        * If it's possible to replace the shadow page with an NX huge page,
+        * i.e. if the shadow page is the only thing currently preventing KVM
+        * from using a huge page, add the shadow page to the list of "to be
+        * zapped for NX recovery" pages.  Note, the shadow page can already be
+        * on the list if KVM is reusing an existing shadow page, i.e. if KVM
+        * links a shadow page at multiple points.
+        */
+       if (!list_empty(&sp->possible_nx_huge_page_link))
                return;
 
        ++kvm->stat.nx_lpage_splits;
