Re: [PATCH 1/2] KVM: x86/mmu: Protect marking SPs unsync when using TDP MMU with spinlock

On Wed, Aug 11, 2021, Paolo Bonzini wrote:
> On 11/08/21 00:45, Sean Christopherson wrote:
> > Use an entirely new spinlock even though piggybacking tdp_mmu_pages_lock
> > would functionally be ok.  Usurping the lock could degrade performance when
> > building upper level page tables on different vCPUs, especially since the
> > unsync flow could hold the lock for a comparatively long time depending on
> > the number of indirect shadow pages and the depth of the paging tree.
> 
> If we are to introduce a new spinlock, do we need to make it conditional and
> pass it around like this?  It would be simpler to just take it everywhere
> (just like, in patch 2, passing "shared == true" to tdp_mmu_link_page is
> always safe anyway).

It's definitely not necessary to pass it around.  I liked this approach because
the lock is directly referenced only by the TDP MMU.
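
For reference, with either approach the lock itself would just be a bare
spinlock in kvm_arch, initialized alongside the rest of the per-VM MMU state.
Roughly like this (the field name and init site are illustrative, not lifted
from the actual patch):

/* arch/x86/include/asm/kvm_host.h */
struct kvm_arch {
        ...
        /*
         * Protects marking shadow pages unsync when a TDP MMU page fault
         * holds mmu_lock for read instead of write.
         */
        spinlock_t mmu_unsync_pages_lock;
        ...
};

/* arch/x86/kvm/mmu/mmu.c */
void kvm_mmu_init_vm(struct kvm *kvm)
{
        ...
        spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
        ...
}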

My runner-up was to key off of is_tdp_mmu_enabled(), which is not strictly
necessary, but I didn't like checking is_tdp_mmu() this far down the call chain.
E.g. minus comments and lockdep assertions:

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d574c68cbc5c..651256a10cb9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2594,6 +2594,8 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
  */
 int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
 {
+       bool tdp_mmu = is_tdp_mmu_enabled(vcpu->kvm);
+       bool write_locked = !tdp_mmu;
        struct kvm_mmu_page *sp;

        /*
@@ -2617,9 +2619,19 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
                if (sp->unsync)
                        continue;

+               if (!write_locked) {
+                       write_locked = true;
+                       spin_lock(&vcpu->kvm->arch.tdp_mmu_unsync_pages_lock);
+
+                       if (READ_ONCE(sp->unsync))
+                               continue;
+               }
+
                WARN_ON(sp->role.level != PG_LEVEL_4K);
                kvm_unsync_page(vcpu, sp);
        }
+       if (tdp_mmu && write_locked)
+               spin_unlock(&vcpu->kvm->arch.tdp_mmu_unsync_pages_lock);

        /*
         * We need to ensure that the marking of unsync pages is visible
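
(Note the "write_locked = !tdp_mmu" initialization above: when the TDP MMU is
disabled, mmu_lock is already held for write, so the spinlock is never taken,
and the unlock at the end is skipped.)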



All that said, I do not have a strong preference.  Were you thinking something
like this?

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d574c68cbc5c..b622e8a13b8b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2595,6 +2595,7 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
 {
        struct kvm_mmu_page *sp;
+       bool locked = false;

        /*
         * Force write-protection if the page is being tracked.  Note, the page
@@ -2617,9 +2618,34 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
                if (sp->unsync)
                        continue;

+               /*
+                * TDP MMU page faults require an additional spinlock as they
+                * run with mmu_lock held for read, not write, and the unsync
+                * logic is not thread safe.  Take the spinlock regardless of
+                * the MMU type to avoid extra conditionals/parameters; there's
+                * no meaningful penalty if mmu_lock is held for write.
+                */
+               if (!locked) {
+                       locked = true;
+                       spin_lock(&vcpu->kvm->arch.mmu_unsync_pages_lock);
+
+                       /*
+                        * Recheck after taking the spinlock, a different vCPU
+                        * may have since marked the page unsync.  A false
+                        * positive on the unprotected check above is not
+                        * possible as clearing sp->unsync _must_ hold mmu_lock
+                        * for write, i.e. unsync cannot transition from 1->0
+                        * while this CPU holds mmu_lock for read.
+                        */
+                       if (READ_ONCE(sp->unsync))
+                               continue;
+               }
+
                WARN_ON(sp->role.level != PG_LEVEL_4K);
                kvm_unsync_page(vcpu, sp);
        }
+       if (locked)
+               spin_unlock(&vcpu->kvm->arch.mmu_unsync_pages_lock);

        /*
         * We need to ensure that the marking of unsync pages is visible
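
In both variants, the READ_ONCE(sp->unsync) recheck after acquiring the
spinlock is what closes the race: a different vCPU faulting on the same gfn
may have marked the page unsync between the unprotected check at the top of
the loop and the lock acquisition.  The reverse race isn't possible because
clearing sp->unsync requires holding mmu_lock for write.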


