Re: [PATCH v2] KVM: x86: Preserve TDP MMU roots until they are explicitly invalidated

On Mon, Apr 24, 2023 at 05:36:37PM -0700, Sean Christopherson wrote:
> On Mon, Apr 24, 2023, David Matlack wrote:
> > It'd be nice to keep around the lockdep assertion though for the other (and
> > future) callers. The cleanest options I can think of are:
> > 
> > 1. Pass a bool "vm_teardown" to kvm_tdp_mmu_invalidate_all_roots() and
> > use that to gate the lockdep assertion.
> > 2. Take the mmu_lock for read in kvm_mmu_uninit_tdp_mmu() and pass
> > a "shared" bool down to kvm_tdp_mmu_invalidate_all_roots().
> > 
> > Both would satisfy your concern of not blocking teardown on the async
> > worker and my concern of keeping the lockdep check. I think I prefer
> > (1) since, as you point out, taking the mmu_lock at all is
> > unnecessary.
> 
> Hmm, another option:
> 
>  3. Refactor the code so that kvm_arch_init_vm() doesn't call
>     kvm_tdp_mmu_invalidate_all_roots() when VM creation fails, and then the
>     lockdep assertion can be skipped when users_count==0 without hitting the
>     false positive.
> 
> I like (2) the least.  Not sure I prefer (1) versus (3).  I dislike passing bools
> just to ignore lockdep, but reworking code for a "never hit in practice" edge case
> is arguably worse :-/

Agree (2) is the worst option. (3) seems potentially brittle (likely to
trigger a false-positive lockdep warning if the code ever gets
refactored back).
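
For reference, (1) would presumably end up looking something like the
sketch below (untested; "vm_teardown" is just the strawman name from
above), i.e. it means changing the signature and every caller just to
silence lockdep on the teardown path:

void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm, bool vm_teardown)
{
	/*
	 * VM teardown runs after the last user reference is gone, so no
	 * vCPUs can be walking the roots and mmu_lock isn't (and needn't
	 * be) held; only assert for live-VM callers.
	 */
	if (!vm_teardown)
		lockdep_assert_held_write(&kvm->mmu_lock);

	/* ... mark all valid roots invalid, exactly as today ... */
}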

How about throwing some underscores at the problem?

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 649e1773baf1..3e00afc31c71 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -38,6 +38,8 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
 	return true;
 }
 
+static void __kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
+
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 {
 	/*
@@ -45,7 +47,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 	 * for zapping and thus puts the TDP MMU's reference to each root, i.e.
 	 * ultimately frees all roots.
 	 */
-	kvm_tdp_mmu_invalidate_all_roots(kvm);
+	__kvm_tdp_mmu_invalidate_all_roots(kvm);
 
 	/*
 	 * Destroying a workqueue also first flushes the workqueue, i.e. no
@@ -1004,7 +1006,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
  * Note, the asynchronous worker is gifted the TDP MMU's reference.
  * See kvm_tdp_mmu_get_vcpu_root_hpa().
  */
-void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
+static void __kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
 {
 	struct kvm_mmu_page *root;
 
@@ -1026,6 +1028,12 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
 	rcu_read_unlock();
 }
 
+void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
+{
+	lockdep_assert_held_write(&kvm->mmu_lock);
+	__kvm_tdp_mmu_invalidate_all_roots(kvm);
+}
+
 /*
  * Installs a last-level SPTE to handle a TDP page fault.
  * (NPT/EPT violation/misconfiguration)

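With that, any "live VM" path keeps the existing lockdep coverage by
going through the public wrapper, and only teardown uses the
double-underscore variant. Illustrative caller (not part of the diff,
just modeled on how the fast-zap path takes mmu_lock today):

static void example_invalidate_on_live_vm(struct kvm *kvm)
{
	write_lock(&kvm->mmu_lock);
	/* The wrapper asserts mmu_lock is held for write. */
	kvm_tdp_mmu_invalidate_all_roots(kvm);
	write_unlock(&kvm->mmu_lock);
}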