Re: [PATCH 2/3] KVM: x86/mmu: Take "shared" instead of "as_id" in TDP MMU's yield-safe iterator

On 9/16/23 02:39, Sean Christopherson wrote:
Replace the address space ID in for_each_tdp_mmu_root_yield_safe() with a
shared (vs. exclusive) param, and have the walker iterate over all address
spaces as all callers want to process all address spaces.  Drop the @as_id
param as well as the manual address space iteration in callers.

Add the @shared param even though the two current callers pass "false"
unconditionally, as the main reason for refactoring the walker is to
simplify using it to zap invalid TDP MMU roots, which is done with
mmu_lock held for read.

Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>

You know what, I don't really like the "bool shared" arguments anymore.
For example, neither tdp_mmu_next_root nor kvm_tdp_mmu_put_root needs to know whether the lock is taken for read or write; protection is achieved via RCU and tdp_mmu_pages_lock. It's more self-documenting to remove the argument and assert that the lock is taken.
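
For the put path, what I have in mind is something along these lines (only a sketch, with the body simplified from memory, so take the exact shape with a grain of salt):

void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root)
{
	/*
	 * Either read- or write-locked mmu_lock is fine here; the refcount
	 * and the roots list are protected by RCU and tdp_mmu_pages_lock.
	 */
	lockdep_assert_held(&kvm->mmu_lock);

	if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
		return;

	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
	list_del_rcu(&root->link);
	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);

	call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback);
}

lockdep_assert_held() is enough because either flavor of mmu_lock satisfies it, while the actual list manipulation stays under tdp_mmu_pages_lock.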

Likewise, the argument is more or less unnecessary in the for_each_*_tdp_mmu_root_yield_safe() macros. Many users check for the lock before calling them, and all of them either call small functions that do the check or end up calling tdp_mmu_set_spte_atomic() and tdp_mmu_iter_set_spte(), so the per-iteration checks are also overkill.

It may be useful to add a few assertions to make up for the lost check before the first execution of the body of for_each_*_tdp_mmu_root_yield_safe(), but even this is more for documentation reasons than to catch actual bugs.
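
To make that concrete, the plain iterator could shrink to roughly this (sketch only; it assumes tdp_mmu_next_root() also loses the "shared" argument and keeps only the "only_valid" one):

/*
 * No per-iteration check in the body; a single lockdep assertion is folded
 * into the loop condition, so it is still evaluated even when there are no
 * roots and the body never runs.
 */
#define for_each_tdp_mmu_root_yield_safe(_kvm, _root)				\
	for (_root = tdp_mmu_next_root(_kvm, NULL, false);			\
	     ({ lockdep_assert_held(&(_kvm)->mmu_lock); }), _root;		\
	     _root = tdp_mmu_next_root(_kvm, _root, false))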

I'll send a v2.

Paolo

---
  arch/x86/kvm/mmu/mmu.c     |  8 ++------
  arch/x86/kvm/mmu/tdp_mmu.c | 20 ++++++++++----------
  arch/x86/kvm/mmu/tdp_mmu.h |  3 +--
  3 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 59f5e40b8f55..54f94f644b42 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6246,7 +6246,6 @@ static bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_e
  void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
  {
  	bool flush;
-	int i;
 
  	if (WARN_ON_ONCE(gfn_end <= gfn_start))
  		return;
@@ -6257,11 +6256,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
  	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
 
 -	if (tdp_mmu_enabled) {
-		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
-			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
-						      gfn_end, flush);
-	}
+	if (tdp_mmu_enabled)
+		flush = kvm_tdp_mmu_zap_leafs(kvm, gfn_start, gfn_end, flush);
 
  	if (flush)
  		kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 89aaa2463373..7cb1902ae032 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -211,8 +211,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
  #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)	\
  	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id) \
-	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, false, false)
+#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared)			\
+	for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, false);		\
+	     _root;								\
+	     _root = tdp_mmu_next_root(_kvm, _root, _shared, false))		\
+		if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) {		\
+		} else
 
  /*
   * Iterate over all TDP MMU roots.  Requires that mmu_lock be held for write,
@@ -877,12 +881,11 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
   * true if a TLB flush is needed before releasing the MMU lock, i.e. if one or
   * more SPTEs were zapped since the MMU lock was last acquired.
   */
-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
-			   bool flush)
+bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush)
  {
  	struct kvm_mmu_page *root;
 
 -	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
+	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
  		flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
 
  	return flush;
@@ -891,7 +894,6 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
  void kvm_tdp_mmu_zap_all(struct kvm *kvm)
  {
  	struct kvm_mmu_page *root;
-	int i;
 
  	/*
  	 * Zap all roots, including invalid roots, as all SPTEs must be dropped
@@ -905,10 +907,8 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
  	 * is being destroyed or the userspace VMM has exited.  In both cases,
  	 * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
  	 */
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		for_each_tdp_mmu_root_yield_safe(kvm, root, i)
-			tdp_mmu_zap_root(kvm, root, false);
-	}
+	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
+		tdp_mmu_zap_root(kvm, root, false);
  }
 
  /*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index eb4fa345d3a4..bc088953f929 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -20,8 +20,7 @@ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
  void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
  			  bool shared);
-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
-			   bool flush);
+bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush);
  bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
  void kvm_tdp_mmu_zap_all(struct kvm *kvm);
  void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
