When yielding in the TDP MMU iterator, service any pending TLB flush
before dropping RCU protections in anticipation of using the caller's
RCU "lock" as a proxy for vCPUs in the guest.

Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 79a52717916c..55c16680b927 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -732,11 +732,11 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
 		return false;
 
 	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
-		rcu_read_unlock();
-
 		if (flush)
			kvm_flush_remote_tlbs(kvm);
 
+		rcu_read_unlock();
+
 		if (shared)
			cond_resched_rwlock_read(&kvm->mmu_lock);
 		else
-- 
2.34.0.rc2.393.gf8c9666880-goog
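
For reference, below is a rough sketch of how the yield path in
tdp_mmu_iter_cond_resched() reads with this change applied, i.e. with the
remote TLB flush serviced while the RCU read lock is still held. Only the
lines quoted in the hunk above come from the patch; the rest of the function
body (the yielded_gfn forward-progress check, the rcu_read_lock() and
iterator restart after rescheduling) is reconstructed from context and may
not match the actual tree exactly.

	static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
						     struct tdp_iter *iter,
						     bool flush, bool shared)
	{
		/* Don't yield unless forward progress was made since the last yield. */
		if (iter->next_last_level_gfn == iter->yielded_gfn)
			return false;

		if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
			/*
			 * Service any pending TLB flush *before* dropping RCU,
			 * so that holding the RCU read lock can later stand in
			 * as a proxy for vCPUs in the guest.
			 */
			if (flush)
				kvm_flush_remote_tlbs(kvm);

			rcu_read_unlock();

			if (shared)
				cond_resched_rwlock_read(&kvm->mmu_lock);
			else
				cond_resched_rwlock_write(&kvm->mmu_lock);

			/* Reacquire RCU and restart the walk at the current gfn. */
			rcu_read_lock();
			tdp_iter_restart(iter);

			return true;
		}

		return false;
	}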