When yielding in the TDP MMU iterator, service any pending TLB flush
before dropping RCU protections in anticipation of using the caller's RCU
"lock" as a proxy for vCPUs in the guest.

Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Reviewed-by: Ben Gardon <bgardon@xxxxxxxxxx>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 42803d5dbbf9..ef594af246f5 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -730,11 +730,11 @@ static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
 		return false;
 
 	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
-		rcu_read_unlock();
-
 		if (flush)
 			kvm_flush_remote_tlbs(kvm);
 
+		rcu_read_unlock();
+
 		if (shared)
 			cond_resched_rwlock_read(&kvm->mmu_lock);
 		else
-- 
2.35.1.574.g5d30c73bfb-goog
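
For reference, the flow of tdp_mmu_iter_cond_resched() after this change is
sketched below.  Only the lines quoted in the hunk above are verbatim; the
remaining parameters, the forward-progress check, and the re-lock/return
path are reconstructed from context as a best-effort sketch, not the exact
upstream code.

/*
 * Sketch of tdp_mmu_iter_cond_resched() with this patch applied.  Lines
 * outside the quoted hunk are assumptions based on the surrounding diff
 * context and may not match the tree exactly.
 */
static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
							   struct tdp_iter *iter,
							   bool flush, bool shared)
{
	/* Don't yield unless forward progress has been made since the last yield. */
	if (iter->next_last_level_gfn == iter->yielded_gfn)
		return false;

	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
		/*
		 * Service the pending flush while still under RCU protection,
		 * i.e. before yielding, so that the caller's rcu_read_lock()
		 * can later stand in for "vCPUs may be running in the guest".
		 */
		if (flush)
			kvm_flush_remote_tlbs(kvm);

		rcu_read_unlock();

		if (shared)
			cond_resched_rwlock_read(&kvm->mmu_lock);
		else
			cond_resched_rwlock_write(&kvm->mmu_lock);

		/* Reacquire RCU protection before resuming the walk. */
		rcu_read_lock();

		return true;
	}

	return false;
}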