On Thu, Mar 03, 2022, Paolo Bonzini wrote:
> From: Sean Christopherson <seanjc@xxxxxxxxxx>
>
> When yielding in the TDP MMU iterator, service any pending TLB flush
> before dropping RCU protections in anticipation of using the caller's RCU
> "lock" as a proxy for vCPUs in the guest.
>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> Reviewed-by: Ben Gardon <bgardon@xxxxxxxxxx>
> Message-Id: <20220226001546.360188-19-seanjc@xxxxxxxxxx>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>

Reviewed-by: Mingwei Zhang <mizhang@xxxxxxxxxx>

> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index c71debdbc732..3a866fcb5ea9 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -716,11 +716,11 @@ static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
>  		return false;
>
>  	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
> -		rcu_read_unlock();
> -
>  		if (flush)
>  			kvm_flush_remote_tlbs(kvm);
>
> +		rcu_read_unlock();
> +
>  		if (shared)
>  			cond_resched_rwlock_read(&kvm->mmu_lock);
>  		else
> --
> 2.31.1
>
>
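For anyone skimming the thread, here is a rough sketch of how the yield path
of tdp_mmu_iter_cond_resched() reads with this change applied. Everything
outside the quoted hunk (the forward-progress check, re-taking RCU after the
resched, and the iter->yielded bookkeeping) is reconstructed from memory of
the surrounding code, not taken from the patch, so treat it as illustrative
only:

static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
							   struct tdp_iter *iter,
							   bool flush, bool shared)
{
	WARN_ON(iter->yielded);

	/* Ensure forward progress has been made before yielding. */
	if (iter->next_last_level_gfn == iter->yielded_gfn)
		return false;

	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
		/*
		 * The change in this patch: flush while still inside the RCU
		 * read-side critical section, so that the RCU "lock" can
		 * later be used as a proxy for vCPUs being in the guest.
		 */
		if (flush)
			kvm_flush_remote_tlbs(kvm);

		rcu_read_unlock();

		if (shared)
			cond_resched_rwlock_read(&kvm->mmu_lock);
		else
			cond_resched_rwlock_write(&kvm->mmu_lock);

		rcu_read_lock();

		WARN_ON(iter->gfn > iter->next_last_level_gfn);

		iter->yielded = true;
	}

	return iter->yielded;
}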