Pre-check for an mmu_notifier retry on x86 to avoid contending mmu_lock,
which is quite problematic on preemptible kernels due to the way x86's
TDP MMU reacts to mmu_lock contention. If mmu_lock contention is
detected when zapping SPTEs for an mmu_notifier invalidation, the TDP
MMU drops mmu_lock and yields. The idea behind yielding is to let vCPUs
that are trying to fault-in memory make forward progress while the
invalidation is ongoing. This works because x86 uses the precise(ish)
version of retry, which checks for hva overlap. At least, it works so
long as vCPUs aren't hitting the region that's being zapped.

Yielding turns out to be really bad when the vCPU is trying to fault-in
a page that *is* covered by the invalidation, because the vCPU ends up
retrying over and over, which puts mmu_lock under constant contention,
and ultimately causes the invalidation to take much longer due to the
zapping task constantly yielding. And in the worst case scenario, if the
invalidation is finding SPTEs to zap, every yield will trigger a remote
(*cough* VM-wide) TLB flush.

Sean Christopherson (2):
  KVM: Allow calling mmu_invalidate_retry_hva() without holding mmu_lock
  KVM: x86/mmu: Retry fault before acquiring mmu_lock if mapping is changing

 arch/x86/kvm/mmu/mmu.c   |  3 +++
 include/linux/kvm_host.h | 17 ++++++++++++++---
 2 files changed, 17 insertions(+), 3 deletions(-)


base-commit: fff2e47e6c3b8050ca26656693caa857e3a8b740
-- 
2.42.0.rc2.253.gd59a3bf2b4-goog
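
To make the shape of the fix concrete, below is a rough sketch of the
pre-check, illustrative only and not the literal diff; the exact hook
point and guards in the actual patches may differ. It reuses the
existing x86/KVM names (fault->mmu_seq, fault->hva, fault->slot,
RET_PF_RETRY, mmu_invalidate_retry_hva()), and simply shows the helper
being consulted before mmu_lock is ever taken, which patch 1 makes safe.

	/*
	 * Sketch: snapshot the invalidation sequence count as usual, then
	 * bail out of the fault with RET_PF_RETRY if an in-progress
	 * mmu_notifier invalidation overlaps the faulting hva, *before*
	 * acquiring mmu_lock.
	 */
	fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq;
	smp_rmb();

	/* Skip the pre-check for no-slot faults; there's no hva to overlap. */
	if (fault->slot &&
	    mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva))
		return RET_PF_RETRY;

With something along these lines, a vCPU faulting on an hva that is
covered by the invalidation retries cheaply instead of bouncing
mmu_lock, so the zapping task stops yielding and the invalidation can
finish sooner.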