Call cond_resched_lock() when zapping MMIO sptes to reschedule if needed or to
release and reacquire mmu_lock in case of contention.  There is no need to
flush or zap when temporarily dropping mmu_lock as zapping MMIO sptes is done
while holding the memslots lock and with the "update in-progress" bit set in
the memslots generation, which disables MMIO spte caching.  The walk does need
to be restarted if mmu_lock is dropped as the active pages list may be
modified.

Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
---
 arch/x86/kvm/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2411083187a4..3c093fdb210d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5951,7 +5951,8 @@ static void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (!sp->mmio_cached)
 			continue;
-		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
+		    cond_resched_lock(&kvm->mmu_lock))
 			goto restart;
 	}
 
--
2.20.1
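
For context only (not part of the patch): below is a minimal sketch of how the
whole kvm_mmu_zap_mmio_sptes() walk reads with this change applied, assuming
the surrounding locking and the final kvm_mmu_commit_zap_page() call match the
upstream function at this point; only the hunk above is authoritative.

/*
 * Sketch: the spin_lock()/spin_unlock() of mmu_lock and the
 * kvm_mmu_commit_zap_page() call are assumed context, they are not
 * part of the hunk in this patch.
 */
static void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		if (!sp->mmio_cached)
			continue;
		/*
		 * Restart the walk if the page was zapped or if mmu_lock was
		 * dropped by cond_resched_lock(), as the active pages list
		 * may have been modified.  No flush/zap is needed while the
		 * lock is temporarily released because MMIO spte caching is
		 * disabled by the "update in-progress" generation bit.
		 */
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
		    cond_resched_lock(&kvm->mmu_lock))
			goto restart;
	}

	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}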