This patch set mitigates another mmu_lock hold time issue.  Although this
is not a complete solution, and I am already thinking about further work,
this alone reduces the lock hold time to some extent.

Takuya Yoshikawa (8):
  KVM: MMU: Fix and clean up for_each_gfn_* macros
  KVM: MMU: Use list_for_each_entry_safe in kvm_mmu_commit_zap_page()
  KVM: MMU: Add a parameter to kvm_mmu_prepare_zap_page() to update the
    next position
  KVM: MMU: Introduce for_each_gfn_indirect_valid_sp_safe macro
  KVM: MMU: Delete hash_link node in kvm_mmu_prepare_zap_page()
  KVM: MMU: Introduce free_zapped_mmu_pages() for freeing mmu pages in
    a list
  KVM: MMU: Split out free_zapped_mmu_pages() from
    kvm_mmu_commit_zap_page()
  KVM: MMU: Move free_zapped_mmu_pages() out of the protection of
    mmu_lock

 arch/x86/kvm/mmu.c |  149 +++++++++++++++++++++++++++++++++++-----------------
 1 files changed, 101 insertions(+), 48 deletions(-)

-- 
1.7.5.4