On Wed, Jan 23, 2013 at 07:12:31PM +0900, Takuya Yoshikawa wrote:
> This patch set mitigates another mmu_lock hold time issue.  Although
> this is not enough and I'm thinking of additional work already, this
> alone can reduce the lock hold time to some extent.
>
> Takuya Yoshikawa (8):
>   KVM: MMU: Fix and clean up for_each_gfn_* macros
>   KVM: MMU: Use list_for_each_entry_safe in kvm_mmu_commit_zap_page()
>   KVM: MMU: Add a parameter to kvm_mmu_prepare_zap_page() to update the next position
>   KVM: MMU: Introduce for_each_gfn_indirect_valid_sp_safe macro
>   KVM: MMU: Delete hash_link node in kvm_mmu_prepare_zap_page()
>   KVM: MMU: Introduce free_zapped_mmu_pages() for freeing mmu pages in a list
>   KVM: MMU: Split out free_zapped_mmu_pages() from kvm_mmu_commit_zap_page()
>   KVM: MMU: Move free_zapped_mmu_pages() out of the protection of mmu_lock
>
>  arch/x86/kvm/mmu.c | 149 +++++++++++++++++++++++++++++++++++-----------------
>  1 files changed, 101 insertions(+), 48 deletions(-)

We need a limit on the number of pages whose freeing is delayed.  Note
that n_used_mmu_pages is consulted both by the slab shrinker (to decide
how much pressure to apply) and by the allocators (to decide when to
allocate more pages).  Your series allows n_used_mmu_pages to become
inaccurate, which is fine as long as the error is bounded.

Perhaps cap invalid_pages at a maximum of 64 pages per round, and if
that is exceeded, release the memory inside mmu_lock, one page at a
time?
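
To make the suggestion concrete, here is a minimal sketch of one
possible reading, written against the upstream kvm_mmu_commit_zap_page()
and invalid_list names.  KVM_MAX_DELAYED_FREE is a name I made up, and
the body of your free_zapped_mmu_pages() is only guessed at, so treat
this as an illustration rather than a patch:

/*
 * Sketch only: KVM_MAX_DELAYED_FREE is an assumed constant, and
 * free_zapped_mmu_pages() is the helper this series introduces;
 * its exact behavior is assumed here, not quoted.
 */
#define KVM_MAX_DELAYED_FREE	64

static void kvm_mmu_commit_zap_page(struct kvm *kvm,
				    struct list_head *invalid_list)
{
	struct kvm_mmu_page *sp;
	int nr_delayed = 0;

	if (list_empty(invalid_list))
		return;

	/* Unchanged: remote TLBs must be flushed before any page is reused. */
	kvm_flush_remote_tlbs(kvm);

	list_for_each_entry(sp, invalid_list, link)
		nr_delayed++;

	/*
	 * Small batch: leave the pages on invalid_list so the caller can
	 * hand them to free_zapped_mmu_pages() after dropping mmu_lock.
	 * n_used_mmu_pages is then stale by at most KVM_MAX_DELAYED_FREE.
	 */
	if (nr_delayed <= KVM_MAX_DELAYED_FREE)
		return;

	/*
	 * Large batch: free everything now, while still holding mmu_lock,
	 * so the n_used_mmu_pages error cannot grow without bound.
	 */
	free_zapped_mmu_pages(kvm, invalid_list);
}

Either way, the accounting error seen by the shrinker and by the
allocators is bounded by KVM_MAX_DELAYED_FREE pages, which is the whole
point of the cap.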