This patch set makes kvm_mmu_slot_remove_write_access() rmap based and adds
conditional rescheduling to it.

The motivation for this change is, of course, to reduce the mmu_lock hold
time when we start dirty logging for a large memory slot.  You may not see
the problem if you just give 8GB or less of memory to the guest with THP
enabled on the host -- this is for the worst case.

IMPORTANT NOTE (not about this patch set):

I have hit the following bug many times with the current next branch, even
WITHOUT my patches.  Although I do not know a way to reproduce this yet, it
seems that something was broken around slot->dirty_bitmap.  I am now
investigating the new code in __kvm_set_memory_region().

The bug:
[ 575.238063] BUG: unable to handle kernel paging request at 00000002efe83a77
[ 575.238185] IP: [<ffffffffa05f9619>] mark_page_dirty_in_slot+0x19/0x20 [kvm]
[ 575.238308] PGD 0
[ 575.238343] Oops: 0002 [#1] SMP

The call trace:
[ 575.241207] Call Trace:
[ 575.241257] [<ffffffffa05f96b1>] kvm_write_guest_cached+0x91/0xb0 [kvm]
[ 575.241370] [<ffffffffa0610db9>] kvm_arch_vcpu_ioctl_run+0x1109/0x12c0 [kvm]
[ 575.241488] [<ffffffffa060fd55>] ? kvm_arch_vcpu_ioctl_run+0xa5/0x12c0 [kvm]
[ 575.241595] [<ffffffff81679194>] ? mutex_lock_killable_nested+0x274/0x340
[ 575.241706] [<ffffffffa05faf80>] ? kvm_set_ioapic_irq+0x20/0x20 [kvm]
[ 575.241813] [<ffffffffa05f71c9>] kvm_vcpu_ioctl+0x559/0x670 [kvm]
[ 575.241913] [<ffffffffa05f8a58>] ? kvm_vm_ioctl+0x1b8/0x570 [kvm]
[ 575.242007] [<ffffffff8101b9d3>] ? native_sched_clock+0x13/0x80
[ 575.242125] [<ffffffff8101ba49>] ? sched_clock+0x9/0x10
[ 575.242208] [<ffffffff8109015d>] ? sched_clock_cpu+0xbd/0x110
[ 575.242298] [<ffffffff811a914c>] ? fget_light+0x3c/0x140
[ 575.242381] [<ffffffff8119dfa8>] do_vfs_ioctl+0x98/0x570
[ 575.242463] [<ffffffff811a91b1>] ? fget_light+0xa1/0x140
[ 575.246393] [<ffffffff811a914c>] ? fget_light+0x3c/0x140
[ 575.250363] [<ffffffff8119e511>] sys_ioctl+0x91/0xb0
[ 575.254327] [<ffffffff81684c19>] system_call_fastpath+0x16/0x1b

Takuya Yoshikawa (7):
  KVM: Write protect the updated slot only when we start dirty logging
  KVM: MMU: Remove unused parameter level from __rmap_write_protect()
  KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based
  KVM: x86: Remove unused slot_bitmap from kvm_mmu_page
  KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself
  KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself
  KVM: Conditionally reschedule when kvm_mmu_slot_remove_write_access()
    takes a long time

 Documentation/virtual/kvm/mmu.txt |    7 ----
 arch/x86/include/asm/kvm_host.h   |    5 ---
 arch/x86/kvm/mmu.c                |   56 +++++++++++++++++++-----------------
 arch/x86/kvm/x86.c                |   13 +++++---
 virt/kvm/kvm_main.c               |    1 -
 5 files changed, 38 insertions(+), 44 deletions(-)

-- 
1.7.5.4