From: Jérôme Glisse <jglisse@xxxxxxxxxx>

Since the changes to the mmu notifier API, the change_pte() optimization
was lost for kvm. This re-enables it whenever a pte goes from read-write
to read-only with the same pfn, or from read-only to read-write with a
different pfn.

It is safe to update the secondary MMUs, because the primary MMU pte
invalidation must have already happened with a ptep_clear_flush() before
set_pte_at_notify() is invoked (and thus before the change_pte()
callback).

Signed-off-by: Jérôme Glisse <jglisse@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
Cc: kvm@xxxxxxxxxxxxxxx
---
 virt/kvm/kvm_main.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5ecea812cb6a..fec155c2d7b8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -369,6 +369,14 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	int need_tlb_flush = 0, idx;
 	int ret;
 
+	/*
+	 * Nothing to do when MMU_NOTIFIER_USE_CHANGE_PTE is set: it means
+	 * that change_pte() will be called, and in that situation it is
+	 * safe to rely on change_pte() alone.
+	 */
+	if (range->event & MMU_NOTIFIER_USE_CHANGE_PTE)
+		return 0;
+
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
 	/*
@@ -398,6 +406,14 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 {
 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
 
+	/*
+	 * Nothing to do when MMU_NOTIFIER_USE_CHANGE_PTE is set: it means
+	 * that change_pte() will be called, and in that situation it is
+	 * safe to rely on change_pte() alone.
+	 */
+	if (range->event & MMU_NOTIFIER_USE_CHANGE_PTE)
+		return;
+
 	spin_lock(&kvm->mmu_lock);
 	/*
 	 * This sequence increase will notify the kvm page fault that
-- 
2.17.1
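
For reference, a minimal sketch of the primary MMU ordering that the
commit message relies on. This is illustrative pseudocode only, loosely
modeled on the kernel's COW write path (e.g. wp_page_copy() in
mm/memory.c); the function primary_mmu_replace_pte() is hypothetical and
is not part of this patch:

	/*
	 * Hypothetical, condensed view of a primary MMU pte replacement.
	 * The real logic lives in mm/memory.c; only the ordering of
	 * ptep_clear_flush() vs. set_pte_at_notify() matters here.
	 */
	static void primary_mmu_replace_pte(struct vm_area_struct *vma,
					    unsigned long addr, pte_t *ptep,
					    pte_t newpte)
	{
		/*
		 * Step 1: clear the old pte and flush the TLB. After this
		 * point no CPU can start a new access through the old
		 * mapping, so the old translation is dead.
		 */
		ptep_clear_flush(vma, addr, ptep);

		/*
		 * Step 2: install the new pte and notify secondary MMUs.
		 * set_pte_at_notify() invokes the mmu notifier
		 * change_pte() callback, which lets KVM update its sptes
		 * in place instead of tearing them down via
		 * invalidate_range_start()/invalidate_range_end().
		 */
		set_pte_at_notify(vma->vm_mm, addr, ptep, newpte);
	}

Because step 1 always precedes step 2, kvm can skip the invalidate
range callbacks when MMU_NOTIFIER_USE_CHANGE_PTE is set and rely on
change_pte() alone, which is exactly what the two early returns above
do.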