kvm_tdp_mmu_zap_all is intended to visit all roots and zap their page
tables, which flushes the accessed and dirty bits out to the Linux
"struct page"s. Missing some of the roots has catastrophic effects,
because kvm_tdp_mmu_zap_all is called when the MMU notifier is being
removed and any PTEs left behind might become dangling by the time
kvm_arch_destroy_vm tears down the roots for good.

Unfortunately that is exactly what kvm_tdp_mmu_zap_all is doing: it
visits all roots via for_each_tdp_mmu_root_yield_safe, which in turn
uses kvm_tdp_mmu_get_root to skip invalid roots. If a root is invalid
at the time of kvm_tdp_mmu_zap_all, its page tables therefore remain
in place and are only zapped later, during kvm_arch_destroy_vm, by
which point the PTEs may already be dangling.

To fix this, ensure that kvm_tdp_mmu_zap_all goes over all roots,
including the invalid ones. The easiest way to do so is for
kvm_tdp_mmu_zap_all to do the same as kvm_mmu_zap_all_fast: invalidate
all roots, and then zap the invalid roots. The only difference is that
there is no need to go through tdp_mmu_zap_spte_atomic.

Paolo

Paolo Bonzini (2):
  KVM: x86: allow kvm_tdp_mmu_zap_invalidated_roots with write-locked
    mmu_lock
  KVM: x86: zap invalid roots in kvm_tdp_mmu_zap_all

 arch/x86/kvm/mmu/mmu.c     |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c | 42 ++++++++++++++++++++------------------
 arch/x86/kvm/mmu/tdp_mmu.h |  2 +-
 3 files changed, 24 insertions(+), 22 deletions(-)

-- 
2.31.1