On Fri, Feb 11, 2022, Sean Christopherson wrote:
> On Fri, Feb 11, 2022, Paolo Bonzini wrote:
> > On 2/11/22 01:54, Sean Christopherson wrote:
> > > > > @@ -3242,8 +3245,7 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
> > > > >  						   &invalid_list);
> > > > >  	if (free_active_root) {
> > > > > -		if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
> > > > > -		    (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) {
> > > > > +		if (to_shadow_page(mmu->root.hpa)) {
> > > > >  			mmu_free_root_page(kvm, &mmu->root.hpa, &invalid_list);
> > > > >  		} else if (mmu->pae_root) {
> > >
> > > Gah, this is technically wrong. It shouldn't truly matter, but it's wrong. root.hpa
> > > will not be backed by a shadow page if the root is pml4_root or pml5_root, in which
> > > case freeing the PAE root is wrong. They should obviously be invalid already, but
> > > it's a little confusing because KVM wanders down a path that may not be relevant
> > > to the current mode.
> >
> > pml4_root and pml5_root are dummy, and the first "real" level of page tables
> > is stored in pae_root for that case too, so I think that should DTRT.
>
> Ugh, completely forgot that detail. You're correct. Probably worth a comment?

Actually, can't this be

	if (to_shadow_page(mmu->root.hpa)) {
		...
	} else if (!WARN_ON(!mmu->pae_root)) {
		...
	}

now that it's wrapped with VALID_PAGE(root.hpa)?
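
Untested, and the pae_root loop below is only my recollection of the existing code
rather than anything from this patch, but as a rough sketch the whole
free_active_root block would then read something like:

	if (free_active_root) {
		if (to_shadow_page(mmu->root.hpa)) {
			/* The root is backed by a shadow page, free it directly. */
			mmu_free_root_page(kvm, &mmu->root.hpa, &invalid_list);
		} else if (!WARN_ON(!mmu->pae_root)) {
			/*
			 * pml4_root/pml5_root are dummies, the first "real"
			 * level of page tables lives in pae_root, so free the
			 * PAE roots individually.
			 */
			for (i = 0; i < 4; ++i) {
				if (!IS_VALID_PAE_ROOT(mmu->pae_root[i]))
					continue;

				mmu_free_root_page(kvm, &mmu->pae_root[i],
						   &invalid_list);
				mmu->pae_root[i] = INVALID_PAE_ROOT;
			}
		}

		mmu->root.hpa = INVALID_PAGE;
		mmu->root.pgd = 0;
	}

The WARN_ON should never fire: we only get here with a valid root.hpa, and a root
that isn't backed by a shadow page can only have come from pae_root, so pae_root
being NULL at that point would mean the root was somehow built without it.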