On Fri, Dec 13, 2024 at 02:57:10PM -0500, Paolo Bonzini wrote:
> From: Rick Edgecombe <rick.p.edgecombe@xxxxxxxxx>
> 
> Don't zap valid mirror roots in kvm_tdp_mmu_zap_all(), which in effect
> is only direct roots (invalid and valid).
> 
> For TDX, kvm_tdp_mmu_zap_all() is only called during MMU notifier
> release. Since, mirrored EPT comes from guest mem, it will never be
> mapped to userspace, and won't apply. But in addition to be unnecessary,
> mirrored EPT is cleaned up in a special way during VM destruction.
> 
> Pass the KVM_INVALID_ROOTS bit into __for_each_tdp_mmu_root_yield_safe()
> as well, to clean up invalid direct roots, as is the current behavior.
> 
> Co-developed-by: Yan Zhao <yan.y.zhao@xxxxxxxxx>
> Signed-off-by: Yan Zhao <yan.y.zhao@xxxxxxxxx>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@xxxxxxxxx>
> Message-ID: <20240718211230.1492011-18-rick.p.edgecombe@xxxxxxxxx>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 94c464ce1d12..e08c60834d0c 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -999,19 +999,23 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
> 	struct kvm_mmu_page *root;
> 
> 	/*
> -	 * Zap all roots, including invalid roots, as all SPTEs must be dropped
> -	 * before returning to the caller.  Zap directly even if the root is
> -	 * also being zapped by a worker.  Walking zapped top-level SPTEs isn't
> -	 * all that expensive and mmu_lock is already held, which means the
> -	 * worker has yielded, i.e. flushing the work instead of zapping here
> -	 * isn't guaranteed to be any faster.
> +	 * Zap all roots, except valid mirror roots, as all direct SPTEs must

Since specifying "KVM_DIRECT_ROOTS | KVM_INVALID_ROOTS" now zaps only
valid direct roots plus invalid direct roots, do we need to update the
comment accordingly? e.g. to

	 * Zap all direct roots, including invalid direct roots, as all direct
	 * SPTEs must be dropped before returning to the caller. For TDX, mirror
	 * roots don't need handling in response to the mmu notifier (the caller).

> +	 * be dropped before returning to the caller. For TDX, mirror roots
> +	 * don't need handling in response to the mmu notifier (the caller) and
> +	 * they also won't be invalid until the VM is being torn down.
> +	 *
> +	 * Zap directly even if the root is also being zapped by a worker.
> +	 * Walking zapped top-level SPTEs isn't all that expensive and mmu_lock
> +	 * is already held, which means the worker has yielded, i.e. flushing
> +	 * the work instead of zapping here isn't guaranteed to be any faster.
> 	 *
> 	 * A TLB flush is unnecessary, KVM zaps everything if and only the VM
> 	 * is being destroyed or the userspace VMM has exited. In both cases,
> 	 * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
> 	 */
> 	lockdep_assert_held_write(&kvm->mmu_lock);
> -	for_each_tdp_mmu_root_yield_safe(kvm, root)
> +	__for_each_tdp_mmu_root_yield_safe(kvm, root, -1,
> +					   KVM_DIRECT_ROOTS | KVM_INVALID_ROOTS)
> 		tdp_mmu_zap_root(kvm, root, false);
> }
> 
> -- 
> 2.43.5
> 
> 