On Thu, Mar 03, 2022, Sean Christopherson wrote:
> On Thu, Mar 03, 2022, Mingwei Zhang wrote:
> > On Thu, Mar 03, 2022, Sean Christopherson wrote:
> > > On Wed, Mar 02, 2022, Mingwei Zhang wrote:
> > > > On Sat, Feb 26, 2022, Sean Christopherson wrote:
> > > > > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > > > > index 12866113fb4f..e35bd88d92fd 100644
> > > > > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > > > > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > > > > @@ -93,7 +93,15 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
> > > > >  	list_del_rcu(&root->link);
> > > > >  	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
> > > > >
> > > > > -	zap_gfn_range(kvm, root, 0, -1ull, false, false, shared);
> > > > > +	/*
> > > > > +	 * A TLB flush is not necessary as KVM performs a local TLB flush when
> > > > > +	 * allocating a new root (see kvm_mmu_load()), and when migrating vCPU
> > > > > +	 * to a different pCPU.  Note, the local TLB flush on reuse also
> > > > > +	 * invalidates any paging-structure-cache entries, i.e. TLB entries for
> > > > > +	 * intermediate paging structures, that may be zapped, as such entries
> > > > > +	 * are associated with the ASID on both VMX and SVM.
> > > > > +	 */
> > > > > +	(void)zap_gfn_range(kvm, root, 0, -1ull, false, false, shared);
> > > >
> > > > Understood that we could avoid the TLB flush here. Just curious why the
> > > > "(void)" is needed here? Is it for compile time reason?
> > >
> > > Nope, no functional purpose, though there might be some "advanced" warning or
> > > static checkers that care.
> > >
> > > The "(void)" is to communicate to human readers that the result is intentionally
> > > ignored, e.g. to reduce the probability of someone "fixing" the code by acting on
> > > the result of zap_gfn_range().  The comment should suffice, but it's nice to have
> > > the code be self-documenting as much as possible.
> >
> > Right, I got the point. Thanks.
> >
> > Coming back. It seems that I pretended to understand that we should
> > avoid the TLB flush without really knowing why.
> >
> > I mean, leaving (part of the) stale TLB entries unflushed will still be
> > dangerous right? Or am I missing something that guarantees to flush the
> > local TLB before returning to the guest? For instance,
> > kvm_mmu_{re,}load()?
>
> Heh, if SVM's ASID management wasn't a mess[*], it'd be totally fine.  The idea,
> and what EPT architectures mandates, is that each TDP root is associated with an
> ASID.  So even though there may be stale entries in the TLB for a root, because
> that root is no longer used those stale entries are unreachable.  And if KVM ever
> happens to reallocate the same physical page for a root, that's ok because KVM must
> be paranoid and flush that root (see code comment in this patch).
>
> What we're missing on SVM is proper ASID handling.  If KVM uses ASIDs the way AMD
> intends them to be used, then this works as intended because each root is again
> associated with a specific ASID, and KVM just needs to flush when (re)allocating
> a root and when reusing an ASID (which it already handles).
>
> [*] https://lore.kernel.org/all/Yh%2FJdHphCLOm4evG@xxxxxxxxxx

Oh, putting the AMD issues aside for now, I think I was previously too
narrowly focused on the zapping logic.

So, I originally thought that anytime we want to zap, we have to do the
following things in strict order (rough sketch below):

1) zap SPTEs.
2) flush TLBs.
3) flush cache (AMD SEV only).
4) deallocate shadow pages.
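
To make the ordering concrete, here is a rough sketch of what I mean; the
helpers zap_sptes(), flush_remote_tlbs(), sev_guest(), flush_cache() and
free_shadow_pages() are made-up names purely for illustration, not the
real KVM functions:

	/* Strict ordering: zap -> flush TLBs -> flush cache -> free. */
	static void zap_root_strict(struct kvm *kvm, struct kvm_mmu_page *root)
	{
		/* 1) Zap the SPTEs so no new translations can be created. */
		zap_sptes(kvm, root);

		/* 2) Flush TLBs so stale translations can no longer be used. */
		flush_remote_tlbs(kvm);

		/* 3) Flush the cache before the memory is reused (AMD SEV only). */
		if (sev_guest(kvm))
			flush_cache(kvm, root);

		/* 4) Only now is it safe to free the shadow pages. */
		free_shadow_pages(kvm, root);
	}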
However, if we have already invalidated the EPTP (pgd pointer), then step
2) becomes optional, since the stale TLB entries are no longer usable by
the guest due to the change of ASID. Am I understanding the point
correctly?

So, for all invalidated roots, the assumption is that we have already
called "kvm_reload_remote_mmus()", which basically updates the ASID.
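
In other words, for the invalidated-root case I picture the same sketch
without the explicit flush (again, the helper names are made up; the
point is only that the flush is deferred to root (re)allocation, i.e.
kvm_mmu_load()):

	/*
	 * Invalidated root: vCPUs have already been told to reload, and the
	 * local TLB flush done when loading a new root takes care of stale
	 * entries, so the zap path itself does not need to flush.
	 */
	static void zap_invalidated_root(struct kvm *kvm, struct kvm_mmu_page *root)
	{
		zap_sptes(kvm, root);		/* 1) zap SPTEs              */
						/* 2) no TLB flush here      */
		if (sev_guest(kvm))
			flush_cache(kvm, root);	/* 3) presumably still needed */
		free_shadow_pages(kvm, root);	/* 4) free shadow pages      */
	}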