On Fri, Mar 10, 2023 at 04:22:42PM -0800, Sean Christopherson wrote:
...
> -static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
> -			struct kvm_memory_slot *slot,
> -			struct kvm_page_track_notifier_node *node)
> -{
> -	kvm_mmu_zap_all_fast(kvm);
> -}
> -
>  int kvm_mmu_init_vm(struct kvm *kvm)
>  {
>  	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
> @@ -6110,7 +6103,6 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>  	}
>  
>  	node->track_write = kvm_mmu_pte_write;
> -	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>  	kvm_page_track_register_notifier(kvm, node);
>  
>  	kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f706621c35b8..29dd6c97d145 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12662,6 +12662,8 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm)
>  void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
>  				   struct kvm_memory_slot *slot)
>  {
> +	kvm_mmu_zap_all_fast(kvm);

Could we still call kvm_mmu_invalidate_zap_pages_in_memslot() here?

As far as I know, for TDX, its version of
kvm_mmu_invalidate_zap_pages_in_memslot() is like

static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
						    struct kvm_memory_slot *slot,
						    struct kvm_page_track_notifier_node *node)
{
	if (kvm_gfn_shared_mask(kvm))
		kvm_mmu_zap_memslot(kvm, slot);
	else
		kvm_mmu_zap_all_fast(kvm);
}

Maybe this kind of judgment is better confined to mmu.c?

Thanks
Yan

> +
>  	kvm_page_track_flush_slot(kvm, slot);
>  }
> 
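
P.S. For illustration only, a sketch of how confining that judgment to
mmu.c might look, if mmu.c exported a single zap helper that x86.c calls
from kvm_arch_flush_shadow_memslot(). The name kvm_mmu_zap_pages_in_memslot()
is made up here; kvm_gfn_shared_mask() and kvm_mmu_zap_memslot() come from
the TDX series, not upstream:

/* In mmu.c: keep the TDX vs. non-TDX decision internal to the MMU. */
void kvm_mmu_zap_pages_in_memslot(struct kvm *kvm,
				  struct kvm_memory_slot *slot)
{
	/*
	 * With a shared GFN mask (TDX), zap only the memslot's range
	 * to avoid invalidating all mappings; otherwise fall back to
	 * the fast zap-all path, as upstream does today.
	 */
	if (kvm_gfn_shared_mask(kvm))
		kvm_mmu_zap_memslot(kvm, slot);
	else
		kvm_mmu_zap_all_fast(kvm);
}

x86.c would then call kvm_mmu_zap_pages_in_memslot(kvm, slot) instead of
kvm_mmu_zap_all_fast(kvm), and never see kvm_gfn_shared_mask() at all.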