Re: [PATCH] KVM: x86/mmu: fix KVM_X86_QUIRK_SLOT_ZAP_ALL for shadow MMU

On Wed, Oct 09, 2024, Yan Zhao wrote:
> On Fri, Oct 04, 2024 at 09:56:07AM -0700, Sean Christopherson wrote:
> > >  arch/x86/kvm/mmu/mmu.c | 60 ++++++++++++++++++++++++++++++++----------
> > >  1 file changed, 46 insertions(+), 14 deletions(-)
> > > 
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index e081f785fb23..912bad4fa88c 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -1884,10 +1884,14 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp)
> > >  		if (is_obsolete_sp((_kvm), (_sp))) {			\
> > >  		} else
> > >  
> > > -#define for_each_gfn_valid_sp_with_gptes(_kvm, _sp, _gfn)		\
> > > +#define for_each_gfn_valid_sp(_kvm, _sp, _gfn)				\
> > >  	for_each_valid_sp(_kvm, _sp,					\
> > >  	  &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])	\
> > > -		if ((_sp)->gfn != (_gfn) || !sp_has_gptes(_sp)) {} else
> > > +		if ((_sp)->gfn != (_gfn)) {} else
> > 
> > I don't think we should provide this iterator, because it won't do what most people
> > would expect it to do.  Specifically, the "round gfn for level" adjustment that
> > is done for direct SPs means that the exact gfn comparison will not get a match,
> > even when a SP does "cover" a gfn, or was even created specifically for that gfn.
> Right, zapping of SPs with no gptes is not necessary.
> When role.direct is true, sp->gfn can even be a non-slot gfn with the leaf
> entries being MMIO sptes.  So it should be ok to ignore SPs where
> "!sp_has_gptes(_sp) && (_sp)->gfn == (_gfn)".
> 
> Tests with "normal VM + nested VM + 3 selftests" passed on these 3 configs:
> 1) modprobe kvm_intel ept=0
> 2) modprobe kvm tdp_mmu=0
>    modprobe kvm_intel ept=1
> 3) modprobe kvm tdp_mmu=1
>    modprobe kvm_intel ept=1
> 
> with quirk disabled + below change
> 
> @@ -7071,7 +7077,7 @@ static void kvm_mmu_zap_memslot_pages_and_flush(struct kvm *kvm,
>                 struct kvm_mmu_page *sp;
>                 gfn_t gfn = slot->base_gfn + i;
> 
> -               for_each_gfn_valid_sp(kvm, sp, gfn)
> +               for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn)
>                         kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
> 
>                 if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {

Ya, I have a patch that I'll send today.



