Re: [PATCH v2 07/20] kvm: x86/mmu: Support zapping SPTEs in the TDP MMU

On Mon, Oct 19, 2020 at 1:50 PM Edgecombe, Rick P
<rick.p.edgecombe@xxxxxxxxx> wrote:
>
> On Wed, 2020-10-14 at 11:26 -0700, Ben Gardon wrote:
> > @@ -5827,6 +5831,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
> >         struct kvm_memslots *slots;
> >         struct kvm_memory_slot *memslot;
> >         int i;
> > +       bool flush;
> >
> >         spin_lock(&kvm->mmu_lock);
> >         for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
> > @@ -5846,6 +5851,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
> >                 }
> >         }
> >
> > +       if (kvm->arch.tdp_mmu_enabled) {
> > +               flush = kvm_tdp_mmu_zap_gfn_range(kvm, gfn_start, gfn_end);
> > +               if (flush)
> > +                       kvm_flush_remote_tlbs(kvm);
> > +       }
> > +
> >         spin_unlock(&kvm->mmu_lock);
> >  }
>
> Hi,
>
> I'm just going through this looking at how I might integrate some other
> MMU changes I had been working on. But as long as I am, I'll toss out
> an extremely small comment that the "flush" bool seems unnecessary.

I agree this could easily be replaced with:
if (kvm_tdp_mmu_zap_gfn_range(kvm, gfn_start, gfn_end))
        kvm_flush_remote_tlbs(kvm);

I like the flush variable because I think it makes the code a little more
self-explanatory, but I agree either version is perfectly good.

>
> I'm also wondering a bit about this function in general. It seems that
> this change adds an extra flush in the nested case, but this operation
> already flushed for each memslot in order to facilitate the spin break.
> If slot_handle_level_range() took some extra parameters it could maybe
> be avoided. Not sure if it's worth it.

I agree, there's a lot of room for optimization here to reduce the
number of TLB flushes. In this series I haven't been too concerned
with optimizing performance; I wanted the code to be easy to review
and to minimize the number of bugs.

Future patch series will optimize the TDP MMU and make it actually
performant. Two specific changes I have planned to reduce the number
of TLB flushes are (1) a deferred TLB flush scheme using the existing
vm-global tlbs_dirty count, and (2) a system for skipping the "legacy
MMU" handlers for various operations when the TDP MMU is enabled and
the "legacy MMU" has not been used on that VM. I believe both of these
are present in the original RFC I sent out a year ago, if you're
interested. I'll CC you on those future optimizations.
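
To make (2) a little more concrete, here is a rough sketch of how
kvm_zap_gfn_range() could look with such a check. Note that
kvm_legacy_mmu_in_use() and kvm_zap_gfn_range_legacy() are made-up
names standing in for "the legacy MMU has been used on this VM" and
the existing per-memslot rmap walk, so treat this as an illustration
of the idea rather than the planned implementation:

	/*
	 * Sketch only: skip the legacy MMU walk entirely when only the
	 * TDP MMU has ever created mappings for this VM, and make the
	 * flush decision once, after both zap paths have run.
	 */
	void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
	{
		bool flush = false;

		spin_lock(&kvm->mmu_lock);

		/* Hypothetical helper wrapping the existing rmap walk. */
		if (kvm_legacy_mmu_in_use(kvm))
			flush |= kvm_zap_gfn_range_legacy(kvm, gfn_start, gfn_end);

		/* The TDP MMU zap reports whether a TLB flush is needed. */
		if (kvm->arch.tdp_mmu_enabled)
			flush |= kvm_tdp_mmu_zap_gfn_range(kvm, gfn_start, gfn_end);

		if (flush)
			kvm_flush_remote_tlbs(kvm);

		spin_unlock(&kvm->mmu_lock);
	}

That shape would also help with the extra flush you pointed out, since
a single flush at the end could cover both the legacy and TDP MMU zaps.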

>
> Rick


