Re: [PATCH v2 07/20] kvm: x86/mmu: Support zapping SPTEs in the TDP MMU

On Wed, 2020-10-14 at 11:26 -0700, Ben Gardon wrote:
> @@ -5827,6 +5831,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t
> gfn_start, gfn_t gfn_end)
>         struct kvm_memslots *slots;
>         struct kvm_memory_slot *memslot;
>         int i;
> +       bool flush;
>  
>         spin_lock(&kvm->mmu_lock);
>         for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
> @@ -5846,6 +5851,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t
> gfn_start, gfn_t gfn_end)
>                 }
>         }
>  
> +       if (kvm->arch.tdp_mmu_enabled) {
> +               flush = kvm_tdp_mmu_zap_gfn_range(kvm, gfn_start,
> gfn_end);
> +               if (flush)
> +                       kvm_flush_remote_tlbs(kvm);
> +       }
> +
>         spin_unlock(&kvm->mmu_lock);
>  }

Hi,

I'm just going through this to see how I might integrate some other
MMU changes I've been working on. While I'm at it, one very small
comment: the "flush" bool seems unnecessary.
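For example (untested sketch), the helper's return value could be used
directly in the condition and the local dropped:

	if (kvm->arch.tdp_mmu_enabled &&
	    kvm_tdp_mmu_zap_gfn_range(kvm, gfn_start, gfn_end))
		kvm_flush_remote_tlbs(kvm);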

I'm also wondering a bit about this function in general. It seems that
this change adds an extra flush in the nested case, but this operation
already flushes once per memslot to facilitate the spin-lock break. If
slot_handle_level_range() took some extra parameters, the extra flush
could maybe be avoided. Not sure if it's worth it.
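Concretely, one (hypothetical, untested) shape for that would be to
have slot_handle_level_range() accumulate and return whether a flush
is still pending, instead of flushing before each lock break, so the
caller could do a single combined flush at the end:

	flush |= slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
					 ..., flush);
	...
	if (kvm->arch.tdp_mmu_enabled)
		flush |= kvm_tdp_mmu_zap_gfn_range(kvm, gfn_start, gfn_end);
	if (flush)
		kvm_flush_remote_tlbs(kvm);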

Rick
