Re: [PATCH v7 040/102] KVM: x86/mmu: Zap only leaf SPTEs for deleted/moved memslot for private mmu

On Mon, 2022-06-27 at 14:53 -0700, isaku.yamahata@xxxxxxxxx wrote:
> From: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> 
> For kvm mmu that has shared bit mask, zap only leaf SPTEs when
> deleting/moving a memslot.  The existing kvm_mmu_zap_memslot() depends on

Unless I am mistaken, there is no 'existing' kvm_mmu_zap_memslot() -- this
patch is what introduces it.

> role.invalid with read lock of mmu_lock so that other vcpus can operate on
> kvm mmu concurrently. 
> 

> Mark the root page table invalid, unlink it from page
> table pointer of CPU, process the page table.  
> 

Are you talking about the behaviour of the existing code, or the change you
are about to make?  It reads as if you mean the latter, but I believe this
describes the former (the current fast-zap path).
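
For reference, my understanding of the existing fast-zap path (a simplified
sketch from memory, not the exact upstream code):

	/* Roughly what kvm_mmu_zap_all_fast() does today: */
	write_lock(&kvm->mmu_lock);

	/* Bump the generation so all existing shadow pages become obsolete. */
	kvm->arch.mmu_valid_gen = kvm->arch.mmu_valid_gen ? 0 : 1;

	/* Mark TDP MMU roots invalid; vCPUs load fresh roots on next entry. */
	if (is_tdp_mmu_enabled(kvm))
		kvm_tdp_mmu_invalidate_all_roots(kvm);

	kvm_zap_obsolete_pages(kvm);
	write_unlock(&kvm->mmu_lock);

	/*
	 * The invalidated roots are then zapped with mmu_lock held for read,
	 * which is the "read lock ... other vcpus can operate concurrently"
	 * part of the changelog.
	 */
	if (is_tdp_mmu_enabled(kvm)) {
		read_lock(&kvm->mmu_lock);
		kvm_tdp_mmu_zap_invalidated_roots(kvm);
		read_unlock(&kvm->mmu_lock);
	}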

> It doesn't work for private
> page table to unlink the root page table because it requires all SPTE entries
> to be non-present.  
> 

I don't think we can truly *unlink* the private root page table from the
secure EPTP, right?  The EPTP (root table) is fixed (and hidden) for the TD's
entire runtime.

I guess you are trying to say: removing/unlinking one secure-EPT page requires
removing/unlinking all of its children first?

So the reason to zap only leaf SPTEs is that we cannot truly unlink the
private root page table, correct?  Sorry, your changelog is not obvious to me.
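
To spell out my mental model (pseudo-C; the helper name is made up, but
is_shadow_present_pte() and sp->spt are the real KVM ones):

	/*
	 * A secure-EPT page can only be reclaimed from the TDX module once
	 * every one of its 512 entries is non-present, i.e. freeing a
	 * non-leaf page means first zapping the entire subtree under it.
	 */
	static bool private_sp_can_be_freed(struct kvm_mmu_page *sp)
	{
		int i;

		for (i = 0; i < 512; i++)
			if (is_shadow_present_pte(sp->spt[i]))
				return false;
		return true;
	}

And since the root itself can never be swapped out from under a running TD,
the "mark root invalid and let vcpus reload" trick is off the table, leaving
leaf-only zapping as the cheap option.  Is that the intent?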

> Instead, take the write lock of mmu_lock and zap only leaf
> SPTEs for kvm mmu with shared bit mask.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c | 35 ++++++++++++++++++++++++++++++++++-
>  1 file changed, 34 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 80d7c7709af3..c517c7bca105 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5854,11 +5854,44 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
>  	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
>  }
>  
> +static void kvm_mmu_zap_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
> +{
> +	bool flush = false;
> +
> +	write_lock(&kvm->mmu_lock);
> +
> +	/*
> +	 * Zapping non-leaf SPTEs, a.k.a. not-last SPTEs, isn't required, worst
> +	 * case scenario we'll have unused shadow pages lying around until they
> +	 * are recycled due to age or when the VM is destroyed.
> +	 */
> +	if (is_tdp_mmu_enabled(kvm)) {
> +		struct kvm_gfn_range range = {
> +		      .slot = slot,
> +		      .start = slot->base_gfn,
> +		      .end = slot->base_gfn + slot->npages,
> +		      .may_block = false,
> +		};
> +
> +		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);


It appears you only unmap private GFNs (because base_gfn doesn't have the
shared bit set)?  I think shared mappings in this slot must be zapped too.

How is that done?  Or does kvm_tdp_mmu_unmap_gfn_range() also zap shared
mappings?

It's hard to review if one patch's behaviour/logic depends on further patches.
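
Something like the below is what I would naively have expected here (just a
sketch; I'm assuming kvm_gfn_shared_mask() returns the shared-bit GFN mask,
matching its use later in this patch):

	struct kvm_gfn_range range = {
		.slot = slot,
		.start = slot->base_gfn,
		.end = slot->base_gfn + slot->npages,
		.may_block = false,
	};

	/* Zap the private alias (no shared bit in base_gfn)... */
	flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);

	/* ... and then the shared alias of the same slot. */
	range.start = slot->base_gfn | kvm_gfn_shared_mask(kvm);
	range.end = range.start + slot->npages;
	flush = kvm_tdp_mmu_unmap_gfn_range(kvm, &range, flush);

If kvm_tdp_mmu_unmap_gfn_range() already walks both aliases internally, a
comment saying so would help.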

> +	} else {
> +		flush = slot_handle_level(kvm, slot, kvm_zap_rmapp, PG_LEVEL_4K,
> +					  KVM_MAX_HUGEPAGE_LEVEL, true);
> +	}
> +	if (flush)
> +		kvm_flush_remote_tlbs(kvm);
> +
> +	write_unlock(&kvm->mmu_lock);
> +}
> +
>  static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
>  			struct kvm_memory_slot *slot,
>  			struct kvm_page_track_notifier_node *node)
>  {
> -	kvm_mmu_zap_all_fast(kvm);
> +	if (kvm_gfn_shared_mask(kvm))
> +		kvm_mmu_zap_memslot(kvm, slot);
> +	else
> +		kvm_mmu_zap_all_fast(kvm);
>  }
>  
>  int kvm_mmu_init_vm(struct kvm *kvm)




