Re: [PATCH v1 04/13] KVM: x86/mmu: Factor out logic to atomically install a new page table

On Mon, Dec 13, 2021, David Matlack wrote:
> Factor out the logic to atomically replace an SPTE with an SPTE that
> points to a new page table. This will be used in a follow-up commit to
> split a large page SPTE into one level lower.
> 
> Opportunistically drop the kvm_mmu_get_page tracepoint in
> kvm_tdp_mmu_map() since it is redundant with the identical tracepoint in
> alloc_tdp_mmu_page().
> 
> Signed-off-by: David Matlack <dmatlack@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 48 +++++++++++++++++++++++++++-----------
>  1 file changed, 34 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 656ebf5b20dc..dbd07c10d11a 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -950,6 +950,36 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>  	return ret;
>  }
>  
> +/*
> + * tdp_mmu_install_sp_atomic - Atomically replace the given spte with an
> + * spte pointing to the provided page table.
> + *
> + * @kvm: kvm instance
> + * @iter: a tdp_iter instance currently on the SPTE that should be set
> + * @sp: The new TDP page table to install.
> + * @account_nx: True if this page table is being installed to split a
> + *              non-executable huge page.
> + *
> + * Returns: True if the new page table was installed. False if spte being
> + *          replaced changed, causing the atomic compare-exchange to fail.

I'd prefer to return an int with 0/-EBUSY on success/failure.  Ditto for the existing
tdp_mmu_set_spte_atomic().  Actually, if you add a prep patch to make that happen,
then this can be:

	u64 spte = make_nonleaf_spte(sp->spt, !shadow_accessed_mask);
	int ret;

	ret = tdp_mmu_set_spte_atomic(kvm, iter, spte);
	if (ret)
		return ret;

	tdp_mmu_link_page(kvm, sp, account_nx);
	return 0;
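
For reference, a rough sketch of the prep patch, i.e. the existing helper
converted to return 0/-EBUSY (completely untested, and I'm writing the current
code from memory, so details may be off; the callers' checks would need to be
inverted as well):

	static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
						  struct tdp_iter *iter,
						  u64 new_spte)
	{
		lockdep_assert_held_read(&kvm->mmu_lock);

		/*
		 * Only the thread that froze a removed SPTE may modify it, so
		 * bail if the SPTE is mid-removal.
		 */
		if (is_removed_spte(iter->old_spte))
			return -EBUSY;

		/* Lost the race to a different task changing the SPTE. */
		if (cmpxchg64(rcu_dereference(iter->sptep), iter->old_spte,
			      new_spte) != iter->old_spte)
			return -EBUSY;

		__handle_changed_spte(kvm, iter->as_id, iter->gfn,
				      iter->old_spte, new_spte, iter->level,
				      true);
		handle_changed_spte_acc_track(iter->old_spte, new_spte,
					      iter->level);
		return 0;
	}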



> + *          If this function returns false the sp will be freed before
> + *          returning.

Uh, no it's not?  The call to tdp_mmu_free_sp() is still done by kvm_tdp_mmu_map().
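
E.g. if you take the 0/-EBUSY suggestion above, the comment could read
something like:

	* Returns: 0 if the new page table was installed.  -EBUSY if the SPTE
	*          being replaced changed, causing the atomic compare-exchange
	*          to fail, in which case the caller is responsible for freeing
	*          @sp via tdp_mmu_free_sp().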

> + */
> +static bool tdp_mmu_install_sp_atomic(struct kvm *kvm,

Hmm, so this helper is the only user of tdp_mmu_link_page(), and _that_ helper
is rather tiny.  And this would also be a good opportunity to clean up the
"(un)link_page" verbiage, as the bare "page" doesn't communicate to the reader
that it's for linking shadow pages, i.e. not struct page.

So, what about folding in tdp_mmu_link_page(), naming this helper either
tdp_mmu_link_sp_atomic() or tdp_mmu_link_shadow_page_atomic(), and then renaming
tdp_mmu_unlink_page() accordingly?  And for bonus points, add a blurb in the
function comment like:

	* Note the lack of a non-atomic variant!  The TDP MMU always builds its
	* page tables while holding mmu_lock for read.
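
I.e. end up with something like this (untested, with the body of the existing
tdp_mmu_link_page() folded in as-is, and assuming the 0/-EBUSY conversion
suggested above):

	/*
	 * tdp_mmu_link_sp_atomic - Atomically replace the given SPTE with an
	 * SPTE pointing at the provided shadow page, and link the shadow page
	 * into the TDP MMU's list of pages.
	 *
	 * Note the lack of a non-atomic variant!  The TDP MMU always builds its
	 * page tables while holding mmu_lock for read.
	 */
	static int tdp_mmu_link_sp_atomic(struct kvm *kvm, struct tdp_iter *iter,
					  struct kvm_mmu_page *sp, bool account_nx)
	{
		u64 spte = make_nonleaf_spte(sp->spt, !shadow_accessed_mask);
		int ret;

		ret = tdp_mmu_set_spte_atomic(kvm, iter, spte);
		if (ret)
			return ret;

		/* Folded in from the current tdp_mmu_link_page(). */
		spin_lock(&kvm->arch.tdp_mmu_pages_lock);
		list_add(&sp->link, &kvm->arch.tdp_mmu_pages);
		if (account_nx)
			account_huge_nx_page(kvm, sp);
		spin_unlock(&kvm->arch.tdp_mmu_pages_lock);

		return 0;
	}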

> +				      struct tdp_iter *iter,
> +				      struct kvm_mmu_page *sp,
> +				      bool account_nx)
> +{
> +	u64 spte = make_nonleaf_spte(sp->spt, !shadow_accessed_mask);
> +
> +	if (!tdp_mmu_set_spte_atomic(kvm, iter, spte))
> +		return false;
> +
> +	tdp_mmu_link_page(kvm, sp, account_nx);
> +
> +	return true;
> +}
> +
>  /*
>   * Handle a TDP page fault (NPT/EPT violation/misconfiguration) by installing
>   * page tables and SPTEs to translate the faulting guest physical address.
> @@ -959,8 +989,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	struct kvm_mmu *mmu = vcpu->arch.mmu;
>  	struct tdp_iter iter;
>  	struct kvm_mmu_page *sp;
> -	u64 *child_pt;
> -	u64 new_spte;
>  	int ret;
>  
>  	kvm_mmu_hugepage_adjust(vcpu, fault);
> @@ -996,6 +1024,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  		}
>  
>  		if (!is_shadow_present_pte(iter.old_spte)) {
> +			bool account_nx = fault->huge_page_disallowed &&
> +					  fault->req_level >= iter.level;
> +
>  			/*
>  			 * If SPTE has been frozen by another thread, just
>  			 * give up and retry, avoiding unnecessary page table
> @@ -1005,18 +1036,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  				break;
>  
>  			sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1);
> -			child_pt = sp->spt;
> -
> -			new_spte = make_nonleaf_spte(child_pt,
> -						     !shadow_accessed_mask);
> -
> -			if (tdp_mmu_set_spte_atomic(vcpu->kvm, &iter, new_spte)) {
> -				tdp_mmu_link_page(vcpu->kvm, sp,
> -						  fault->huge_page_disallowed &&
> -						  fault->req_level >= iter.level);
> -
> -				trace_kvm_mmu_get_page(sp, true);
> -			} else {
> +			if (!tdp_mmu_install_sp_atomic(vcpu->kvm, &iter, sp, account_nx)) {
>  				tdp_mmu_free_sp(sp);
>  				break;
>  			}
> -- 
> 2.34.1.173.g76aa8bc2d0-goog
> 


