Re: [PATCH 2/2] KVM: x86/mmu: Drop 'shared' param from tdp_mmu_link_page()

On Tue, Aug 10, 2021 at 3:46 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> Drop @shared from tdp_mmu_link_page() and hardcode it to work for
> mmu_lock being held for read.  The helper has exactly one caller and
> in all likelihood will only ever have exactly one caller.  Even if KVM
> adds a path to install translations without an initiating page fault,
> odds are very, very good that the path will just be a wrapper to the
> "page fault" handler (both SNP and TDX RFCs propose patches to do
> exactly that).
>
> No functional change intended.
>
> Cc: Ben Gardon <bgardon@xxxxxxxxxx>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>

Reviewed-by: Ben Gardon <bgardon@xxxxxxxxxx>

Nice cleanup!

> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 17 ++++-------------
>  1 file changed, 4 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index d99e064d366f..c5b901744d15 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -257,26 +257,17 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
>   *
>   * @kvm: kvm instance
>   * @sp: the new page
> - * @shared: This operation may not be running under the exclusive use of
> - *         the MMU lock and the operation must synchronize with other
> - *         threads that might be adding or removing pages.
>   * @account_nx: This page replaces a NX large page and should be marked for
>   *             eventual reclaim.
>   */
>  static void tdp_mmu_link_page(struct kvm *kvm, struct kvm_mmu_page *sp,
> -                             bool shared, bool account_nx)
> +                             bool account_nx)
>  {
> -       if (shared)
> -               spin_lock(&kvm->arch.tdp_mmu_pages_lock);
> -       else
> -               lockdep_assert_held_write(&kvm->mmu_lock);
> -
> +       spin_lock(&kvm->arch.tdp_mmu_pages_lock);
>         list_add(&sp->link, &kvm->arch.tdp_mmu_pages);
>         if (account_nx)
>                 account_huge_nx_page(kvm, sp);
> -
> -       if (shared)
> -               spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
> +       spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
>  }
>
>  /**
> @@ -1062,7 +1053,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>                                                      !shadow_accessed_mask);
>
>                         if (tdp_mmu_set_spte_atomic_no_dirty_log(vcpu->kvm, &iter, new_spte)) {
> -                               tdp_mmu_link_page(vcpu->kvm, sp, true,
> +                               tdp_mmu_link_page(vcpu->kvm, sp,
>                                                   huge_page_disallowed &&
>                                                   req_level >= iter.level);
>
> --
> 2.32.0.605.g8dce9f2422-goog
>


