Re: [PATCH v2 4/6] KVM: x86/mmu: fast_page_fault support for the TDP MMU

On Mon, Jul 12, 2021 at 09:03:11PM +0000, Sean Christopherson wrote:
> On Mon, Jul 12, 2021, Ben Gardon wrote:
> > > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > > index c6fa8d00bf9f..2c9e0ed71fa0 100644
> > > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > > @@ -527,6 +527,10 @@ static inline bool tdp_mmu_set_spte_atomic_no_dirty_log(struct kvm *kvm,
> > >         if (is_removed_spte(iter->old_spte))
> > >                 return false;
> > >
> > > +       /*
> > > +        * TDP MMU sptes can also be concurrently cmpxchg'd in
> > > +        * fast_pf_fix_direct_spte as part of fast_page_fault.
> > > +        */
> 
> The cmpxchg64 part isn't what's interesting, it's just the means to the end.
> Maybe reword slightly to focus on modifying SPTEs without holding mmu_lock, e.g.
> 
> 	/*
> 	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs outside
> 	 * of mmu_lock.
> 	 */

Good point about cmpxchg. I'll use your comment in v3.
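For anyone following along, both of these paths update the SPTE with the
same lockless compare-and-exchange pattern, so whichever side loses the
race simply bails or retries. A rough sketch (not the exact KVM code;
make_new_spte() is a hypothetical stand-in for whatever computes the new
value):

	/* Sketch of the lockless SPTE update pattern, for illustration only: */
	old_spte = READ_ONCE(*sptep);		/* snapshot the current SPTE */
	new_spte = make_new_spte(old_spte);	/* hypothetical helper */
	if (cmpxchg64(sptep, old_spte, new_spte) != old_spte)
		return false;			/* lost the race; caller retries or bails */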

> 
> > >         if (cmpxchg64(rcu_dereference(iter->sptep), iter->old_spte,
> > >                       new_spte) != iter->old_spte)
> > >                 return false;
> > 
> > I'm a little nervous about not going through the handle_changed_spte
> > flow for the TDP MMU, but as things are now, I think it's safe.
> 
> Ya, it would be nice to flow through the TDP MMU proper as we could also "restore"
> __rcu.  That said, the fast #PF fix flow is unique and specific enough that I don't
> think it's worth going out of our way to force the issue.
> 
> > > @@ -1546,3 +1550,35 @@ int kvm_tdp_mmu_get_walk_lockless(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
> > >
> > >         return leaf;
> > >  }
> > > +
> > > +/*
> > > + * Must be called between kvm_tdp_mmu_walk_shadow_page_lockless_{begin,end}.
> > > + *
> > > + * The returned sptep must not be used after
> > > + * kvm_tdp_mmu_walk_shadow_page_lockless_end.
> > > + */
> > > +u64 *kvm_tdp_mmu_get_last_sptep_lockless(struct kvm_vcpu *vcpu, u64 addr,
> > > +                                        u64 *spte)
> > > +{
> > > +       struct tdp_iter iter;
> > > +       struct kvm_mmu *mmu = vcpu->arch.mmu;
> > > +       gfn_t gfn = addr >> PAGE_SHIFT;
> > > +       tdp_ptep_t sptep = NULL;
> > > +
> > > +       tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
> > > +               *spte = iter.old_spte;
> > > +               sptep = iter.sptep;
> > > +       }
> > > +
> > > +       if (sptep)
> 
> This check is unnecessary, even when using rcu_dereference.

Ack. Will fix.

> 
> > > +               /*
> > > +                * Perform the rcu dereference here since we are passing the
> > > +                * sptep up to the generic MMU code which does not know the
> > > +                * synchronization details of the TDP MMU. This is safe as long
> > > +                * as the caller obeys the contract that the sptep is not used
> > > +                * after kvm_tdp_mmu_walk_shadow_page_lockless_end.
> > > +                */
> > 
> > There's a little more to this contract:
> > 1. The caller should only modify the SPTE using an atomic cmpxchg with
> > the returned spte value.
> > 2. The caller should not modify the mapped PFN or present <-> not
> > present state of the SPTE.
> > 3. There are other bits the caller can't modify too. (lpage, mt, etc.)
> > 
> > If the comments on this function don't document all the constraints on
> > how the returned sptep can be used, it might be safer to specify that
> > this is only meant to be used as part of the fast page fault handler.
> 
> Or maybe a less specific, but more scary comment?
> 
> > 
> > > +               return rcu_dereference(sptep);
> 
> I still vote to use "(__force u64 *)" instead of rcu_dereference() to make it
> clear we're cheating in order to share code with the legacy MMU.

Some downsides I see of using __force are:

 - The implementation of rcu_dereference() is non-trivial. I'm not sure
   how much of it we have to re-implement here. For example, should we
   use READ_ONCE() in addition to the type cast?

 - rcu_dereference() checks that the RCU read lock is held and also calls
   rcu_check_sparse(), which seem like useful debugging checks we'd miss
   out on (rough sketch below).
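
Roughly speaking (a from-memory sketch, not the exact kernel macro), what
rcu_dereference(sptep) buys us over a bare "(__force u64 *)sptep" is
something like:

	/* Approximation of rcu_dereference(sptep), for illustration only: */
	RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
			 "suspicious rcu_dereference() usage");
	rcu_check_sparse(sptep, __rcu);	/* sparse check of the __rcu annotation */
	return (u64 *)READ_ONCE(sptep);	/* ordered load instead of a plain cast */

whereas the __force cast compiles down to nothing at all.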

I think a big comment should be sufficient to draw the reader's eye and
explain [the extent to which :)] we are cheating.

> 
> 	/*
> 	 * Squash the __rcu annotation, the legacy MMU doesn't rely on RCU to
> 	 * protect its page tables and so the common MMU code doesn't preserve
> 	 * the annotation.
> 	 *
> 	 * It goes without saying, but the caller must honor all TDP MMU
> 	 * contracts for accessing/modifying SPTEs outside of mmu_lock.
> 	 */
> 	return (__force u64 *)sptep;
> 	
> > > +       return NULL;
> > > +}
> > > diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> > > index e9dde5f9c0ef..508a23bdf7da 100644
> > > --- a/arch/x86/kvm/mmu/tdp_mmu.h
> > > +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> > > @@ -81,6 +81,8 @@ void kvm_tdp_mmu_walk_lockless_begin(void);
> > >  void kvm_tdp_mmu_walk_lockless_end(void);
> > >  int kvm_tdp_mmu_get_walk_lockless(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
> > >                                   int *root_level);
> > > +u64 *kvm_tdp_mmu_get_last_sptep_lockless(struct kvm_vcpu *vcpu, u64 addr,
> > > +                                        u64 *spte);
> > >
> > >  #ifdef CONFIG_X86_64
> > >  bool kvm_mmu_init_tdp_mmu(struct kvm *kvm);
> > > --
> > > 2.32.0.93.g670b81a890-goog
> > >


