Re: [PATCH v3 5/8] KVM: x86/mmu: Set disallowed_nx_huge_page in TDP MMU before setting SPTE

On 8/9/22 16:44, Sean Christopherson wrote:
> On Tue, Aug 09, 2022, Paolo Bonzini wrote:
>> On 8/9/22 05:26, Yan Zhao wrote:
>>> hi Sean,
>>>
>>> I understand this smp_rmb() is intended to prevent the reading of
>>> sp->nx_huge_page_disallowed from happening before it's set to true in
>>> kvm_tdp_mmu_map(). Is this understanding right?
>>>
>>> If it's true, then do we also need the smp_rmb() for the read of sp->gfn in
>>> handle_removed_pt()? (or maybe for other fields in sp in other places?)
>>
>> No, in that case the barrier is provided by rcu_dereference().  In fact, I
>> am not sure the barriers are needed in this patch either (but the comments
>> are :)):

> Yeah, I'm 99% certain the barriers aren't strictly required, but I didn't love the
> idea of depending on other implementation details for the barriers.  Of course I
> completely overlooked the fact that all other sp fields would need the same
> barriers...
>
>> - the write barrier is certainly not needed because it is implicit in
>>   tdp_mmu_set_spte_atomic's cmpxchg64
>>
>> - the read barrier _should_ also be provided by rcu_dereference(pt), but I'm
>>   not 100% sure about that.  The reasoning is that you have
>>
>> (1)	iter->old_spte = READ_ONCE(*rcu_dereference(iter->sptep));
>> 	...
>> (2)	tdp_ptep_t pt = spte_to_child_pt(old_spte, level);
>> (3)	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
>> 	...
>> (4)	if (sp->nx_huge_page_disallowed) {
>>
>> and (4) is definitely ordered after (1) thanks to the READ_ONCE hidden
>> within (3) and the data dependency from old_spte to sp.
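
[Aside: below is a minimal, self-contained sketch of the publish/consume pattern being argued about here.  Every name in it is invented for illustration (it is not the actual TDP MMU code), and rcu_assign_pointer() merely stands in for the fully ordered cmpxchg64 in tdp_mmu_set_spte_atomic().  The point is that the reader gets its ordering from the READ_ONCE() inside rcu_dereference() plus the address dependency, not from an explicit smp_rmb().]

#include <linux/rcupdate.h>	/* rcu_read_lock(), rcu_dereference(), rcu_assign_pointer() */
#include <linux/slab.h>		/* kzalloc() */

/* Hypothetical stand-in for struct kvm_mmu_page. */
struct demo_sp {
	bool nx_huge_page_disallowed;
};

/* Plays the role of the SPTE that points at the child shadow page. */
static struct demo_sp __rcu *demo_slot;

/*
 * Writer: initialize the field, then publish the pointer.  In the TDP MMU the
 * publish step is a successful cmpxchg64, which is fully ordered, so the
 * earlier plain store cannot be observed after the new SPTE becomes visible.
 */
static void demo_writer(void)
{
	struct demo_sp *sp = kzalloc(sizeof(*sp), GFP_KERNEL);

	if (!sp)
		return;
	sp->nx_huge_page_disallowed = true;
	rcu_assign_pointer(demo_slot, sp);	/* stands in for the cmpxchg64 */
}

/*
 * Reader: the READ_ONCE() inside rcu_dereference() and the address dependency
 * from the loaded pointer to the field read order step (4) after step (1)
 * above, with no explicit smp_rmb().
 */
static bool demo_reader(void)
{
	struct demo_sp *sp;
	bool disallowed = false;

	rcu_read_lock();
	sp = rcu_dereference(demo_slot);
	if (sp)
		disallowed = sp->nx_huge_page_disallowed;
	rcu_read_unlock();

	return disallowed;
}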

> Yes, I think that's correct.  Callers must verify the SPTE is present before getting
> the associated child shadow page.  KVM does have instances where a shadow page is
> retrieved from the SPTE _pointer_, but that's the parent shadow page, i.e. isn't
> guarded by the SPTE being present.
>
> 	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(iter->sptep));
>
> Something like this as a separate patch?

Would you resubmit without the memory barriers then?

> diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
> index f0af385c56e0..9d982ccf4567 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.h
> +++ b/arch/x86/kvm/mmu/tdp_iter.h
> @@ -13,6 +13,12 @@
>   * to be zapped while holding mmu_lock for read, and to allow TLB flushes to be
>   * batched without having to collect the list of zapped SPs.  Flows that can
>   * remove SPs must service pending TLB flushes prior to dropping RCU protection.
> + *
> + * The READ_ONCE() ensures that, if the SPTE points at a child shadow page, all
> + * fields in struct kvm_mmu_page will be read after the caller observes the
> + * present SPTE (KVM must check that the SPTE is present before following the
> + * SPTE's pfn to its associated shadow page).  Pairs with the implicit memory

I guess you mean both the shadow page table itself and the struct kvm_mmu_page? Or do you think to_shadow_page() should have a smp_rmb()?
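
[If it helps to picture that last alternative: a purely hypothetical conversion helper with the barrier folded in could look like the sketch below.  The names are made up (this is not the real to_shadow_page()/sptep_to_sp()); only the placement of the smp_rmb() is the point.]

#include <asm/barrier.h>	/* smp_rmb() */
#include <linux/types.h>	/* u64 */

struct kvm_mmu_page;				/* opaque for this sketch */
struct kvm_mmu_page *demo_lookup_sp(u64 spte);	/* hypothetical SPTE -> sp lookup */

/* Hypothetical variant of the SPTE-to-shadow-page conversion. */
static struct kvm_mmu_page *demo_spte_to_sp(u64 spte)
{
	struct kvm_mmu_page *sp = demo_lookup_sp(spte);	/* translation details elided */

	/*
	 * Order every subsequent read of sp's fields after the load of the
	 * SPTE, without leaning on the address dependency discussed above.
	 */
	smp_rmb();
	return sp;
}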

Paolo
