Re: [PATCH] KVM: Move gfn_to_pfn_cache invalidation to invalidate_range_end hook

On Tue, 2024-08-06 at 07:03 -0700, Sean Christopherson wrote:
> On Tue, Aug 06, 2024, David Woodhouse wrote:
> > On Mon, 2024-08-05 at 17:45 -0700, Sean Christopherson wrote:
> > > On Mon, Aug 05, 2024, David Woodhouse wrote:
> > > > From: David Woodhouse <dwmw@xxxxxxxxxxxx>
> > > Servicing guest page faults has the same problem, which is why
> > > mmu_invalidate_retry_gfn() was added.  Supporting hva-only GPCs made our lives a
> > > little harder, but not horrifically so (there are ordering differences regardless).
> > > 
> > > Woefully incomplete, but I think this is the gist of what you want:
> > 
> > Hm, maybe. It does mean that migration occurring all through memory
> > (indeed, even just one invalidation at the top and one at the bottom
> > of the guest memory space) would perturb GPCs which remain present.
> 
> If that happens with a real world VMM, and it's not a blatant VMM goof, then we
> can fix KVM.  The stage-2 page fault path hammers the mmu_notifier retry logic
> far more than GPCs, so if a range-based check is inadequate for some use case,
> then we definitely need to fix both.
> 
> In short, I don't see any reason to invent something different for GPCs.
> 
> > > > @@ -849,6 +837,8 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> > > >         wake = !kvm->mn_active_invalidate_count;
> > > >         spin_unlock(&kvm->mn_invalidate_lock);
> > > >  
> > > > +       gfn_to_pfn_cache_invalidate(kvm, range->start, range->end);
> > > 
> > > We can't do this.  The contract with mmu_notifiers is that secondary MMUs must
> > > unmap the hva before returning from invalidate_range_start(), and must not create
> > > new mappings until invalidate_range_end().

Looking at that assertion harder... where is that rule written? It
seems counter-intuitive to me; that isn't how TLBs work. Another CPU
can populate a TLB entry right up to the moment the PTE is actually
*changed* in the page tables, and then the CPU which is
modifying/zapping the PTE needs to perform a remote TLB flush. That
remote TLB flush is analogous to the invalidate_range_end() call,
surely?

I'm fairly sure that's how it works for PASID support too; nothing
prevents the IOMMU+device from populating an IOTLB entry until the PTE
is actually changed in the process page tables.

So why can't we do the same for the GPC?
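To spell out the analogy (an illustrative sketch only, using the
generic MM helpers; nothing GPC-specific here):

	/*
	 * Normal TLB shootdown: another CPU may populate a TLB entry from
	 * the old PTE right up until the moment it changes. Only *after*
	 * the change does the zapper need to flush remote TLBs.
	 */
	pte = ptep_get_and_clear(mm, addr, ptep);	/* PTE actually changes */
	flush_tlb_range(vma, addr, addr + PAGE_SIZE);	/* cf. invalidate_range_end() */

On that model, populating a mapping during an invalidation is harmless,
as long as ->valid can't become set once the invalidation has begun.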

> > But in the context of the GPC, it is only "mapped" when the ->valid bit is set. 
> > 
> > Even the invalidation callback just clears the valid bit, and that
> > means nobody is allowed to dereference the ->khva any more. It doesn't
> > matter that the underlying (stale) PFN is still kmapped.
> > 
> > Can we not apply the same logic to the hva_to_pfn_retry() loop? Yes, it
> > might kmap a page that gets removed, but it's not actually created a
> > new mapping if it hasn't set the ->valid bit.
> > 
> > I don't think this version quite meets the constraints, and I might
> > need to hook *both* the start and end notifiers, and might not like it
> > once I get there. But I'll have a go...
> 
> I'm pretty sure you're going to need the range-based retry logic.  KVM can't
> safely set gpc->valid until mn_active_invalidate_count reaches zero, so if a GPC
> refresh comes along after mn_active_invalidate_count has been elevated, it won't
> be able to set gpc->valid until the MADV_DONTNEED storm goes away.  Without
> range-based tracking, there's no way to know if a previous invalidation was
> relevant to the GPC.

If it is indeed the case that KVM can't just behave like a normal TLB,
and so can't set gpc->valid until mn_active_invalidate_count reaches
zero, it still only needs to *wait* (or spin, maybe). It certainly
doesn't need to keep looping and remapping the same PFN over and over
again, as it does at the moment.

When mn_active_invalidate_count does reach zero, either the young GPC
will have been invalidated by clearing the (to be renamed) ->validating
flag, or it won't have been. If it *has* been invalidated, that's when
hva_to_pfn_retry() needs to go one more time round its full loop.

So it just needs to wait until any pending (relevant) invalidations
have completed, *then* check and potentially loop once more.
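In code, something of this shape (hand-waving sketch, not a patch:
wait_for_invalidations() is a made-up placeholder for whatever wait
primitive we settle on, and the hva_to_pfn() arguments are trimmed):

	/* Sketched tail of hva_to_pfn_retry(): map once, then wait. */
retry:
	write_lock_irq(&gpc->lock);
	gpc->validating = true;
	write_unlock_irq(&gpc->lock);

	new_pfn = hva_to_pfn(gpc->uhva);	/* may race with invalidation */
	new_khva = gpc_map(new_pfn);

	/* Wait out any pending invalidation instead of remapping. */
	wait_for_invalidations(kvm);		/* placeholder */

	write_lock_irq(&gpc->lock);
	if (!gpc->validating) {
		/* An invalidation hit us; go round the loop once more. */
		write_unlock_irq(&gpc->lock);
		gpc_unmap(new_pfn, new_khva);
		goto retry;
	}
	gpc->valid = true;
	write_unlock_irq(&gpc->lock);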

And yes, making that *wait* range-based does make some sense, I
suppose. It becomes "wait for gpc->uhva not to be within the range of
kvm->mmu_gpc_invalidate_range_{start,end}."

Except... that range can never shrink *except* when
mn_active_invalidate_count becomes zero, can it? So if we do end up
waiting, the wake condition is *still* just that the count has become
zero. There's already a wakeup in that case, on
kvm->mn_memslots_update_rcuwait. Can I wait on that?
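
Concretely, something like this in the refresh path, before setting
gpc->valid (untested sketch: mmu_gpc_invalidate_range_{start,end} are
the fields mooted above, this runs without gpc->lock held, and rcuwait
only accommodates a single waiter, which may be the deal-breaker):

	/*
	 * If the GPC's uhva is covered by an in-flight invalidation,
	 * sleep until mn_active_invalidate_count drains to zero, reusing
	 * the wakeup that kvm_mmu_notifier_invalidate_range_end()
	 * already issues (as kvm_swap_active_memslots() does today).
	 */
	if (gpc->uhva >= kvm->mmu_gpc_invalidate_range_start &&
	    gpc->uhva < kvm->mmu_gpc_invalidate_range_end)
		rcuwait_wait_event(&kvm->mn_memslots_update_rcuwait,
				   !READ_ONCE(kvm->mn_active_invalidate_count),
				   TASK_UNINTERRUPTIBLE);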
