On Tue, Aug 06, 2024, David Woodhouse wrote:
> On Mon, 2024-08-05 at 17:45 -0700, Sean Christopherson wrote:
> > On Mon, Aug 05, 2024, David Woodhouse wrote:
> > > From: David Woodhouse <dwmw@xxxxxxxxxxxx>
> >
> > Servicing guest page faults has the same problem, which is why
> > mmu_invalidate_retry_gfn() was added. Supporting hva-only GPCs made our lives a
> > little harder, but not horrifically so (there are ordering differences regardless).
> >
> > Woefully incomplete, but I think this is the gist of what you want:
>
> Hm, maybe. It does mean that migration occurring all through memory
> (indeed, just one at top and bottom of guest memory space) would
> perturb GPCs which remain present.

If that happens with a real-world VMM, and it's not a blatant VMM goof, then
we can fix KVM. The stage-2 page fault path hammers the mmu_notifier retry
logic far more than GPCs do, so if a range-based check is inadequate for some
use case, then we definitely need to fix both. In short, I don't see any
reason to invent something different for GPCs.

> > > @@ -849,6 +837,8 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> > >  	wake = !kvm->mn_active_invalidate_count;
> > >  	spin_unlock(&kvm->mn_invalidate_lock);
> > >  
> > > +	gfn_to_pfn_cache_invalidate(kvm, range->start, range->end);
> >
> > We can't do this. The contract with mmu_notifiers is that secondary MMUs must
> > unmap the hva before returning from invalidate_range_start(), and must not create
> > new mappings until invalidate_range_end().
>
> But in the context of the GPC, it is only "mapped" when the ->valid bit is set.
>
> Even the invalidation callback just clears the valid bit, and that
> means nobody is allowed to dereference the ->khva any more. It doesn't
> matter that the underlying (stale) PFN is still kmapped.
>
> Can we not apply the same logic to the hva_to_pfn_retry() loop? Yes, it
> might kmap a page that gets removed, but it hasn't actually created a
> new mapping if it hasn't set the ->valid bit.
>
> I don't think this version quite meets the constraints, and I might
> need to hook *both* the start and end notifiers, and might not like it
> once I get there. But I'll have a go...

I'm pretty sure you're going to need the range-based retry logic. KVM can't
safely set gpc->valid until mn_active_invalidate_count reaches zero, so if a
GPC refresh comes along after mn_active_invalidate_count has been elevated, it
won't be able to set gpc->valid until the MADV_DONTNEED storm goes away.
Without range-based tracking, there's no way to know whether a previous
invalidation was relevant to the GPC.
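
For the record, the shape I have in mind is the same as the stage-2 fault
path's retry: snapshot the invalidation sequence, do the hva=>pfn lookup with
the lock dropped, and refuse to mark the cache valid if an overlapping
invalidation is in flight or landed in the meantime. Completely untested
sketch below; I'm borrowing the mmu_invalidate_* names from the existing
bookkeeping purely for illustration, and gpc_resolve_pfn() is a made-up
stand-in for the real hva=>pfn lookup, not an actual helper.

/*
 * Untested sketch.  The mmu_invalidate_* fields mirror the bookkeeping the
 * stage-2 fault path already relies on; gpc_resolve_pfn() is a stand-in for
 * the hva=>pfn lookup and does not exist.
 */
static bool gpc_invalidate_retry(struct gfn_to_pfn_cache *gpc,
				 unsigned long mmu_seq)
{
	struct kvm *kvm = gpc->kvm;

	/*
	 * Retry if an invalidation that overlaps this cache's userspace
	 * address is still in flight...
	 */
	if (kvm->mmu_invalidate_in_progress &&
	    gpc->uhva >= kvm->mmu_invalidate_range_start &&
	    gpc->uhva < kvm->mmu_invalidate_range_end)
		return true;

	/* ...or if any invalidation completed after the hva was resolved. */
	return kvm->mmu_invalidate_seq != mmu_seq;
}

static int hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
{
	unsigned long mmu_seq;
	kvm_pfn_t new_pfn;

	do {
		mmu_seq = gpc->kvm->mmu_invalidate_seq;
		smp_rmb();

		/* Drop the lock for the (possibly sleeping) pfn lookup. */
		write_unlock_irq(&gpc->lock);
		new_pfn = gpc_resolve_pfn(gpc->uhva);
		write_lock_irq(&gpc->lock);
	} while (gpc_invalidate_retry(gpc, mmu_seq));

	gpc->pfn = new_pfn;
	gpc->valid = true;
	return 0;
}

The point being that with the range check, a GPC whose hva isn't covered by
the in-flight invalidation doesn't have to wait out the storm at all.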