Re: [PATCH v5 4/9] mm: Add test_clear_young_fast_only MMU notifier

On Fri, Jun 28, 2024 at 7:38 PM James Houghton <jthoughton@xxxxxxxxxx> wrote:
>
> On Mon, Jun 17, 2024 at 11:37 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> > On Mon, Jun 17, 2024, James Houghton wrote:
> > > On Fri, Jun 14, 2024 at 4:17 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > > Ooh!  Actually, after fiddling a bit to see how feasible fast-aging in the shadow
> > > > MMU would be, I'm pretty sure we can go straight there for nested TDP.  Or rather,
> > > > I suspect/hope we can get close enough for an initial merge, which would allow
> > > > aging_is_fast to be a property of the mmu_notifier, i.e. would simplify things
> > > > because KVM wouldn't need to communicate MMU_NOTIFY_WAS_FAST for each notification.
> > > >
> > > > Walking KVM's rmaps requires mmu_lock because adding/removing rmap entries is done
> > > > in such a way that a lockless walk would be painfully complex.  But if there is
> > > > exactly _one_ rmap entry for a gfn, then slot->arch.rmap[...] points directly at
> > > > that one SPTE.  And with nested TDP, unless L1 is doing something uncommon, e.g.
> > > > mapping the same page into multiple L2s, the overwhelming majority of rmaps
> > > > have only one entry.  That's not the case for legacy shadow paging because kernels
> > > > almost always map a pfn using multiple virtual addresses, e.g. Linux's direct map
> > > > along with any userspace mappings.
>
> Hi Sean, sorry for taking so long to get back to you.
>
> So just to make sure I have this right: if L1 is using TDP, the gfns
> in L0 will usually only be mapped by a single spte. If L1 is not using
> TDP, then all bets are off. Is that true?
>
> If that is true, given that we don't really have control over whether
> or not L1 decides to use TDP, the lockless shadow MMU walk will work,
> but, if L1 is not using TDP, it will often return false negatives
> (says "old" for an actually-young gfn). So then I don't really
> understand conditioning the lockless shadow MMU walk on us (L0) using
> the TDP MMU[1]. We care about L1, right?

Ok, I think I understand now. If L1 is using shadow paging, L2
accesses memory the same way L1 does, so L0 uses the TDP MMU for this
case (if tdp_mmu_enabled). If L1 is using TDP, then L0 must use the
shadow MMU, so that's the interesting case.
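
If I have that right, a fast-only aging notifier at L0 has to consult
both MMUs, since a given gfn's accessed state can live in either one
depending on what L1 is doing. Roughly (just a sketch to check my
understanding; shadow_mmu_age_gfn_range_lockless() is hypothetical,
the TDP MMU call is the existing one):

	bool young = false;

	/* L1 on shadow paging: L2 accesses age through L0's TDP MMU. */
	if (tdp_mmu_enabled)
		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);

	/* L1 on TDP: L0 shadows L1's tables, so check the rmaps too. */
	if (kvm_memslots_have_rmaps(kvm))
		young |= shadow_mmu_age_gfn_range_lockless(kvm, range);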

> (Maybe you're saying that, when the TDP MMU is enabled, the only cases
> where the shadow MMU is used are cases where gfns are practically
> always mapped by a single shadow PTE. This isn't how I understood your
> mail, but this is what your hack-a-patch[1] makes me think.)

So it appears that this interpretation is actually what you meant.

>
> [1] https://lore.kernel.org/linux-mm/ZmzPoW7K5GIitQ8B@xxxxxxxxxx/
>
> >
> > ...
> >
> > > Hmm, interesting. I need to spend a little bit more time digesting this.
> > >
> > > Would you like to see this included in v6? (It'd be nice to avoid the
> > > WAS_FAST stuff....) Should we leave it for a later series? I haven't
> > > formed my own opinion yet.
> >
> > I would say it depends on the viability and complexity of my idea.  E.g. if it
> > pans out more or less like my rough sketch, then it's probably worth taking on
> > the extra code+complexity in KVM to avoid the whole WAS_FAST goo.
> >
> > Note, if we do go this route, the implementation would need to be tweaked to
> > handle the difference in behavior between aging and last-minute checks for eviction,
> > which I obviously didn't understand when I threw together that hack-a-patch.
> >
> > I need to think more about how best to handle that though, e.g. skipping GFNs with
> > multiple mappings is probably the worst possible behavior, as we'd risk evicting
> > hot pages.  But falling back to taking mmu_lock for write isn't all that desirable
> > either.
>
> I think falling back to the write lock is more desirable than evicting
> a young page.
>
> I've attached what I think could work, a diff on top of this series.
> It builds at least. It uses rcu_read_lock/unlock() for
> walk_shadow_page_lockless_begin/end(NULL), and it puts a
> synchronize_rcu() in kvm_mmu_commit_zap_page().
>
> It doesn't get rid of the WAS_FAST things because it doesn't do
> exactly what [1] does. It basically makes three calls now: lockless
> TDP MMU, lockless shadow MMU, locked shadow MMU. It only calls the
> locked shadow MMU bits if the lockless bits say !young (instead of
> being conditioned on tdp_mmu_enabled). My choice is definitely
> questionable for the clear path.
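
Concretely, the test path I'm describing is shaped roughly like this
(names are illustrative; the attached diff differs in the details):

	static bool kvm_fast_test_age_gfn(struct kvm *kvm,
					  struct kvm_gfn_range *range)
	{
		bool young = false;

		/* 1. Lockless TDP MMU walk (already protected by RCU). */
		if (tdp_mmu_enabled)
			young = kvm_tdp_mmu_age_gfn_range(kvm, range);

		/*
		 * 2. Lockless shadow MMU walk, made safe by the
		 * synchronize_rcu() added to kvm_mmu_commit_zap_page().
		 */
		if (!young)
			young = shadow_mmu_age_gfn_range_lockless(kvm, range);

		/*
		 * 3. Take mmu_lock for write only when both lockless walks
		 * say !young, so a hot page isn't reported old just because
		 * it couldn't be walked locklessly.
		 */
		if (!young) {
			write_lock(&kvm->mmu_lock);
			young = kvm_handle_gfn_range(kvm, range,
						     kvm_test_age_rmap);
			write_unlock(&kvm->mmu_lock);
		}

		return young;
	}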

I still don't think we should get rid of the WAS_FAST stuff.

The assumption that the L1 VM will almost never share pages between L2
VMs is questionable. The real question becomes: do we care to have
accurate age information for this case? I think so.

It's not completely trivial to get the lockless walking of the shadow
MMU rmaps correct either (please see the patch I attached here[1]).
And the WAS_FAST functionality isn't even that complex to begin with.
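
For reference, the easy part is the single-entry fast path, which is
roughly (a minimal sketch assuming the existing rmap encoding, where a
rmap_head value with the low bit clear points directly at one SPTE;
the caller holds rcu_read_lock(), and the hard part is everything this
sketch punts on, i.e. multi-entry rmaps and racing zaps):

	static bool test_age_rmap_fast(struct kvm_rmap_head *rmap_head)
	{
		unsigned long val = READ_ONCE(rmap_head->val);

		if (!val)
			return false;	/* no mappings */

		if (val & 1)
			return false;	/* pte_list_desc: caller must fall back */

		/* Low bit clear: val points directly at the one SPTE. */
		return is_accessed_spte(mmu_spte_get_lockless((u64 *)val));
	}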

Thanks for your patience.

[1]: https://lore.kernel.org/linux-mm/CADrL8HW=kCLoWBwoiSOCd8WHFvBdWaguZ2ureo4eFy9D67+owg@xxxxxxxxxxxxxx/




