Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range

On Tue, Jan 31, 2023 at 04:24:15PM -0800, James Houghton wrote:
> On Mon, Jan 30, 2023 at 1:14 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
> >
> > On Mon, Jan 30, 2023 at 10:38:41AM -0800, James Houghton wrote:
> > > On Mon, Jan 30, 2023 at 9:29 AM Peter Xu <peterx@xxxxxxxxxx> wrote:
> > > >
> > > > On Fri, Jan 27, 2023 at 01:02:02PM -0800, James Houghton wrote:
> > > > > On Thu, Jan 26, 2023 at 12:31 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > James,
> > > > > >
> > > > > > On Thu, Jan 26, 2023 at 08:58:51AM -0800, James Houghton wrote:
> > > > > > > It turns out that the THP-like scheme significantly slows down
> > > > > > > MADV_COLLAPSE: decrementing the mapcounts for the 4K subpages becomes
> > > > > > > the vast majority of the time spent in MADV_COLLAPSE when collapsing
> > > > > > > 1G mappings. It is doing 262k atomic decrements, so this makes sense.
> > > > > > >
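(For reference, the arithmetic: with 4K base pages, a 1G page has
1G / 4K = 262144 subpages, hence the ~262k atomic decrements.)
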
> > > > > > > This is only really a problem because this is done between
> > > > > > > mmu_notifier_invalidate_range_start() and
> > > > > > > mmu_notifier_invalidate_range_end(), so KVM won't allow vCPUs to
> > > > > > > access any of the 1G page while we're doing this (and it can take like
> > > > > > > ~1 second for each 1G, at least on the x86 server I was testing on).
> > > > > >
> > > > > > Did you try to measure the time, or was it a quick observation from perf?
> > > > >
> > > > > I put some ktime_get()s in.
> > > > >
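I assume the usual pattern, something like the sketch below, where the
hugetlb_collapse() call and its arguments are only illustrative:

        ktime_t t0 = ktime_get();

        hugetlb_collapse(mm, start, end);
        pr_info("collapse took %lld ns\n",
                ktime_to_ns(ktime_sub(ktime_get(), t0)));
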
> > > > > >
> > > > > > IIRC I used to measure some atomic ops; it was not as drastic as I thought.
> > > > > > But maybe it depends on many things.
> > > > > >
> > > > > > I'm curious how the 1sec is provisioned between the procedures.  E.g., I
> > > > > > would expect mmu_notifier_invalidate_range_start() to also take some
> > > > > > time, as it should walk the small-page-mapped EPT pgtables.
> > > > >
> > > > > Somehow this doesn't take all that long (only like 10-30ms when
> > > > > collapsing from 4K -> 1G) compared to hugetlb_collapse().
> > > >
> > > > Did you populate the EPT pgtable as much when measuring this?
> > > >
> > > > IIUC this number should depend mostly on how many pages are shadowed
> > > > into the kvm pgtables.  If the EPT table is mostly empty it should be
> > > > super fast, but OTOH it can be much slower when it's populated, because
> > > > the tdp mmu needs to handle the pgtable leaves one by one.
> > > >
> > > > E.g. it should be fully populated if you have a program busy dirtying most
> > > > of the guest pages during test migration.
> > >
> > > That's what I was doing. I was running a workload in the guest that
> > > just writes 8 bytes to a page and jumps ahead a few pages on all
> > > vCPUs, touching most of its memory.
> > >
> > > But there is more to understand; I'll collect more results. I'm not
> > > sure why the EPT can be unmapped/collapsed so quickly.
> >
> > Maybe something smart done by the hypervisor?
> 
> Doing a little bit more digging, it looks like the
> invalidate_range_start notifier clears the sptes, and then later on
> (on the next EPT violation), the page tables are freed. I still need
> to look at how they end up being so much faster, but I thought that
> was interesting.
> 
> >
> > >
> > > >
> > > > Write op should be the worst case here, since it'll require the atomic
> > > > op to be applied; see kvm_tdp_mmu_write_spte().
> > > >
> > > > >
> > > > > >
> > > > > > Since we'll still keep the intermediate levels around, one other thing to
> > > > > > remedy this from the application POV is to further shrink the size of
> > > > > > COLLAPSE, so for a very large page we can potentially start by building
> > > > > > the 2M layers.  But then collapse will need to be run in at least two
> > > > > > rounds.
> > > > >
> > > > > That's exactly what I thought to do. :) I realized, too, that this is
> > > > > actually how userspace *should* collapse things to avoid holding up
> > > > > vCPUs too long. I think this is a good reason to keep intermediate
> > > > > page sizes.
> > > > >
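For example, if each MADV_COLLAPSE round were only to build the next level
up, userspace could bound the per-call vCPU blackout with something like
the below (hypothetical semantics; the real interface may differ):

        /* round 1: build the 2M layer */
        madvise(addr, len, MADV_COLLAPSE);
        /* round 2: collapse 2M -> 1G */
        madvise(addr, len, MADV_COLLAPSE);
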
> > > > > When collapsing 4K -> 1G, the mapcount scheme doesn't actually make a
> > > > > huge difference: the THP-like scheme is about 30% slower overall.
> > > > >
> > > > > When collapsing 4K -> 2M -> 1G, the mapcount scheme makes a HUGE
> > > > > difference. For the THP-like scheme, collapsing 4K -> 2M requires
> > > > > decrementing and then re-incrementing subpage->_mapcount, and then
> > > > > from 2M -> 1G, we have to decrement all 262k subpages->_mapcount. For
> > > > > the head-only scheme, for each 2M in the 4K -> 2M collapse, we
> > > > > decrement the compound_mapcount 512 times (once per PTE), then
> > > > > increment it once. And then for 2M -> 1G, for each 1G, we decrement
> > > > > mapcount again by 512 (once per PMD), incrementing it once.
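
(Worked out: with 4K base pages there are 2M / 4K = 512 PTEs per 2M and
1G / 2M = 512 PMDs per 1G, hence the 512s above; the full 4K -> 1G case
is 512 * 512 = 262144 ~= 262k decrements.)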
> > > >
> > > > Did you have quantified numbers (with your ktime tweak) to compare these?
> > > > If we want to go the other route, I think these will be materials to
> > > > justify any other approach on mapcount handling.
> > >
> > > Ok, I can do that. Give me a couple days to collect more results and
> > > organize them in a helpful way.
> > >
> > > (If it's helpful at all, here are some results I collected last week:
> > > [2]. Please ignore it if it's not helpful.)
> >
> > It's helpful already at least to me, thanks.  Yes the change is drastic.
> 
> That data only contains THP-like mapcount performance, with no numbers
> for the head-only scheme. But the head-only scheme makes the 2M -> 1G
> case very good ("56" comes down to about the same as everything else,
> instead of being ~100-500x bigger).

Oops, I think I misread those.  Yeah please keep sharing information if you
come up with any.

> 
> >
> > >
> > > >
> > > > >
> > > > > The mapcount decrements are about on par with how long it takes to do
> > > > > other things, like updating page tables. The main problem is, with the
> > > > > THP-like scheme (implemented like this [1]), there isn't a way to
> > > > > avoid the 262k decrements when collapsing 1G. So if we want
> > > > > MADV_COLLAPSE to be fast and we want a THP-like page_mapcount() API,
> > > > > then I think something more clever needs to be implemented.
> > > > >
> > > > > [1]: https://github.com/48ca/linux/blob/hgmv2-jan24/mm/hugetlb.c#L127-L178
> > > >
> > > > I believe the whole goal of HGM is to face the same challenge we'd have
> > > > if we allowed 1G THPs to exist and be splittable for anon.
> > > >
> > > > I don't remember whether we already discussed the below - maybe we did?  Anyway...
> > > >
> > > > Another way to avoid the thp mapcount, without breaking smaps and similar
> > > > callers of page_mapcount() on small pages, is to increase the hpage
> > > > mapcount only when the hstate pXd entry (for 1G it's the PUD) is
> > > > populated (no matter whether as a leaf or a non-leaf), and to decrease it
> > > > when the pXd entry is removed (for a leaf, that's the same as today; for
> > > > HGM, it's when freeing the pgtable of the PUD entry).
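
To illustrate, a minimal sketch of that idea - the names here are made up
rather than the real hugetlb API, and compound_mapcount_ptr() stands in
for however the hpage mapcount ends up being accessed:

        /* One mapcount per populated hstate-level entry, leaf or not. */
        static void hgm_install_pud(struct page *hpage, pud_t *pud, pud_t entry)
        {
                if (pud_none(*pud))
                        atomic_inc(compound_mapcount_ptr(hpage));
                set_pud(pud, entry);
        }

        static void hgm_remove_pud(struct page *hpage, pud_t *pud)
        {
                pud_clear(pud);
                /* Covers both the leaf zap and the HGM pgtable-free paths. */
                atomic_dec(compound_mapcount_ptr(hpage));
        }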
> > >
> > > Right, and this is doable. Also it seems like this is pretty close to
> > > the direction Matthew Wilcox wants to go with THPs.
> >
> > I may not be familiar with it; do you mean this one?
> >
> > https://lore.kernel.org/all/Y9Afwds%2FJl39UjEp@xxxxxxxxxxxxxxxxxxxx/
> 
> Yep that's it.
> 
> >
> > For hugetlb I think it should be easier to maintain than for any-sized
> > folios, because there's the pgtable non-leaf entry to track rmap
> > information, and the folio size is static at the hpage size.
> >
> > It'll be different for folios, where a mapped chunk can be a random-sized
> > run of pages, so it needs to be managed by batching the ptes at
> > install/zap time.
> 
> Agreed. It's probably easier for HugeTLB because they're always
> "naturally aligned" and yeah they can't change sizes.
> 
> >
> > >
> > > Something I noticed though, from the implementation of
> > > folio_referenced()/folio_referenced_one(), is that folio_mapcount()
> > > ought to report the total number of PTEs that are pointing to the page
> > > (or the number of times page_vma_mapped_walk returns true). FWIW,
> > > folio_referenced() is never called for hugetlb folios.
> >
> > FWIU folio_mapcount is the thing it needs for now to do the rmap walks -
> > it'll walk every leaf page being mapped, big or small, so IIUC that number
> > should match what it expects to see later, more or less.
> 
> I don't fully understand what you mean here.

I meant that the rmap_walk paired with folio_referenced_one() will walk all
the leaves for the folio, big or small.  I think that will match the number
returned from folio_mapcount().
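
Roughly, the relevant loop in folio_referenced_one() has this shape
(simplified from mm/rmap.c):

        while (page_vma_mapped_walk(&pvmw)) {
                /*
                 * One hit per mapped leaf (PTE or PMD) of the folio; the
                 * real code only bumps 'referenced' when the access bit
                 * was set.
                 */
                referenced++;
        }

so every mapped leaf is visited once, which is also what folio_mapcount()
counts.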

> 
> >
> > But I agree the mapcount/referenced value itself is debatable to me, just
> > like what you raised in the other thread on page migration.  Meanwhile, I
> > am not certain whether the mapcount is accurate either, because AFAICT
> > the mapcount can be modified if e.g. a new page mapping is established
> > before the page lock is taken later in folio_referenced().
> >
> > It's just that I don't see any severe issue due to any of the above, as
> > long as that information is only used as a hint for next steps, e.g.,
> > which page to swap out.
> 
> I also don't see a big problem with folio_referenced() (and you're
> right that folio_mapcount() can be stale by the time it takes the
> folio lock). It still seems like folio_mapcount() should return the
> total number of PTEs that map the page though. Are you saying that
> breaking this would be ok?

I didn't quite follow - isn't that already doing so?

folio_mapcount() is total_compound_mapcount() here; IIUC it is an
accumulated count of all the PTEs or PMDs being mapped, whether they map
the whole folio or only part of it.
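
Roughly, the accumulation has this shape (simplified; the real
total_compound_mapcount() also short-circuits when no subpage is
PTE-mapped):

        int mapcount = compound_mapcount(head);        /* PMD/PUD mappings */
        int i;

        for (i = 0; i < nr_subpages; i++)              /* PTE mappings */
                mapcount += atomic_read(&head[i]._mapcount) + 1;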

-- 
Peter Xu




