Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range

On Wed, Feb 1, 2023 at 1:51 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
>
> On Wed, Feb 01, 2023 at 01:32:21PM -0800, James Houghton wrote:
> > On Wed, Feb 1, 2023 at 8:22 AM Peter Xu <peterx@xxxxxxxxxx> wrote:
> > >
> > > On Wed, Feb 01, 2023 at 07:45:17AM -0800, James Houghton wrote:
> > > > On Tue, Jan 31, 2023 at 5:24 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Tue, Jan 31, 2023 at 04:24:15PM -0800, James Houghton wrote:
> > > > > > On Mon, Jan 30, 2023 at 1:14 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On Mon, Jan 30, 2023 at 10:38:41AM -0800, James Houghton wrote:
> > > > > > > > On Mon, Jan 30, 2023 at 9:29 AM Peter Xu <peterx@xxxxxxxxxx> wrote:
> > > > > > > > >
> > > > > > > > > On Fri, Jan 27, 2023 at 01:02:02PM -0800, James Houghton wrote:
> > > > [snip]
> > > > > > > > > Another way to not use the thp mapcount, nor break smaps and similar calls
> > > > > > > > > to page_mapcount() on small pages, is to increase the hpage mapcount only
> > > > > > > > > when the hstate pXd entry (in the case of 1G it's the PUD) is populated (no
> > > > > > > > > matter whether as a leaf or a non-leaf), and to decrease the mapcount when
> > > > > > > > > the pXd entry is removed (for a leaf, it's the same as now; for HGM, it's
> > > > > > > > > when freeing the pgtable of the PUD entry).
> > > > > > > >
> > > > > > > > Right, and this is doable. Also it seems like this is pretty close to
> > > > > > > > the direction Matthew Wilcox wants to go with THPs.
> > > > > > >
> > > > > > > I may not be familiar with it; do you mean this one?
> > > > > > >
> > > > > > > https://lore.kernel.org/all/Y9Afwds%2FJl39UjEp@xxxxxxxxxxxxxxxxxxxx/
> > > > > >
> > > > > > Yep that's it.
> > > > > >
> > > > > > >
> > > > > > > For hugetlb I think it should be easier to maintain than for any-sized
> > > > > > > folios, because there's the pgtable non-leaf entry to track rmap
> > > > > > > information and the folio size is fixed at the hpage size.
> > > > > > >
> > > > > > > It'll be different for folios, where it can be a chunk of randomly sized
> > > > > > > pages, so it needs to be managed by batching the ptes on install/zap.
> > > > > >
> > > > > > Agreed. It's probably easier for HugeTLB because they're always
> > > > > > "naturally aligned" and yeah they can't change sizes.
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > Something I noticed though, from the implementation of
> > > > > > > > folio_referenced()/folio_referenced_one(), is that folio_mapcount()
> > > > > > > > ought to report the total number of PTEs that are pointing to the page
> > > > > > > > (or the number of times page_vma_mapped_walk returns true). FWIW,
> > > > > > > > folio_referenced() is never called for hugetlb folios.
> > > > > > >
> > > > > > > FWIU folio_mapcount is the thing it needs for now to do the rmap walks -
> > > > > > > it'll walk every leaf page being mapped, big or small, so IIUC that number
> > > > > > > should match what it expects to see later, more or less.
> > > > > >
> > > > > > I don't fully understand what you mean here.
> > > > >
> > > > > I meant the rmap_walk pairing with folio_referenced_one() will walk all the
> > > > > leaves for the folio, big or small.  I think that will match the number
> > > > > returned from folio_mapcount().
> > > >
> > > > See below.
> > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > But I agree the mapcount/referenced value itself is debatable to me, just
> > > > > > > like what you raised in the other thread on page migration.  Meanwhile, I
> > > > > > > am not certain whether the mapcount is accurate either, because AFAICT the
> > > > > > > mapcount can be modified if e.g. a new page mapping is established before
> > > > > > > taking the page lock later in folio_referenced().
> > > > > > >
> > > > > > > It's just that I don't see any severe issue due to any of the above
> > > > > > > either, as long as that information is only used as a hint for next
> > > > > > > steps, e.g., which page to swap out.
> > > > > >
> > > > > > I also don't see a big problem with folio_referenced() (and you're
> > > > > > right that folio_mapcount() can be stale by the time it takes the
> > > > > > folio lock). It still seems like folio_mapcount() should return the
> > > > > > total number of PTEs that map the page though. Are you saying that
> > > > > > breaking this would be ok?
> > > > >
> > > > > I didn't quite follow - isn't that already doing so?
> > > > >
> > > > > folio_mapcount() is total_compound_mapcount() here; IIUC it is an
> > > > > accumulated count of all the PTEs or PMDs that map the folio, whether they
> > > > > map all of it or only part of it.
> > > >
> > > > We've talked about 3 ways of handling mapcount:
> > > >
> > > > 1. The RFC v2 way, which is head-only, and we increment the compound
> > > > mapcount for each PT mapping we have. So a PTE-mapped 2M page,
> > > > compound_mapcount=512, subpage->_mapcount=0 (ignoring the -1 bias).
> > > > 2. The THP-like way. If we are fully mapping the hugetlb page with the
> > > > hstate-level PTE, we increment the compound mapcount, otherwise we
> > > > increment subpage->_mapcount.
> > > > 3. The RFC v1 way (the way you have suggested above), which is
> > > > head-only, and we increment the compound mapcount if the hstate-level
> > > > PTE is made present.
> > >
> > > Oh, that's where it comes from!  It has taken quite some months to go
> > > through all of these; I can hardly remember the details.
> > >
> > > >
> > > > With #1 and #2, there is no concern with folio_mapcount(). But with
> > > > #3, folio_mapcount() for a PTE-mapped 2M page mapped in a single VMA
> > > > would yield 1 instead of 512 (right?). That's what I mean.
> > > >
> > > > #1 has problems wrt smaps and migration (though there were other
> > > > problems with those anyway that Mike has fixed), and #2 makes
> > > > MADV_COLLAPSE slow to the point of being unusable for some
> > > > applications.
> > >
> > > Ah, so you're talking about after HGM is applied, while I was only
> > > talking about THPs.
> > >
> > > If we apply the logic here with idea 3), the worst case is that we'll need
> > > special handling of HGM hugetlb in folio_referenced_one(), so the default
> > > page_vma_mapped_walk() may not apply anymore - the resource is always
> > > hstate sized, so counting small ptes does not help either - we can just
> > > walk up to the hstate entry and do referenced++ if it's not none, at the
> > > entrance of folio_referenced_one().
> > >
> > > But I'm not sure whether that'll be necessary at all, as I'm not sure
> > > whether that path can be triggered in any form (from the top it should
> > > always be shrink_page_list()).  In that sense maybe we can also consider
> > > adding a WARN_ON_ONCE() in folio_referenced() for when a hugetlb page gets
> > > passed in?  Meanwhile, add a TODO comment explaining that the current walk
> > > won't easily work for HGM, so when it becomes applicable to hugetlb we'll
> > > need to rework it?
> > >
> > > I confess that's not pretty, though.  But that'll leave 3) with no major
> > > defect function-wise.
> >
> > Another potential idea would be to add something like page_vmacount().
> > For non-HugeTLB pages, page_vmacount() == page_mapcount(). Then for
> > HugeTLB pages, we could keep a separate count (in one of the tail
> > pages, I guess). And then in the places that matter (so smaps,
> > migration, and maybe CoW and hwpoison), potentially change their calls
> > to page_vmacount() instead of page_mapcount().
> >
> > Then to implement page_vmacount(), we do the RFC v1 mapcount approach
> > (but like.... correctly this time). And then for page_mapcount(), we
> > do the RFC v2 mapcount approach (head-only, once per PTE).
> >
> > Then we fix folio_referenced() without needing to special-case it for
> > HugeTLB. :) Or we could just special-case it. *shrug*
> >
> > Does that sound reasonable? We still have the problem where a series
> > of partial unmaps could leave page_vmacount() incremented, but I
> > don't think that's a big problem.
>
> I'm afraid someone will stop you from introducing yet another definition of
> mapcount while others are trying to remove it. :)
>
> Or, can we just drop folio_referenced_arg.mapcount?  We need to keep:
>
>         if (!pra.mapcount)
>                 return 0;
>
> Replacing it with folio_mapcount() there is definitely something
> worthwhile, but what about the rest?
>
> If it can be dropped, afaict it'll naturally work with HGM again.
>
> IIUC that's an optimization where we want to stop the rmap walk once we've
> found all the pages; however, (1) IIUC it's not required for the code to
> function, and (2) it's not guaranteed to work solidly anyway.. As we've
> discussed before: right after it reads the mapcount (but before taking the
> page lock), the mapcount can get decreased by 1, and then it'll still need
> to loop over all the vmas just to find that there's one "mysterious"
> mapcount lost.
>
> Personally I really have no idea how much that optimization can help.

Ok, yeah, I think pra.mapcount can be removed too. (And we replace
!pra.mapcount with !folio_mapcount().)
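
Concretely, I'm thinking of something like the below (untested, written from
memory of mm/rmap.c, so the exact fields/lines may be slightly off):

        /* In folio_referenced(): no per-walk countdown to seed anymore. */
        struct folio_referenced_arg pra = {
                .memcg = memcg,
        };
        ...
        /* was: if (!pra.mapcount) return 0; */
        if (!folio_mapcount(folio))
                return 0;

and folio_referenced_one() would lose its pra->mapcount-- and the
"!pra->mapcount => break the rmap walk" early return, so the walk simply
visits every VMA that maps the folio.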

I don't see any other existing users of folio_mapcount() and
total_mapcount() that are problematic. We do need to make sure to keep
refcount and mapcount in sync though; it can be done.
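
To spell out the accounting rule I have in mind (pure pseudocode, not what a
patch would literally look like: "hstate_ptep" stands for the hstate-level
entry, and bump_compound_mapcount() is a made-up placeholder for however the
compound mapcount ends up being maintained):

        /*
         * Map side: only the none -> present transition of the hstate-level
         * entry takes the compound mapcount (and its paired refcount), no
         * matter whether the new entry is a leaf or a non-leaf pointing at
         * HGM page tables.
         */
        if (huge_pte_none(huge_ptep_get(hstate_ptep)))
                bump_compound_mapcount(folio);

        /*
         * Unmap side: drop the compound mapcount (and the paired refcount)
         * only when the hstate-level entry itself goes away: either the
         * leaf is cleared (same as today), or the HGM page tables hanging
         * off that entry are freed.
         */

Because both counts are tied to the same hstate-level transition rather than
to individual low-granularity PTEs, they stay in sync by construction.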

So I'll compare this "RFC v1" way with the THP-like way and get you a
performance comparison.


- James



