Re: Mapcount of subpages

On Thu, 23 Sep 2021, Kirill A. Shutemov wrote:
> On Thu, Sep 23, 2021 at 12:40:14PM +0100, Matthew Wilcox wrote:
> > On Thu, Sep 23, 2021 at 01:15:16AM -0400, Kent Overstreet wrote:
> > > On Thu, Sep 23, 2021 at 04:23:12AM +0100, Matthew Wilcox wrote:
> > > > (compiling that list reminds me that we'll need to sort out mapcount
> > > > on subpages when it comes time to do this.  ask me if you don't know
> > > > what i'm talking about here.)
> > > 
> > > I am curious why we would ever need a mapcount for just part of a page, tell me
> > > more.
> > 
> > I would say Kirill is the expert here.  My understanding:
> > 
> > We have three different approaches to allocating 2MB pages today;
> > anon THP, shmem THP and hugetlbfs.  Hugetlbfs can only be mapped on a
> > 2MB boundary, so it has no special handling of mapcount [1].  Anon THP
> > always starts out as being mapped exclusively on a 2MB boundary, but
> > then it can be split by, eg, munmap().  If it is, then the mapcount in
> > the head page is distributed to the subpages.
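[A toy sketch (not kernel code; all names here are illustrative, not the kernel's) of the distribution step described above: when the THP is split, every former PMD mapping becomes 512 PTE mappings, so each subpage inherits the compound count.]

```c
/* Toy model of distributing a head page's PMD mapcount to the
 * subpages when an anon THP is split into 4kB pages.
 * Illustrative only -- not the kernel's data structures. */
#define SUBPAGES 512  /* 2MB / 4kB */

struct toy_thp {
    int compound_mapcount;          /* number of PMD (2MB) mappings */
    int subpage_mapcount[SUBPAGES]; /* per-4kB-page PTE mappings */
};

/* On split, each former PMD mapping turns into one PTE mapping of
 * every subpage, so each subpage inherits the compound count. */
static void toy_split(struct toy_thp *p)
{
    for (int i = 0; i < SUBPAGES; i++)
        p->subpage_mapcount[i] += p->compound_mapcount;
    p->compound_mapcount = 0;
}
```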
> 
> One more complication for anon THP is that it can be shared across fork()
> and one process may split it while others still have it mapped with a PMD.
> 
> > Shmem THP is the tricky one.  You might have a 2MB page in the page cache,
> > but then have processes which only ever map part of it.  Or you might
> > have some processes mapping it with a 2MB entry and others mapping part
> > or all of it with 4kB entries.  And then someone truncates the file to
> > midway through this page; we split it, and now we need to figure out what
> > the mapcount should be on each of the subpages.  We handle this by using
> > ->mapcount on each subpage to record how many non-2MB mappings there are
> > of that specific page and using ->compound_mapcount to record how many 2MB
> > mappings there are of the entire 2MB page.  Then, when we split, we just
> > need to distribute the compound_mapcount to each page to make it correct.
> > We also have the PageDoubleMap flag to tell us whether anybody has this
> > 2MB page mapped with 4kB entries, so we can skip all the summing of 4kB
> > mapcounts if nobody has done that.
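[The fast path that flag buys could be sketched like this (again a toy model, with illustrative names; the real semantics of PageDoubleMap are subtler than a single boolean):]

```c
/* Toy sketch of using a DoubleMap-style flag to skip summing the
 * 4kB mapcounts when nobody maps the 2MB page with small entries.
 * Illustrative only -- not the kernel's data structures. */
#include <stdbool.h>

#define SUBPAGES 512

struct toy_thp {
    bool double_map;                /* any 4kB mappings at all? */
    int compound_mapcount;          /* 2MB (PMD) mappings */
    int subpage_mapcount[SUBPAGES]; /* 4kB (PTE) mappings */
};

static int toy_total_mapcount(const struct toy_thp *p)
{
    int total = p->compound_mapcount;
    if (!p->double_map)
        return total;  /* fast path: no 4kB mappings to sum */
    for (int i = 0; i < SUBPAGES; i++)
        total += p->subpage_mapcount[i];
    return total;
}
```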
> 
> Possible future complication comes from 1G THP effort. With 1G THP we
> would have whole hierarchy of mapcounts: 1 PUD mapcount, 512 PMD
> mapcounts and 262144 PTE mapcounts. (That's one of the reasons I don't
> think 1G THP is viable.)
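[The counts quoted above fall straight out of the page sizes; a quick check:]

```c
/* The 1G THP mapcount hierarchy sizes follow from the page sizes. */
#define PUD_SIZE     (1UL << 30)  /* 1GB */
#define PMD_SIZE     (1UL << 21)  /* 2MB */
#define PAGE_SIZE_4K (1UL << 12)  /* 4kB */

/* PMD mapcounts per PUD page: 1GB / 2MB = 512     */
/* PTE mapcounts per PUD page: 1GB / 4kB = 262144  */
```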
> 
> Note that there are places where exact mapcount accounting is critical:
> try_to_unmap() may finish prematurely if we underestimate mapcount and
> overestimating mapcount may lead to superfluous CoW that breaks GUP.

It is critical to know for sure when a page has been completely unmapped:
but that does not require the subpages' ptes to be accounted in each
subpage's _mapcount field - they just need to be counted in the compound
page's total_mapcount.
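[A minimal sketch of that point (toy code, illustrative names): deciding "completely unmapped" needs only a single total count on the compound page, regardless of whether each mapping was a pte or a pmd.]

```c
/* Toy sketch: "fully unmapped" needs only one total count per
 * compound page, not per-subpage counts. Illustrative names only. */
struct toy_page {
    int total_mapcount;  /* every pte or pmd mapping counts once */
};

static void toy_map(struct toy_page *p)   { p->total_mapcount++; }
static void toy_unmap(struct toy_page *p) { p->total_mapcount--; }

static int toy_fully_unmapped(const struct toy_page *p)
{
    return p->total_mapcount == 0;
}
```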

I may be wrong, I never had time to prove it one way or the other: but
I have a growing suspicion that the *only* reason for maintaining tail
_mapcounts separately is to maintain the NR_FILE_MAPPED count exactly
(in the face of pmd mappings overlapping pte mappings).

NR_FILE_MAPPED being used for /proc/meminfo's "Mapped:" and a couple
of other such stats files, and for a reclaim heuristic in mm/vmscan.c.

If we allowed ourselves more slack in NR_FILE_MAPPED accounting (either
count each pte as if it mapped the whole THP, or don't count a THP's ptes
at all - you opted for the latter in the "Mlocked:" accounting),
then I suspect subpage _mapcount could be abandoned.
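[The two slack policies could be sketched as follows (a toy model; the names and the policy enum are mine, purely illustrative): what NR_FILE_MAPPED would change by when a new pte mapping of a THP is added.]

```c
/* Toy sketch of the two slack accounting policies for NR_FILE_MAPPED
 * when a THP gains a new pte (4kB) mapping:
 *  - COUNT_WHOLE: count each pte as if it mapped the whole THP
 *  - COUNT_NONE:  don't count a THP's ptes at all
 * Illustrative only. */
enum toy_policy { COUNT_WHOLE, COUNT_NONE };

#define THP_PAGES 512  /* 4kB pages per 2MB THP */

static long toy_nr_file_mapped_delta(enum toy_policy pol)
{
    switch (pol) {
    case COUNT_WHOLE: return THP_PAGES;  /* pte counted as whole THP */
    case COUNT_NONE:  return 0;          /* THP ptes not counted */
    }
    return 0;
}
```

Either policy makes NR_FILE_MAPPED inexact for partially mapped THPs, but removes the need to track per-subpage _mapcount just to keep the statistic precise.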

But you have a different point in mind when you refer to superfluous
CoW and GUP: I don't know the score there (and I think we are still in
that halfway zone, since pte CoW was changed to depend on page_count,
but THP CoW still depends on mapcount).

Hugh



