Re: Mapcount of subpages

On Thu, Sep 23, 2021 at 12:40:14PM +0100, Matthew Wilcox wrote:
> On Thu, Sep 23, 2021 at 01:15:16AM -0400, Kent Overstreet wrote:
> > On Thu, Sep 23, 2021 at 04:23:12AM +0100, Matthew Wilcox wrote:
> > > (compiling that list reminds me that we'll need to sort out mapcount
> > > on subpages when it comes time to do this.  ask me if you don't know
> > > what i'm talking about here.)
> > 
> > I am curious why we would ever need a mapcount for just part of a page; tell me
> > more.
> 
> I would say Kirill is the expert here.  My understanding:
> 
> We have three different approaches to allocating 2MB pages today;
> anon THP, shmem THP and hugetlbfs.  Hugetlbfs can only be mapped on a
> 2MB boundary, so it has no special handling of mapcount [1].  Anon THP
> always starts out as being mapped exclusively on a 2MB boundary, but
> then it can be split by, eg, munmap().  If it is, then the mapcount in
> the head page is distributed to the subpages.

One more complication for anon THP is that it can be shared across fork()
and one process may split it while others still have it mapped with a PMD.
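
Roughly, here is what splitting one of those mappings does to the
counters. This is only a simplified userspace model of the scheme
described above, not the kernel code; the struct and function names
are made up for illustration.

/*
 * Simplified userspace model of the accounting above, not the kernel
 * code; struct and function names are invented for illustration.
 */
#include <stdio.h>

#define HPAGE_NR_PAGES 512                      /* 2MB / 4kB */

struct thp_model {
        int compound_mapcount;                  /* PMD (2MB) mappings */
        int mapcount[HPAGE_NR_PAGES];           /* PTE (4kB) mappings */
};

/* One process turns its PMD mapping into 512 PTE mappings. */
static void split_pmd_mapping(struct thp_model *p)
{
        p->compound_mapcount--;
        for (int i = 0; i < HPAGE_NR_PAGES; i++)
                p->mapcount[i]++;
}

int main(void)
{
        /* Parent and child share the page after fork(), both via PMD. */
        struct thp_model p = { .compound_mapcount = 2 };

        /* The child splits its mapping, the parent keeps its PMD. */
        split_pmd_mapping(&p);

        printf("compound_mapcount=%d subpage[0].mapcount=%d\n",
               p.compound_mapcount, p.mapcount[0]);     /* prints 1 and 1 */
        return 0;
}

The process that didn't split still has its PMD mapping accounted in
compound_mapcount; every subpage now also carries the split mapping in
its own ->mapcount.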

> Shmem THP is the tricky one.  You might have a 2MB page in the page cache,
> but then have processes which only ever map part of it.  Or you might
> have some processes mapping it with a 2MB entry and others mapping part
> or all of it with 4kB entries.  And then someone truncates the file to
> midway through this page; we split it, and now we need to figure out what
> the mapcount should be on each of the subpages.  We handle this by using
> ->mapcount on each subpage to record how many non-2MB mappings there are
> of that specific page and using ->compound_mapcount to record how many 2MB
> mappings there are of the entire 2MB page.  Then, when we split, we just
> need to distribute the compound_mapcount to each page to make it correct.
> We also have the PageDoubleMap flag to tell us whether anybody has this
> 2MB page mapped with 4kB entries, so we can skip all the summing of 4kB
> mapcounts if nobody has done that.
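
A similarly rough model of the split-time accounting described above
(again a userspace sketch, not the kernel code; the names are
invented):

/*
 * Simplified userspace model, not the kernel code; names are invented.
 * It only mirrors the description above: compound_mapcount counts 2MB
 * mappings, per-subpage mapcount counts 4kB mappings, and a
 * DoubleMap-style flag says whether any 4kB mappings exist at all.
 */
#include <stdbool.h>
#include <stdio.h>

#define HPAGE_NR_PAGES 512

struct shmem_thp_model {
        int compound_mapcount;                  /* 2MB (PMD) mappings */
        int mapcount[HPAGE_NR_PAGES];           /* 4kB (PTE) mappings */
        bool double_map;                        /* any 4kB mappings? */
};

/* What a subpage's mapcount must become when the 2MB page is split. */
static int subpage_mapcount_after_split(const struct shmem_thp_model *p, int i)
{
        int count = p->compound_mapcount;       /* distribute to every subpage */

        if (p->double_map)                      /* skip the 4kB counts otherwise */
                count += p->mapcount[i];
        return count;
}

int main(void)
{
        struct shmem_thp_model p = {
                .compound_mapcount = 2,         /* two processes map all 2MB */
                .double_map = true,
        };

        p.mapcount[3] = 1;      /* a third process maps only subpage 3 */

        printf("subpage 0: %d, subpage 3: %d\n",
               subpage_mapcount_after_split(&p, 0),
               subpage_mapcount_after_split(&p, 3));    /* 2 and 3 */
        return 0;
}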

A possible future complication comes from the 1G THP effort. With 1G THP we
would have a whole hierarchy of mapcounts: 1 PUD mapcount, 512 PMD
mapcounts and 262144 PTE mapcounts. (That's one of the reasons I don't
think 1G THP is viable.)
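
The numbers follow directly from the page sizes; a trivial check, just
for illustration:

/* Back-of-the-envelope arithmetic only; no such kernel structure exists. */
#include <stdio.h>

int main(void)
{
        unsigned long pud = 1UL << 30;  /* 1GB */
        unsigned long pmd = 1UL << 21;  /* 2MB */
        unsigned long pte = 1UL << 12;  /* 4kB */

        printf("1 PUD mapcount, %lu PMD mapcounts, %lu PTE mapcounts\n",
               pud / pmd, pud / pte);   /* 512 and 262144 */
        return 0;
}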

Note that there are places where exact mapcount accounting is critical:
try_to_unmap() may finish prematurely if we underestimate the mapcount, and
overestimating the mapcount may lead to superfluous CoW that breaks GUP.

> 
> [1] Mike is looking to change this, but I'm not sure where he is with it.
> 

-- 
 Kirill A. Shutemov


