Re: Mapcount of subpages

On 23 Sep 2021, at 17:54, Yang Shi wrote:

> On Thu, Sep 23, 2021 at 2:10 PM Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>>
>> On Thu, 23 Sep 2021, Kirill A. Shutemov wrote:
>>> On Thu, Sep 23, 2021 at 12:40:14PM +0100, Matthew Wilcox wrote:
>>>> On Thu, Sep 23, 2021 at 01:15:16AM -0400, Kent Overstreet wrote:
>>>>> On Thu, Sep 23, 2021 at 04:23:12AM +0100, Matthew Wilcox wrote:
>>>>>> (compiling that list reminds me that we'll need to sort out mapcount
>>>>>> on subpages when it comes time to do this.  ask me if you don't know
>>>>>> what i'm talking about here.)
>>>>>
>>>>> I am curious why we would ever need a mapcount for just part of a page;
>>>>> tell me more.
>>>>
>>>> I would say Kirill is the expert here.  My understanding:
>>>>
>>>> We have three different approaches to allocating 2MB pages today;
>>>> anon THP, shmem THP and hugetlbfs.  Hugetlbfs can only be mapped on a
>>>> 2MB boundary, so it has no special handling of mapcount [1].  Anon THP
>>>> always starts out as being mapped exclusively on a 2MB boundary, but
>>>> then it can be split by, eg, munmap().  If it is, then the mapcount in
>>>> the head page is distributed to the subpages.
>>>
>>> One more complication for anon THP is that it can be shared across fork(),
>>> and one process may split it while others still have it mapped with a PMD.
>>>
>>>> Shmem THP is the tricky one.  You might have a 2MB page in the page cache,
>>>> but then have processes which only ever map part of it.  Or you might
>>>> have some processes mapping it with a 2MB entry and others mapping part
>>>> or all of it with 4kB entries.  And then someone truncates the file to
>>>> midway through this page; we split it, and now we need to figure out what
>>>> the mapcount should be on each of the subpages.  We handle this by using
>>>> ->_mapcount on each subpage to record how many non-2MB mappings there are
>>>> of that specific page and using ->compound_mapcount to record how many 2MB
>>>> mappings there are of the entire 2MB page.  Then, when we split, we just
>>>> need to distribute the compound_mapcount to each page to make it correct.
>>>> We also have the PageDoubleMap flag to tell us whether anybody has this
>>>> 2MB page mapped with 4kB entries, so we can skip all the summing of 4kB
>>>> mapcounts if nobody has done that.
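
A toy userspace model of the scheme Matthew describes may make this
concrete. The names below are made up for illustration; the kernel's
actual code (total_mapcount() and friends in mm/huge_memory.c) also
biases _mapcount by -1 and treats anon and file THPs differently:

#include <assert.h>
#include <stdbool.h>

#define NR_SUBPAGES 512  /* a 2MB THP covers 512 4kB subpages */

/* Toy model of one compound page's mapcount state. */
struct toy_thp {
        int compound_mapcount;              /* 2MB (PMD) mappings      */
        int subpage_mapcount[NR_SUBPAGES];  /* 4kB (PTE) mappings each */
        bool double_map;                    /* models PageDoubleMap    */
};

/* Map the whole page with a PMD entry. */
static void toy_map_pmd(struct toy_thp *p)   { p->compound_mapcount++; }

/* Map one subpage with a PTE entry; this sets the DoubleMap flag. */
static void toy_map_pte(struct toy_thp *p, int i)
{
        p->subpage_mapcount[i]++;
        p->double_map = true;
}

/* Total number of mappings; the DoubleMap flag lets us skip the sum. */
static int toy_total_mapcount(const struct toy_thp *p)
{
        int total = p->compound_mapcount;

        if (!p->double_map)
                return total;
        for (int i = 0; i < NR_SUBPAGES; i++)
                total += p->subpage_mapcount[i];
        return total;
}

/*
 * Split: every PMD mapping turns into one PTE mapping of each subpage,
 * so compound_mapcount is distributed into every subpage's count.
 */
static void toy_split(struct toy_thp *p)
{
        for (int i = 0; i < NR_SUBPAGES; i++)
                p->subpage_mapcount[i] += p->compound_mapcount;
        p->compound_mapcount = 0;
        p->double_map = true;
}

int main(void)
{
        struct toy_thp p = { 0 };

        toy_map_pmd(&p);     /* process A maps the whole 2MB page */
        toy_map_pte(&p, 0);  /* process B maps only subpage 0     */
        assert(toy_total_mapcount(&p) == 2);

        toy_split(&p);       /* e.g. a truncate forces a split    */
        assert(p.subpage_mapcount[0] == 2);  /* A's share + B's pte */
        assert(p.subpage_mapcount[1] == 1);  /* A's share only      */
        return 0;
}
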
>>>
>>> Possible future complication comes from 1G THP effort. With 1G THP we
>>> would have whole hierarchy of mapcounts: 1 PUD mapcount, 512 PMD
>>> mapcounts and 262144 PTE mapcounts. (That's one of the reasons I don't
>>> think 1G THP is viable.)

Maybe we do not need to support the triple map at all. Instead, only allow
PUD and PMD mappings, and split a 1GB THP into 2MB THPs before any PTE
mapping is established. How likely is it that a 1GB THP would be mapped by
both PUD and PTE entries at the same time? I would guess it is very rare.
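
At fault time the policy could look something like this. All helper names
below are hypothetical, invented only to illustrate the idea (vm_fault_t,
struct vm_fault and VM_FAULT_FALLBACK are real; the rest is not):

/* Hypothetical sketch of the "no triple map" policy; the helpers
 * vma_can_map_pud(), map_pud_thp() and split_pud_thp_to_pmd_thps()
 * are invented for illustration and do not exist in the kernel. */
static vm_fault_t fault_on_1g_thp(struct vm_fault *vmf, struct page *page)
{
        /* Aligned, sized and permitted: install the 1GB mapping. */
        if (vma_can_map_pud(vmf->vma, vmf->address))
                return map_pud_thp(vmf, page);

        /*
         * Otherwise split the 1GB page into 512 2MB THPs *before* any
         * PTE mapping exists, so only PMD and PTE mapcounts remain and
         * the PUD level of the hierarchy never needs to be tracked.
         */
        split_pud_thp_to_pmd_thps(page);
        return VM_FAULT_FALLBACK;       /* retry via the PMD/PTE paths */
}
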

>>>
>>> Note that there are places where exact mapcount accounting is critical:
>>> try_to_unmap() may finish prematurely if we underestimate the mapcount,
>>> and overestimating it may lead to superfluous CoW that breaks GUP.
>>
>> It is critical to know for sure when a page has been completely unmapped:
>> but that does not need ptes of subpages to be accounted in the _mapcount
>> field of subpages - they just need to be counted in the compound page's
>> total_mapcount.
>>
>> I may be wrong, I never had time to prove it one way or the other: but
>> I have a growing suspicion that the *only* reason for maintaining tail
>> _mapcounts separately, is to maintain the NR_FILE_MAPPED count exactly
>> (in the face of pmd mappings overlapping pte mappings).
>>
>> NR_FILE_MAPPED being used for /proc/meminfo's "Mapped:" and a couple
>> of other such stats files, and for a reclaim heuristic in mm/vmscan.c.
>>
>> Allow ourselves more slack in NR_FILE_MAPPED accounting (either count
>> each pte as if it mapped the whole THP, or don't count a THP's ptes
>> at all - you opted for the latter in the "Mlocked:" accounting),
>> and I suspect subpage _mapcount could be abandoned.
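
To make the two slack options concrete, a toy calculation (hypothetical,
not the kernel's NR_FILE_MAPPED accounting code):

/* Delta contributed to NR_FILE_MAPPED when a process maps nr_ptes
 * 4kB PTEs of a 512-subpage file THP, under Hugh's two options. */
enum slack { PTE_COUNTS_AS_WHOLE_THP, THP_PTES_NOT_COUNTED };

static long nr_file_mapped_delta(enum slack policy, long nr_ptes)
{
        if (policy == PTE_COUNTS_AS_WHOLE_THP)
                return nr_ptes * 512;   /* overestimates "Mapped:"       */
        return 0;                       /* underestimates, like Mlocked: */
}
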
>
> AFAIK, partial THP unmap may need the _mapcount information of every
> subpage; otherwise the deferred split can't know which subpages could
> be freed.

Could we just scan the page tables mapping a THP during the deferred split
process instead? Deferred split is already a slow path, so maybe it can
afford the extra work.
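
Roughly like this (hypothetical pseudocode; count_pte_mappings() stands in
for an rmap walk over every VMA mapping the THP, and free_unmapped_subpage()
is likewise invented, while split_huge_page() and HPAGE_PMD_NR are real):

/* Sketch of recomputing per-subpage mapcounts at deferred-split time
 * instead of maintaining subpage ->_mapcount eagerly. */
static void deferred_split_scan_sketch(struct page *head)
{
        int mapcount[HPAGE_PMD_NR];     /* sketch only; far too big for
                                           a real kernel stack          */

        /*
         * One rmap walk over every VMA mapping the THP: for each PTE
         * found, bump the count of the subpage it maps.  This replaces
         * the eagerly maintained subpage ->_mapcount.
         */
        count_pte_mappings(head, mapcount);     /* hypothetical */

        if (split_huge_page(head))              /* real entry point */
                return;                         /* split failed */

        /* Subpages nobody maps any more can be freed right away. */
        for (int i = 0; i < HPAGE_PMD_NR; i++)
                if (mapcount[i] == 0)
                        free_unmapped_subpage(&head[i]);  /* hypothetical */
}
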

>
>>
>> But you have a different point in mind when you refer to superfluous
>> CoW and GUP: I don't know the score there (and I think we are still in
>> that halfway zone, since pte CoW was changed to depend on page_count,
>> but THP CoW still depends on mapcount).
>>
>> Hugh
>>


--
Best Regards,
Yan, Zi


