Re: folio mapcount

On 16.12.21 16:54, Matthew Wilcox wrote:
> On Thu, Dec 16, 2021 at 11:19:17AM -0400, Jason Gunthorpe wrote:
>> On Thu, Dec 16, 2021 at 01:56:57PM +0000, Matthew Wilcox wrote:
>>> p = mmap(x, 2MB, PROT_READ|PROT_WRITE, ...): THP allocated
>>> mprotect(p, 4KB, PROT_READ): THP split.
>>>
>>> And in that case, I would say the THP now has mapcount of 2 because
>>> there are 2 VMAs mapping it.
>>
>> At least today mapcount is only loosely connected to VMAs. It really
>> counts the number of PUD/PTEs that point at parts of the memory. 
> 
> Careful.  Currently, you need to distinguish between total_mapcount(),
> page_trans_huge_mapcount() and page_mapcount().  Take a look at
> __page_mapcount() to be sure you really know what the mapcount "really"
> counts today ...

Yes, and the documentation above page_trans_huge_mapcount() tries to
bring some clarity. Tries :)
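
To make the distinction a bit more concrete, here is a toy userspace
model of what the three helpers are supposed to report for an anon
THP. It is my simplification, not the kernel code: it ignores
PageDoubleMap() and the other accounting subtleties hiding in
__page_mapcount() and total_mapcount().

#include <stdio.h>

#define NR_SUBPAGES 512 /* 2MB THP, 4KB base pages */

struct toy_thp {
        int pmd_mapcount;                  /* PMD (compound) mappings */
        int pte_mapcount[NR_SUBPAGES];     /* PTE mappings per subpage */
};

/* Roughly page_mapcount(): mappings of one subpage, via its own PTEs
 * and via any PMD mapping of the compound page. */
static int toy_page_mapcount(const struct toy_thp *t, int sub)
{
        return t->pte_mapcount[sub] + t->pmd_mapcount;
}

/* Roughly total_mapcount(): all PTE mappings plus the PMD mappings. */
static int toy_total_mapcount(const struct toy_thp *t)
{
        int i, total = t->pmd_mapcount;

        for (i = 0; i < NR_SUBPAGES; i++)
                total += t->pte_mapcount[i];
        return total;
}

/* Roughly page_trans_huge_mapcount(): the highest mapcount any single
 * subpage has. */
static int toy_page_trans_huge_mapcount(const struct toy_thp *t)
{
        int i, max = 0;

        for (i = 0; i < NR_SUBPAGES; i++)
                if (toy_page_mapcount(t, i) > max)
                        max = toy_page_mapcount(t, i);
        return max;
}

int main(void)
{
        /* The mprotect() example after the PMD split: one process,
         * the whole 2MB mapped via PTEs, nothing shared. */
        struct toy_thp t = { .pmd_mapcount = 0 };
        int i;

        for (i = 0; i < NR_SUBPAGES; i++)
                t.pte_mapcount[i] = 1;

        printf("total=%d trans_huge=%d\n",
               toy_total_mapcount(&t), toy_page_trans_huge_mapcount(&t));
        return 0;
}

That prints total=512 trans_huge=1: no individual subpage is shared,
which is exactly the case where COW reuse is fine.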

> 
> (also I'm going to assume that when you said PUD you really mean
> PMD throughout)
> 
>> If, under the PTL, you observe a mapcount of 1 then you know that the
>> PUD/PTE you have under lock is the ONLY PUD/PTE that refers to this
>> page and will remain so while the lock is held.
>>
>> So, today the above ends up with a mapcount of 1 and when we take a
>> COW fault we can re-use the page.
>>
>> If the above ends up with a mapcount of 2 then COW will copy not
>> re-use, which will cause unexpected data corruption in all those
>> annoying side cases.
> 
> As I understood David's presentation yesterday, we actually have
> data corruption issues in all the annoying side cases with THPs
> in current upstream, so that's no worse than we have now.  But let's
> see if we can avoid them.

Right, because the refcount is even more shaky ...

> 
> It feels like what we want from a COW perspective is a count of the
> number of MMs mapping a page, not the number of VMAs, PTEs or PMDs
> mapping the page.  Right?
> 
> So here's a corner case ...
> 
> p = mmap(x, 2MB, PROT_READ|PROT_WRITE, ...): THP allocated
> mremap(p + 128K, 128K, 128K, MREMAP_MAYMOVE | MREMAP_FIXED, p + 2MB):
> PMD split
> 

(busy preparing and testing related patches, so I only skimmed over the
discussion)

Whenever we have to go through an internal munmap (mmap, munmap,
mremap), we split the PMD and map the remainder using PTEs. We also
place the huge page on the deferred split queue, where the actual
compound page will eventually get split ("THP split").
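
You can watch that happen from userspace (a hedged sketch, assuming
THP is enabled and the madvise()/alignment dance really gives us a
THP): punch a 4KB hole into the THP and AnonHugePages in
/proc/self/smaps_rollup drops, although the compound page still
exists and is merely queued for deferred splitting.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define SZ_2M (2UL << 20)
#define SZ_4K (4UL << 10)

static void print_anon_huge(const char *when)
{
        char line[256];
        FILE *f = fopen("/proc/self/smaps_rollup", "r");

        while (f && fgets(line, sizeof(line), f))
                if (!strncmp(line, "AnonHugePages:", 14))
                        printf("%s: %s", when, line);
        if (f)
                fclose(f);
}

int main(void)
{
        /* Over-allocate so we can carve out a 2MB-aligned region. */
        char *raw = mmap(NULL, 2 * SZ_2M, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        char *p;

        if (raw == MAP_FAILED)
                return 1;
        p = (char *)(((uintptr_t)raw + SZ_2M - 1) & ~(SZ_2M - 1));

        madvise(p, SZ_2M, MADV_HUGEPAGE);
        memset(p, 0xaa, SZ_2M);           /* fault in the THP */
        print_anon_huge("before");

        munmap(p + SZ_4K, SZ_4K);         /* internal munmap: PMD split */
        print_anon_huge("after");
        return 0;
}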

In move_page_tables() we perform split_huge_pmd() as well, which I
think would trigger in your example.
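
For completeness, a minimal userspace sketch of that mremap example.
The MADV_HUGEPAGE call, the alignment trick and the over-allocation
are my assumptions about the test setup, not part of the original
example:

#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define SZ_2M   (2UL << 20)
#define SZ_128K (128UL << 10)

int main(void)
{
        /* Over-allocate so we can carve out a 2MB-aligned source and
         * still have room for the MREMAP_FIXED destination at p + 2MB. */
        char *raw = mmap(NULL, 3 * SZ_2M, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        char *p;
        void *moved;

        if (raw == MAP_FAILED)
                return 1;
        p = (char *)(((uintptr_t)raw + SZ_2M - 1) & ~(SZ_2M - 1));

        /* Encourage the fault path to back the range with a THP. */
        madvise(p, SZ_2M, MADV_HUGEPAGE);
        memset(p, 0xaa, SZ_2M);

        /* Move 128KB out of the middle of the THP: move_page_tables()
         * calls split_huge_pmd() on the source PMD; the compound page
         * itself is not split here. */
        moved = mremap(p + SZ_128K, SZ_128K, SZ_128K,
                       MREMAP_MAYMOVE | MREMAP_FIXED, p + SZ_2M);
        return moved == MAP_FAILED;
}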


For anon pages, IIRC, there is no way to get more than one mapping of
a single base page within one process. "Sharing" as in "shared
anonymous pages" only applies between processes, not between VMAs.

An anon base page can only ever be mapped once into a given process,
but it can be mapped shared into multiple processes.

"The function returns the highest mapcount any one of the subpages
has. If the return value is one, even if different processes are
mapping different subpages of the transparent hugepage, they can all
reuse it, because each process is reusing a different subpage."

So if you see that at least one subpage is mapped by more than one
process and the page is a shared anon page, you have to split the PMD
and trigger unsharing for exactly that subpage.
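
The cross-process case is easiest to see with fork(); a rough sketch
(again, MADV_HUGEPAGE and the alignment trick are only test-setup
assumptions):

#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define SZ_2M (2UL << 20)

int main(void)
{
        char *raw = mmap(NULL, 2 * SZ_2M, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        char *p;

        if (raw == MAP_FAILED)
                return 1;
        p = (char *)(((uintptr_t)raw + SZ_2M - 1) & ~(SZ_2M - 1));

        madvise(p, SZ_2M, MADV_HUGEPAGE);
        memset(p, 0xaa, SZ_2M);           /* fault in the THP */

        if (fork() == 0) {
                /* Child: every subpage is now mapped by two processes.
                 * Writing one subpage splits the child's PMD and COWs
                 * exactly that one 4KB page; the other 511 subpages
                 * stay shared with the parent. */
                p[0] = 0xbb;
                _exit(0);
        }
        wait(NULL);
        return 0;
}

After the child's write, only that single subpage has been unshared,
which is the "unsharing for exactly that subpage" case from above.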

But it is indeed confusing ...

> Should mapcount be 1 or 2 at this point?  Does the answer change if it's

The PMD was split. Each subpage is mapped exactly once.

page_trans_huge_mapcount() is supposed to return 1 because there is no
sharing.

(Famous last words)

-- 
Thanks,

David / dhildenb




