Re: Folio mapcount

On 7 Feb 2023, at 11:51, Matthew Wilcox wrote:

> On Tue, Feb 07, 2023 at 11:23:31AM -0500, Zi Yan wrote:
>> On 24 Jan 2023, at 13:13, Matthew Wilcox wrote:
>>
>>> Once we get to the part of the folio journey where we have
>>> one-pointer-per-page, we can't afford to maintain per-page state.
>>> Currently we maintain a per-page mapcount, and that will have to go.
>>> We can maintain extra state for a multi-page folio, but it has to be a
>>> constant amount of extra state no matter how many pages are in the folio.
>>>
>>> My proposal is that we maintain a single mapcount per folio, and its
>>> definition is the number of (vma, page table) tuples which have a
>>> reference to any pages in this folio.
>>
>> How about having two, full_folio_mapcount and partial_folio_mapcount?
>> If partial_folio_mapcount is 0, we can have a fast path without doing
>> anything at page level.
>
> A fast path for what?  I don't understand your vision; can you spell it
> out for me?  My current proposal is here:

A fast path for code that only needs to handle folios as a whole. When
some subpages of a folio are mapped individually, we may have to traverse
the subpages, which is slow. Separating the two cases could be cleaner
and would keep the whole-folio handling fast.
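
Roughly what I have in mind is sketched below. None of these fields or
helpers exist today; the names simply follow the two counters above.

/*
 * Hypothetical sketch: suppose struct folio carried two counters,
 * full_folio_mapcount for mappings of the folio as a whole and
 * partial_folio_mapcount for mappings of only some of its pages.
 */
static inline bool folio_has_partial_mappings(struct folio *folio)
{
        return atomic_read(&folio->partial_folio_mapcount) != 0;
}

static int folio_mapcount_sketch(struct folio *folio)
{
        /* Fast path: nothing maps the folio partially, so no per-page
         * state needs to be consulted at all. */
        if (!folio_has_partial_mappings(folio))
                return atomic_read(&folio->full_folio_mapcount);

        /* Slow path: fall back to per-subpage accounting
         * (folio_total_mapcount_slow() is hypothetical too). */
        return folio_total_mapcount_slow(folio);
}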

For your proposal, answering "how many VMAs have one-or-more pages of
this folio mapped" should be the responsibility of rmap; we could add a
counter to rmap instead. It seems that you are conflating page table
mappings with virtual address space (VMA) mappings.
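
To illustrate what I mean by letting rmap answer that question, an
untested sketch (the caller would hold the folio lock; it also counts
VMAs whose PTEs are already gone, so it only shows where the
information lives, not a final implementation):

#include <linux/rmap.h>

static bool count_one_vma(struct folio *folio, struct vm_area_struct *vma,
                          unsigned long address, void *arg)
{
        int *nr_vmas = arg;

        (*nr_vmas)++;
        return true;    /* keep walking the remaining VMAs */
}

/* "How many VMAs have one-or-more pages of this folio mapped?" */
static int folio_nr_mapping_vmas(struct folio *folio)
{
        int nr_vmas = 0;
        struct rmap_walk_control rwc = {
                .rmap_one       = count_one_vma,
                .arg            = &nr_vmas,
        };

        rmap_walk(folio, &rwc);
        return nr_vmas;
}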

>
> https://lore.kernel.org/linux-mm/Y+FkV4fBxHlp6FTH@xxxxxxxxxxxxxxxxxxxx/
>
> The three questions we need to be able to answer (in my current
> understanding) are laid out here:
>
> https://lore.kernel.org/linux-mm/Y+HblAN5bM1uYD2f@xxxxxxxxxxxxxxxxxxxx/

I think we need to clarify what "map" means in your questions: mapped
by page tables, or mapped by VMAs? When a page is mapped into a VMA, it
can be mapped by one or more page table entries, but not the other way
around, right? Or have shared page tables been merged by now, so that
more than one VMA can use a single page table entry to map a folio?

>
> Of course, the vision also needs to include how we account in
> folio_add_(anon|file|new_anon)_rmap() and folio_remove_rmap().
>
>>> I think there's a good performance win and simplification to be had
>>> here, so I think it's worth doing for 6.4.
>>>
>>> Examples
>>> --------
>>>
>>> In the simple and common case where every page in a folio is mapped
>>> once by a single vma and single page table, mapcount would be 1 [1].
>>> If the folio is mapped across a page table boundary by a single VMA,
>>> after we take a page fault on it in one page table, it gets a mapcount
>>> of 1.  After taking a page fault on it in the other page table, its
>>> mapcount increases to 2.
>>>
>>> For a PMD-sized THP naturally aligned, mapcount is 1.  Splitting the
>>> PMD into PTEs would not change the mapcount; the folio remains order-9
>>> but it still has a reference from only one page table (a different page
>>> table, but still just one).
>>>
>>> Implementation sketch
>>> ---------------------
>>>
>>> When we take a page fault, we can/should map every page in the folio
>>> that fits in this VMA and this page table.  We do this at present in
>>> filemap_map_pages() by looping over each page in the folio and calling
>>> do_set_pte() on each.  We should have a:
>>>
>>>                 do_set_pte_range(vmf, folio, addr, first_page, n);
>>>
>>> and then change the API to page_add_new_anon_rmap() / page_add_file_rmap()
>>> to pass in (folio, first, n) instead of page.  That gives us one call to
>>> page_add_*_rmap() per (vma, page table) tuple.
>>>
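
To check that I am reading this correctly, the fault side would look
roughly like the below (uncompiled sketch; do_set_pte_range() and the
(folio, first, nr) rmap signature, plus the extra vma argument, are the
names proposed above, not existing API):

/*
 * One rmap call per (vma, page table) tuple, then the PTEs are filled
 * in a loop.
 */
static void do_set_pte_range(struct vm_fault *vmf, struct folio *folio,
                             unsigned long addr, unsigned long first,
                             unsigned int nr)
{
        struct vm_area_struct *vma = vmf->vma;
        struct page *page = folio_page(folio, first);
        pte_t *pte = vmf->pte;
        unsigned int i;

        /* Proposed (folio, first, nr) signature from above. */
        page_add_file_rmap(folio, first, nr, vma);

        for (i = 0; i < nr; i++, page++, pte++, addr += PAGE_SIZE) {
                pte_t entry = mk_pte(page, vma->vm_page_prot);

                if (vmf->flags & FAULT_FLAG_WRITE)
                        entry = maybe_mkwrite(pte_mkdirty(entry), vma);
                set_pte_at(vma->vm_mm, addr, pte, entry);
                update_mmu_cache(vma, addr, pte);
        }
}
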
>>> In try_to_unmap_one(), page_vma_mapped_walk() currently calls us for
>>> each pfn.  We'll want a function like
>>>         page_vma_mapped_walk_skip_to_end_of_ptable()
>>> in order to persuade it to only call us once or twice if the folio
>>> is mapped across a page table boundary.
>>>
>>> Concerns
>>> --------
>>>
>>> We'll have to be careful to always zap all the PTEs for a given (vma,
>>> pt) tuple at the same time, otherwise mapcount will get out of sync
>>> (eg map three pages, unmap two; we shouldn't decrement the mapcount,
>>> but I don't think we can know that).  But does this ever happen?  I think
>>> we always unmap the entire folio, like in try_to_unmap_one().
>>>
>>> I haven't got my head around SetPageAnonExclusive() yet.  I think it can
>>> be a per-folio bit, but handling a folio split across two page tables
>>> may be tricky.
>>>
>>> Notes
>>> -----
>>>
>>> [1] Ignoring the bias by -1 to let us detect transitions that we care
>>> about more efficiently; I'm talking about the value returned from
>>> page_mapcount(), not the value stored in page->_mapcount.
>>
>>
>> --
>> Best Regards,
>> Yan, Zi


--
Best Regards,
Yan, Zi
