Re: [LSF/MM/BPF TOPIC] MM: Mapcount Madness


 



On 29.01.24 14:49, Matthew Wilcox wrote:
On Mon, Jan 29, 2024 at 01:05:04PM +0100, David Hildenbrand wrote:
As PTE-mapped large folios become more relevant (mTHP [1]) and there is the
desire to shrink the metadata allocated for such large folios as well
(memdesc [2]), how we track folio mappings gets more relevant. Over the
years, we used folio mapping information to answer various questions: is
this folio mapped by somebody else? do we have to COW on write fault? how do
we adjust memory statistics? ...

Let's talk about ongoing work in the mapcount area, get a common
understanding of what the users of the different mapcounts are and what the
implications of removing some would be: which questions could we answer
differently, which questions would we not be able to answer precisely
anymore, and what would be the implications of such changes?

For example, can we tolerate some imprecise memory statistics? How
expressive is the PSS when large folios are only partially mapped? Would we
need a transition period and glue changes to a new CONFIG_ option? Do we
really have to support THP and friends on 32bit?

Excellent topics to cover.  I have some of my own questions ...

Are we in danger of overflowing page refcount too easily?  Pincount
isn't an issue here; we're talking about large folios, so pincount gets
its own field.  But with tracking one mapcount per PTE mapping of a
folio, we can easily increment a PMD-sized folio's refcount by 512
per VMA.  Now we only need 2^22 VMAs to hit the 2^31 limit before the
page->refcount protections go into effect and operations start failing.
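
To make the back-of-the-envelope arithmetic concrete, here is a purely
illustrative userspace snippet (not kernel code; 2^31 is roughly where the
refcount saturation protections mentioned above kick in):

#include <stdio.h>

int main(void)
{
	/* One reference per PTE: 512 PTEs per PMD-sized (2 MiB) folio on x86-64. */
	const long long refs_per_pmd_mapping = 512;
	/* The page refcount is a 32-bit counter; protections trigger around 2^31. */
	const long long refcount_limit = 1LL << 31;

	printf("VMAs needed to overflow: %lld (= 2^22)\n",
	       refcount_limit / refs_per_pmd_mapping);
	return 0;
}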

I think we'll definitely want to either detect such overflows early and fail fork/page faults/etc., or, if there are sane use cases (2^22 sounds excessive, but we might be getting larger folios ...), rather move to a 64bit refcount for large folios (or any folio, for simplicity? TBD) in the future.

And then, I think, the question will once again be: how much time are we willing to invest in supporting THP and friends on 32bit, and is it really worth it?


How / do we need to track mapcount for pages mapped to userspace which
are neither file-backed, nor anonymous mappings?  eg drivers pass
vmalloc memory to vmf_insert_page() in their ->mmap handler.

As of today, vm_insert_page() and friends end up calling insert_page_into_pte_locked(), which does:

folio_get(folio);                                    /* take a folio reference */
inc_mm_counter(vma->vm_mm, mm_counter_file(folio));  /* account as file RSS */
folio_add_file_rmap_pte(folio, page, vma);           /* enter file rmap code */

That is, we end up with non-rmappable folios (not pagecache/shmem/anon) in rmap code. That's nonsensical, because rmap does not apply to such pages (rmap walks won't work; there is no rmap). When I stumbled over that recently, my guess was that the current handling only exists to keep the munmap/zap path simple.

IMHO, we shouldn't be calling rmap code on that path (and similarly, when unmapping). If we want to adjust some mapcounts for some reason, we'd better do that explicitly.
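
Just to illustrate the direction (a hypothetical sketch, not a patch; whether
and how to keep the stats is an open question), the insertion side could keep
the reference and the RSS accounting but stay out of rmap code entirely:

static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
		unsigned long addr, struct page *page, pgprot_t prot)
{
	struct folio *folio = page_folio(page);

	if (!pte_none(ptep_get(pte)))
		return -EBUSY;

	/* Keep the folio alive while it is mapped. */
	folio_get(folio);
	/* Keep the RSS accounting for now; whether/how to account is TBD. */
	inc_mm_counter(vma->vm_mm, mm_counter_file(folio));
	/* No folio_add_file_rmap_pte(): these folios have no rmap to maintain. */
	set_pte_at(vma->vm_mm, addr, pte, mk_pte(page, prot));
	return 0;
}

The zap/munmap side would then have to skip the rmap removal for such mappings
as well.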

And that is an excellent topic to discuss.


What do VM_PFNMAP and VM_MIXEDMAP really imply?  The documentation here
is a little sparse.  And that's sad, because I think we expect device
driver writers to use them, and without clear documentation of what
they actually do, they're going to be misused.

Agreed, it's under-documented. In general, VM_PFNMAP means "map whatever you want, as long as you make sure it cannot get freed+reused while it is still mapped". That is, if the memory was allocated, the driver has to hold a reference, but the core won't be messing with any refcount/mapcount/rmap/stats ... it treats the mapping as if "struct page" didn't exist.

VM_MIXEDMAP is the complicated brother that uses "struct page" if it exists.
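
For driver writers, the practical difference could be sketched roughly like
this (illustrative only; drv_phys_addr and drv_page are made-up placeholders
for driver state, and error handling is omitted):

#include <linux/mm.h>

extern unsigned long drv_phys_addr;	/* placeholder: physical address owned by the driver */
extern struct page *drv_page;		/* placeholder: a page the driver allocated */

/* VM_PFNMAP: the driver guarantees the backing memory cannot get freed+reused
 * while it is mapped; the core never touches refcount/mapcount/rmap for it. */
static int drv_mmap_pfnmap(struct file *file, struct vm_area_struct *vma)
{
	return remap_pfn_range(vma, vma->vm_start, drv_phys_addr >> PAGE_SHIFT,
			       vma->vm_end - vma->vm_start, vma->vm_page_prot);
}

/* VM_MIXEDMAP: "struct page" is used where it exists, e.g. when inserting
 * individual pages of vmalloc()ed memory one by one. */
static int drv_mmap_mixedmap(struct file *file, struct vm_area_struct *vma)
{
	return vm_insert_page(vma, vma->vm_start, drv_page);
}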

Another good topic, agreed.

--
Cheers,

David / dhildenb




