On Thu, Jun 08, 2023 at 08:34:10AM +0200, David Hildenbrand wrote:
> On 08.06.23 02:02, David Rientjes wrote:
> > While people have proposed 1GB THP support in the past, it was nacked, in
> > part, because of the suggestion to just use existing 1GB support in
> > hugetlb instead :)
>
> Yes, because I still think that the use for "transparent" (for the user)
> nowadays is very limited and not worth the complexity.
>
> IMHO, what you really want is a pool of large pages that provides
> guarantees (about availability and nodes) and fine control about who gets
> these pages. That's what hugetlb provides.
>
> In contrast to THP, you don't want to allow for
> * Partially mmap, mremap, munmap, mprotect them
> * Partially sharing them / COW'ing them
> * Partially mixing them with other anon pages (MADV_DONTNEED + refault)
> * Excluding them from some features (KSM/swap)
> * (swap them out and eventually split them for that)
>
> Because you don't want to get these pages PTE-mapped by the system *unless*
> there is a real reason (HGM, hwpoison) -- you want guarantees. Once such a
> page is PTE-mapped, you only want to collapse in place.
>
> But you don't want special-HGM, you simply want the core to PTE-map them
> like a (file) THP.
>
> IMHO, getting that realized would be much easier if we wouldn't have to
> care about some of the hugetlb complexity I raised (MAP_PRIVATE, PMD
> sharing), but maybe there is a way ...

I favour a more evolutionary than revolutionary approach. That is, I think
it's acceptable to add new features to hugetlbfs _if_ they're combined with
cleanup work that gets hugetlbfs closer to the main mm. This is why I harp
on things like pagewalk, which currently needs special handling for hugetlb
-- that's pointless; hugetlb mappings should just be treated as large
folios. GUP handles hugetlb separately too, and I'm not sure why.

That's not to be confused with "hugetlb must change to be more like the
regular mm". Sometimes both are bad, stupid and wrong, and need to be
changed.
The MM has never had to handle 1GB pages before and, e.g., handling
mapcount by iterating over each subpage's struct page is not sensible
because that's 16MB of metadata to read just to answer folio_mapcount().