On Tue, Nov 28, 2023 at 06:42:44PM +0100, David Hildenbrand wrote:
> On 28.11.23 18:13, Peter Xu wrote:
> > On Tue, Nov 28, 2023 at 05:39:35PM +0100, David Hildenbrand wrote:
> > > Quoting from the cover letter:
> > >
> > > "We have hugetlb special-casing/checks in the callers in all cases
> > > either way already in place: it doesn't make too much sense to call
> > > generic-looking functions that end up doing hugetlb specific things
> > > from hugetlb special-cases."
> >
> > I'll take this one as an example: I think one goal (of my understanding
> > of the mm community) is to make the generic looking functions keep
> > being generic, dropping any function named as "*hugetlb*" if possible
> > one day within that generic implementation.  I said that in my previous
> > reply.
>
> Yes, and I am one of the people asking for that. However, only where it
> makes sense (e.g., like page table walking, GUP as you said), and only
> when it is actually unified.
>
> I don't think that rmap handling or fault handling will ever be
> completely unified to that extreme, and it might also not be desirable.
> Just like we have separate paths for anon and file in areas where they
> are reasonably different.

Yes, I haven't checked further in that direction; for me that comes after
the pgtable work, if that can first move on smoothly.  It will also depend
on whether the merged pgtable changes turn out good enough that we can move
on to consider small mappings for hugetlb; until then, we need to settle on
a final mapcount solution for hugetlb.

> What doesn't make sense is using patterns like:
>
>   page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
>
> and then, inside page_remove_rmap(), have an initial
> folio_test_hugetlb() check that does something completely different.

IIUC, the "folio_test_hugetlb(folio)" pattern above can become "false" for
hugetlb if we decide to do mapcounts for small hugetlb mappings.  If that
happens, I think something like this patch _may_ need to be reverted again,
more or less.  Or we start to copy some of page_remove_rmap() into the new
hugetlb rmap API.

> So each and everyone calling page_remove_rmap (and knowing that it's
> certainly not a hugetlb folio) has to run through that check.

Note that right after this change is applied, hugetlb already starts to
lose something it had in common with generic compound folios:
page_remove_rmap() had

	VM_BUG_ON_PAGE(compound && !PageHead(page), page);

That sanity check goes away immediately for hugetlb, even though it is
still logically applicable.

> Then, we have functions like page_add_file_rmap() that look like they
> can be used for hugetlb, but hugetlb is smart enough and only calls
> page_dup_file_rmap(), because it doesn't want to touch any file/anon
> counters. And to handle that we would have to add folio_test_hugetlb()
> inside there, which adds the same as above, trying to unify without
> unifying.
>
> Once we're in the area of folio_add_file_rmap_range(), it all stops
> making sense, because there is no way we could possibly partially map a
> folio today. (and if we can in the future, we might still want separate
> handling, because most callers know with which pages they are dealing,
> below)
>
> Last but not least, it's all inconsistent right now with
> hugetlb_add_anon_rmap()/hugetlb_add_new_anon_rmap() being there because
> they differ reasonably well from the "ordinary" counterparts.
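To make the comparison concrete, the current hugetlb-only helper looks
roughly like this (simplified from my reading of mm/rmap.c; a sketch, not
a verbatim copy):

	void hugepage_add_new_anon_rmap(struct folio *folio,
			struct vm_area_struct *vma, unsigned long address)
	{
		VM_BUG_ON_VMA(address < vma->vm_start ||
			      address >= vma->vm_end, vma);
		folio_clear_hugetlb_restore_reserve(folio);
		/* increment count (starts at -1) */
		atomic_set(&folio->_entire_mapcount, 0);
		__folio_set_anon(folio, vma, address, true);
		SetPageAnonExclusive(&folio->page);
	}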
Taking hugepage_add_new_anon_rmap() as an example: IMHO it still shares a
lot with the !hugetlb code, and maybe the two can already be cleaned up
into something common for a large mapping:

void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
		unsigned long address)
{
	int nr;

	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);

	if (folio_test_hugetlb(folio)) {
		folio_clear_hugetlb_restore_reserve(folio);
	} else {
		__folio_set_swapbacked(folio);
		atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
		nr = folio_nr_pages(folio);
		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
	}

	/* increment count (starts at -1) */
	atomic_set(&folio->_entire_mapcount, 0);
	__folio_set_anon(folio, vma, address, true);
	SetPageAnonExclusive(&folio->page);
}

For folio_add_file_rmap_range(): would it work to simply pass the hugetlb
folio range into it (see the sketch at the end of this mail)?  Would that
make the recent work on large folios, from you or anyone else, much more
difficult?

> I don't think going in the other direction and e.g., removing
> hugetlb_add_anon_rmap / hugetlb_add_new_anon_rmap is making a
> unification that is not really reasonable. It will only make things
> slower and the individual functions more complicated.

IIUC, so far the performance difference between the helpers should be
minimal, whichever one is used.

As I mentioned, I sincerely don't know whether this patch is good or not,
as I don't know enough about everything else that is happening around it.
It's just that I'll still think twice about giving hugetlb its own rmap
API, because from a high level it goes the other way round to me.  So I
still want to raise this as a pure question.
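To make that folio_add_file_rmap_range() question concrete, the call I have
in mind would look roughly like the following.  This is hypothetical: the
signature is from the current folio_add_file_rmap_range() prototype as I
read it, and the stats handling inside would presumably still need a
hugetlb check, per your point about not touching file/anon counters:

	/*
	 * Hypothetical: map an entire hugetlb folio as one compound range
	 * through the generic helper, instead of a hugetlb-only API.
	 */
	folio_add_file_rmap_range(folio, folio_page(folio, 0),
				  folio_nr_pages(folio), vma, true);

Thanks,

-- 
Peter Xu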