On Mon, Dec 11, 2023 at 04:56:15PM +0100, David Hildenbrand wrote:
> hugetlb rmap handling differs quite a lot from "ordinary" rmap code.
> For example, hugetlb currently only supports entire mappings, and treats
> any mapping as mapped using a single "logical PTE". Let's move it out
> of the way so we can overhaul our "ordinary" rmap
> implementation/interface.
>
> Let's introduce and use hugetlb_remove_rmap() and remove the hugetlb
> code from page_remove_rmap(). This effectively removes one check on the
> small-folio path as well.
>
> Note: all possible candidates that need care are page_remove_rmap()
> callers that pass compound=true.
>
> Reviewed-by: Yin Fengwei <fengwei.yin@xxxxxxxxx>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>

Reviewed-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>

> +++ b/mm/rmap.c
> @@ -1482,13 +1482,6 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>
>  	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
>
> -	/* Hugetlb pages are not counted in NR_*MAPPED */
> -	if (unlikely(folio_test_hugetlb(folio))) {
> -		/* hugetlb pages are always mapped with pmds */
> -		atomic_dec(&folio->_entire_mapcount);
> -		return;
> -	}

Maybe add VM_BUG_ON_FOLIO(folio_test_hugetlb(folio), folio);
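
i.e. right next to the existing VM_BUG_ON_PAGE(), something along these
lines (untested sketch only; I'm assuming the local folio is still taken
via page_folio(page) as in the current code):

	void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
			bool compound)
	{
		struct folio *folio = page_folio(page);

		VM_BUG_ON_PAGE(compound && !PageHead(page), page);
		/* hugetlb folios are expected to go through hugetlb_remove_rmap() now */
		VM_BUG_ON_FOLIO(folio_test_hugetlb(folio), folio);

		/* ... rest of the function unchanged ... */
	}

That way we'd catch any page_remove_rmap() caller that still hands us a
hugetlb folio after this change, instead of silently corrupting the
NR_*MAPPED counters.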