On Mon, May 20, 2024 at 2:46 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Mon, May 20, 2024 at 12:47:49PM -0700, Vishal Moola (Oracle) wrote:
> > Replaces 4 calls to compound_head() with one. Also converts
> > unmap_hugepage_range() and unmap_ref_private() to take in folios.
>
> This is great!
>
> >  void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >  			    unsigned long start, unsigned long end,
> > -			    struct page *ref_page, zap_flags_t zap_flags)
> > +			    struct folio *ref_folio, zap_flags_t zap_flags)
> >  {
> >  	struct mm_struct *mm = vma->vm_mm;
> >  	unsigned long address;
> >  	pte_t *ptep;
> >  	pte_t pte;
> >  	spinlock_t *ptl;
> > -	struct page *page;
> > +	struct folio *folio;
> >  	struct hstate *h = hstate_vma(vma);
> >  	unsigned long sz = huge_page_size(h);
>
> I would appreciate some further cleanup ...
>
> 	size_t sz = folio_size(folio);
>
> I think there are further cleanups along those lines, eg
> pages_per_huge_page(), hugetlb_mask_last_page(), huge_page_mask().
>

Gotcha, I'll look into those and change them in v2.
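
For reference, a rough sketch of the kind of cleanup I read the suggestion
as pointing at (the wrapper function below is hypothetical and only for
illustration, not part of the patch; folio_size(), folio_nr_pages() and
page_folio() are existing folio helpers as far as I know). One caveat: in
__unmap_hugepage_range() sz is computed before the folio is resolved inside
the pte walk, so the hstate-derived value may still be needed for the walk
itself.

	#include <linux/mm.h>
	#include <linux/hugetlb.h>

	/*
	 * Illustration only: hstate-derived values next to the folio-derived
	 * equivalents that become available once a folio has been resolved
	 * from the pte.
	 */
	static void folio_cleanup_sketch(struct vm_area_struct *vma, pte_t pte)
	{
		struct hstate *h = hstate_vma(vma);
		unsigned long hstate_sz = huge_page_size(h);	/* what sz uses today */
		unsigned long hstate_nr = pages_per_huge_page(h);

		struct folio *folio = page_folio(pte_page(pte));
		size_t sz = folio_size(folio);			/* same value, no hstate needed */
		long nr = folio_nr_pages(folio);		/* likewise */
	}

huge_page_mask() and hugetlb_mask_last_page() still take the hstate, so the
call sites that only need the mask would presumably keep h around.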