On Tue, May 21, 2024 at 2:13 PM Vishal Moola <vishal.moola@xxxxxxxxx> wrote:
>
> On Mon, May 20, 2024 at 2:46 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> >
> > On Mon, May 20, 2024 at 12:47:49PM -0700, Vishal Moola (Oracle) wrote:
> > > Replaces 4 calls to compound_head() with one. Also converts
> > > unmap_hugepage_range() and unmap_ref_private() to take in folios.
> >
> > This is great!
> >
> > > void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > >                             unsigned long start, unsigned long end,
> > > -                           struct page *ref_page, zap_flags_t zap_flags)
> > > +                           struct folio *ref_folio, zap_flags_t zap_flags)
> > > {
> > >         struct mm_struct *mm = vma->vm_mm;
> > >         unsigned long address;
> > >         pte_t *ptep;
> > >         pte_t pte;
> > >         spinlock_t *ptl;
> > > -       struct page *page;
> > > +       struct folio *folio;
> > >         struct hstate *h = hstate_vma(vma);
> > >         unsigned long sz = huge_page_size(h);
> >
> > I would appreciate some further cleanup ...
> >
> >         size_t sz = folio_size(folio);
> >
> > I think there are further cleanups along those lines, eg
> > pages_per_huge_page(), hugetlb_mask_last_page(), huge_page_mask().
> >
>
> Gotcha, I'll look into those and change them in v2.

I took a closer look at your suggestions for cleanups here.

Most callers of unmap_hugepage_range() pass NULL as ref_folio, meaning
we want to unmap all pages in the range. This means a lot of the
preparatory work is likely done without a reference to a folio, so
using folio_size() there is unsafe.

For the paths where we do later hold a reference to a folio, I think we
should continue to use the hstate-derived values: using the folio in
one place and the hstate in another makes things harder to change later
(if we ever want to).
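
To make the NULL-ref_folio hazard concrete, here is a standalone toy
sketch (this is not the kernel code; the struct layouts, field names,
and the 2 MiB size below are all made up for illustration). It models
why deriving the step size from per-VMA state stays valid even when the
caller passes NULL to mean "unmap every page in the range", while a
folio-derived size would dereference NULL:

#include <stddef.h>
#include <stdio.h>

/* Toy stand-ins: "hstate" models per-VMA huge page metadata, which is
 * always valid; "folio" models a page we only sometimes hold. */
struct hstate { size_t huge_page_size; };
struct folio  { size_t size; };

static void unmap_range(const struct hstate *h,
			const struct folio *ref_folio)
{
	/* Safe: derived from the VMA's hstate, which exists even when
	 * ref_folio == NULL (the common "unmap everything" case). */
	size_t sz = h->huge_page_size;

	/* Unsafe alternative: ref_folio->size (a folio_size()-style
	 * derivation) would dereference NULL for most callers, so it
	 * cannot drive the preparatory work. */

	printf("step size: %zu (ref_folio %s)\n",
	       sz, ref_folio ? "present" : "NULL");
}

int main(void)
{
	struct hstate h = { .huge_page_size = 2UL << 20 };	/* 2 MiB */

	unmap_range(&h, NULL);	/* common case: no folio reference */
	return 0;
}

The same shape argues for the second point above: once one site must
use hstate-derived values, keeping every site on them avoids mixing two
sources of truth for the same quantity.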