On 09/06/23 01:48, Matthew Wilcox wrote:
> On Tue, Sep 05, 2023 at 02:44:00PM -0700, Mike Kravetz wrote:
> > @@ -456,6 +457,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> >  	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
> >  	unsigned long vmemmap_reuse;
> >
> > +	VM_WARN_ON_ONCE(!PageHuge(head));
> >  	if (!HPageVmemmapOptimized(head))
> >  		return 0;
> >
> > @@ -550,6 +552,7 @@ void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
> >  	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
> >  	unsigned long vmemmap_reuse;
> >
> > +	VM_WARN_ON_ONCE(!PageHuge(head));
> >  	if (!vmemmap_should_optimize(h, head))
> >  		return;
>
> Someone who's looking for an easy patch or three should convert both
> of these functions to take a folio instead of a page.  All callers
> pass &folio->page.  Obviously do that work on top of Mike's patch set
> to avoid creating more work for him.

I think Muchun already suggested this.  It would make sense as this
series is proposing two new routines taking a list of folios:
- hugetlb_vmemmap_optimize_folios
- hugetlb_vmemmap_restore_folios
--
Mike Kravetz