On 3/20/24 7:02 PM, Johannes Weiner wrote:
> Free page accounting currently happens a bit too high up the call
> stack, where it has to deal with guard pages, compaction capturing,
> block stealing and even page isolation. This is subtle and fragile,
> and makes it difficult to hack on the code.
>
> Now that type violations on the freelists have been fixed, push the
> accounting down to where pages enter and leave the freelist.

Awesome!

> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>

Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>

Just some nits:

> @@ -1314,10 +1349,10 @@ static inline void expand(struct zone *zone, struct page *page,
>  		 * Corresponding page table entries will not be touched,
>  		 * pages will stay not present in virtual address space
>  		 */
> -		if (set_page_guard(zone, &page[size], high, migratetype))
> +		if (set_page_guard(zone, &page[size], high))
>  			continue;
>
> -		add_to_free_list(&page[size], zone, high, migratetype);
> +		add_to_free_list(&page[size], zone, high, migratetype, false);

This puts account_freepages() in the hot loop; what if we instead used
__add_to_free_list(), summed up nr_pages and called account_freepages()
once outside of the loop?

>  		set_buddy_order(&page[size], high);
>  	}
>  }

<snip>

> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index 042937d5abe4..914a71c580d8 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -252,7 +252,8 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
>  		 * Isolating this block already succeeded, so this
>  		 * should not fail on zone boundaries.
>  		 */
> -		WARN_ON_ONCE(!move_freepages_block_isolate(zone, page, migratetype));
> +		WARN_ON_ONCE(!move_freepages_block_isolate(zone, page,
> +						   migratetype));
>  	} else {
>  		set_pageblock_migratetype(page, migratetype);
>  		__putback_isolated_page(page, order, migratetype);

Looks like a drive-by edit of an extra file just to adjust indentation.
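
For the expand() nit above, a rough, untested sketch of what that batching
could look like. It assumes account_freepages() takes (zone, nr_pages,
migratetype) and that __add_to_free_list() mirrors add_to_free_list()
minus the accounting; adjust to whatever the final helpers in this series
actually look like:

static inline void expand(struct zone *zone, struct page *page,
			  int low, int high, int migratetype)
{
	unsigned long size = 1 << high;
	unsigned long nr_added = 0;

	while (high > low) {
		high--;
		size >>= 1;
		VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);

		/* Guard pages are not put on the freelist. */
		if (set_page_guard(zone, &page[size], high))
			continue;

		/* Freelist only; defer the per-migratetype accounting. */
		__add_to_free_list(&page[size], zone, high, migratetype, false);
		set_buddy_order(&page[size], high);
		nr_added += size;
	}

	/* Account all the added pages once, outside the hot loop. */
	account_freepages(zone, nr_added, migratetype);
}

That way the counters are only touched once per split instead of once per
order step.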