On Tue, Mar 23, 2021 at 09:12:15PM +0800, Liu Shixin wrote:
> From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
>
> The start_pfn and end_pfn are already available in move_freepages_block(),
> there is no need to go back and forth between page and pfn in move_freepages
> and move_freepages_block, and pfn_valid_within() should validate pfn first
> before touching the page.

This looks good to me:

Reviewed-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>

>  static int move_freepages(struct zone *zone,
> -			  struct page *start_page, struct page *end_page,
> +			  unsigned long start_pfn, unsigned long end_pfn,
>  			  int migratetype, int *num_movable)
>  {
>  	struct page *page;
> +	unsigned long pfn;
>  	unsigned int order;
>  	int pages_moved = 0;
>
> -	for (page = start_page; page <= end_page;) {
> -		if (!pfn_valid_within(page_to_pfn(page))) {
> -			page++;
> +	for (pfn = start_pfn; pfn <= end_pfn;) {
> +		if (!pfn_valid_within(pfn)) {
> +			pfn++;
>  			continue;
>  		}
>
> +		page = pfn_to_page(pfn);

I wonder if this wouldn't be even better if we did:

	struct page *start_page = pfn_to_page(start_pfn);

	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
		struct page *page = start_page + pfn - start_pfn;

		if (!pfn_valid_within(pfn))
			continue;

> -
> -			page++;
> +			pfn++;
>  			continue;

... then we can drop the increment of pfn here

>  		}
>
> @@ -2458,7 +2459,7 @@ static int move_freepages(struct zone *zone,
>
>  		order = buddy_order(page);
>  		move_to_free_list(page, zone, order, migratetype);
> -		page += 1 << order;
> +		pfn += 1 << order;

... and change this to

		pfn += (1 << order) - 1;

Do you have any numbers to quantify the benefit of this change?
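
For completeness, here is a rough, untested sketch of what the whole loop
might look like with both of the above tweaks folded in. Only the parts
visible in the quoted hunk are taken from the patch; the !PageBuddy branch
and the pages_moved/return bookkeeping are abbreviated from the surrounding
code and are not part of the hunk:

	static int move_freepages(struct zone *zone,
				  unsigned long start_pfn, unsigned long end_pfn,
				  int migratetype, int *num_movable)
	{
		struct page *start_page = pfn_to_page(start_pfn);
		unsigned long pfn;
		unsigned int order;
		int pages_moved = 0;

		/* pfn is advanced once per iteration by the loop itself. */
		for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
			struct page *page = start_page + (pfn - start_pfn);

			if (!pfn_valid_within(pfn))
				continue;

			if (!PageBuddy(page)) {
				/*
				 * num_movable accounting from the original
				 * function goes here unchanged; no explicit
				 * pfn++ is needed before the continue.
				 */
				continue;
			}

			order = buddy_order(page);
			move_to_free_list(page, zone, order, migratetype);
			/*
			 * Skip over the rest of this buddy; -1 because the
			 * loop's pfn++ supplies the final step.
			 */
			pfn += (1 << order) - 1;
			pages_moved += 1 << order;
		}

		return pages_moved;
	}

With page derived as a constant offset from start_page, there is no
pfn_to_page() call left in the loop body at all, and every path through
the loop advances pfn in exactly one place.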