On Thu, Jun 13, 2024 at 10:21:08AM +0200, David Hildenbrand wrote:
>On 12.06.24 04:04, Wei Yang wrote:
>> Functions deferred_[init|free]_pages are only used in
>> deferred_init_maxorder(), which makes sure the range to init/free is
>> within MAX_ORDER_NR_PAGES size.
>>
>> With this knowledge, we can simplify these two functions, since
>>
>>   * only the first pfn could be IS_MAX_ORDER_ALIGNED()
>>
>> Also, since the range passed to deferred_[init|free]_pages is always from
>> memblock.memory, for which we have already allocated memmap to cover,
>> pfn_valid() always returns true. Then we can remove the related check.
>>
>
>I'm surprised that we can completely get rid of the pfn_valid checks (which
>is great!), trusting Mike's review that this is all sane :)
>

I'm surprised too :-)

>> Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
>> CC: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
>> CC: Mike Rapoport (IBM) <rppt@xxxxxxxxxx>
>> CC: David Hildenbrand <david@xxxxxxxxxx>
>> ---
>
>[...]
>
>>  /*
>>   * Initialize struct pages. We minimize pfn page lookups and scheduler checks
>>   * by performing it only once every MAX_ORDER_NR_PAGES.
>>   * Return number of pages initialized.
>>   */
>> -static unsigned long  __init deferred_init_pages(struct zone *zone,
>> -						 unsigned long pfn,
>> -						 unsigned long end_pfn)
>> +static unsigned long __init deferred_init_pages(struct zone *zone,
>> +						unsigned long pfn,
>> +						unsigned long end_pfn)
>
>We nowadays prefer double tabs for indentation for the second+ parameter line
>in MM, like
>
>static unsigned long __init deferred_init_pages(struct zone *zone,
>		unsigned long pfn, unsigned long end_pfn)
>
>Usually results in less churn when renaming functions ... and reduces the
>LOC.
>

Will adjust it.
>
>> {
>>  	int nid = zone_to_nid(zone);
>> -	unsigned long nr_pages = 0;
>> +	unsigned long nr_pages = end_pfn - pfn;
>>  	int zid = zone_idx(zone);
>> -	struct page *page = NULL;
>> +	struct page *page = pfn_to_page(pfn);
>>
>> -	for (; pfn < end_pfn; pfn++) {
>> -		if (!deferred_pfn_valid(pfn)) {
>> -			page = NULL;
>> -			continue;
>> -		} else if (!page || IS_MAX_ORDER_ALIGNED(pfn)) {
>> -			page = pfn_to_page(pfn);
>> -		} else {
>> -			page++;
>> -		}
>> +	for (; pfn < end_pfn; pfn++, page++)
>>  		__init_single_page(page, pfn, zid, nid);
>> -		nr_pages++;
>> -	}
>>  	return nr_pages;
>>  }
>> @@ -2096,7 +2049,7 @@ deferred_init_maxorder(u64 *i, struct zone *zone, unsigned long *start_pfn,
>>  		break;
>>  	t = min(mo_pfn, epfn);
>> -	deferred_free_pages(spfn, t);
>> +	deferred_free_pages(spfn, t - spfn);
>>  	if (mo_pfn <= epfn)
>>  		break;
>
>Looks like a really nice cleanup!
>

Glad you like it :-)

>--
>Cheers,
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me