On 14.08.19 17:41, David Hildenbrand wrote:
> Commit a9cd410a3d29 ("mm/page_alloc.c: memory hotplug: free pages as higher
> order") assumed that any PFN we get via memory resources is aligned to
> MAX_ORDER - 1. I am not convinced that is always true. Let's play safe,
> check the alignment and fall back to single pages.
> 
> Cc: Arun KS <arunks@xxxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Oscar Salvador <osalvador@xxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>  mm/memory_hotplug.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 63b1775f7cf8..f245fb50ba7f 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -646,6 +646,9 @@ static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
>  	 */
>  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
>  		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
> +		/* __free_pages_core() wants pfns to be aligned to the order */
> +		if (unlikely(!IS_ALIGNED(pfn, 1ul << order)))
> +			order = 0;
>  		(*online_page_callback)(pfn_to_page(pfn), order);
>  	}
> 

@Michal, if you insist, we can drop this patch. "break first and fix later"
is not part of my DNA :)

-- 

Thanks,

David / dhildenb