On 8/13/19 5:37 AM, David Rientjes wrote:
> After commit 907ec5fca3dc ("mm: zero remaining unavailable struct pages"),
> struct page of reserved memory is zeroed. This causes page->flags to be 0
> and fixes issues related to reading /proc/kpageflags, for example, of
> reserved memory.
>
> The VM_BUG_ON() in move_freepages_block(), however, assumes that
> page_zone() is meaningful even for reserved memory. That assumption is no
> longer true after the aforementioned commit.

How come move_freepages_block() gets called on reserved memory in the
first place?

> There's no reason why move_freepages_block() should be testing the
> legitimacy of page_zone() for reserved memory; its scope is limited only
> to pages on the zone's freelist.
>
> Note that pfn_valid() can be true for reserved memory: there is a backing
> struct page. The check for page_to_nid(page) is also buggy, but reserved
> memory normally only appears on node 0, so the zeroing doesn't affect this.
>
> Move the debug checks to after verifying PageBuddy is true. This isolates
> the scope of the checks to only the buddy pages on the zone's freelist
> which move_freepages_block() is operating on. In this case, an
> incorrect node or zone is a bug worthy of being warned about (and the
> examination of struct page is acceptable because this memory is not
> reserved).
>
> Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
> ---
>  mm/page_alloc.c | 19 ++++-------------
>  1 file changed, 4 insertions(+), 15 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2238,27 +2238,12 @@ static int move_freepages(struct zone *zone,
>  	unsigned int order;
>  	int pages_moved = 0;
>
> -#ifndef CONFIG_HOLES_IN_ZONE
> -	/*
> -	 * page_zone is not safe to call in this context when
> -	 * CONFIG_HOLES_IN_ZONE is set. This bug check is probably redundant
> -	 * anyway as we check zone boundaries in move_freepages_block().
> -	 * Remove at a later date when no bug reports exist related to
> -	 * grouping pages by mobility
> -	 */
> -	VM_BUG_ON(pfn_valid(page_to_pfn(start_page)) &&
> -		  pfn_valid(page_to_pfn(end_page)) &&
> -		  page_zone(start_page) != page_zone(end_page));
> -#endif
>  	for (page = start_page; page <= end_page;) {
>  		if (!pfn_valid_within(page_to_pfn(page))) {
>  			page++;
>  			continue;
>  		}
>
> -		/* Make sure we are not inadvertently changing nodes */
> -		VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
> -
>  		if (!PageBuddy(page)) {
>  			/*
>  			 * We assume that pages that could be isolated for
> @@ -2273,6 +2258,10 @@ static int move_freepages(struct zone *zone,
>  			continue;
>  		}
>
> +		/* Make sure we are not inadvertently changing nodes */
> +		VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
> +		VM_BUG_ON_PAGE(page_zone(page) != zone, page);

The latter check implies the former, so if it's to stay, the first one
could be removed and the comment adjusted s/nodes/zones/.

> +
>  		order = page_order(page);
>  		move_to_free_area(page, &zone->free_area[order], migratetype);
>  		page += 1 << order;
>