On Thu, Jun 21, 2018 at 11:12 PM, Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
> On Thu, Jun 21, 2018 at 11:08 PM, Naoya Horiguchi
> <n-horiguchi@xxxxxxxxxxxxx> wrote:
>> Reading /proc/kpageflags for pfns allocated by pmem namespace triggers
>> kernel panic with a message like "BUG: unable to handle kernel paging
>> request at fffffffffffffffe".
>>
>> The first few pages (controlled by altmap passed to memmap_init_zone())
>> in the ZONE_DEVICE can skip struct page initialization, which causes
>> the reported issue.
>>
>> This patch simply adds some initialization code for them.
>>
>> Fixes: 4b94ffdc4163 ("x86, mm: introduce vmem_altmap to augment vmemmap_populate()")
>> Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
>> ---
>>  mm/page_alloc.c | 10 +++++++++-
>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git v4.17-mmotm-2018-06-07-16-59/mm/page_alloc.c v4.17-mmotm-2018-06-07-16-59_patched/mm/page_alloc.c
>> index 1772513..0b36afe 100644
>> --- v4.17-mmotm-2018-06-07-16-59/mm/page_alloc.c
>> +++ v4.17-mmotm-2018-06-07-16-59_patched/mm/page_alloc.c
>> @@ -5574,8 +5574,16 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>>                  * Honor reservation requested by the driver for this ZONE_DEVICE
>>                  * memory
>>                  */
>> -               if (altmap && start_pfn == altmap->base_pfn)
>> +               if (altmap && start_pfn == altmap->base_pfn) {
>> +                       unsigned long i;
>> +
>> +                       for (i = 0; i < altmap->reserve; i++) {
>> +                               page = pfn_to_page(start_pfn + i);
>> +                               __init_single_page(page, start_pfn + i, zone, nid);
>> +                               SetPageReserved(page);
>> +                       }
>>                         start_pfn += altmap->reserve;
>> +               }
>
> No, unfortunately this will clobber metadata that lives in that
> reserved area, see __nvdimm_setup_pfn().

I think the kpageflags code needs to look up the dev_pagemap in the
ZONE_DEVICE case and honor the altmap.
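For concreteness, the direction suggested above (make the /proc/kpageflags reader honor the altmap, rather than initializing reserved pages in memmap_init_zone()) might look roughly like the untested sketch below. get_dev_pagemap()/put_dev_pagemap() are real kernel APIs of this era, but the exact way the altmap reservation is reachable from a struct dev_pagemap here is an assumption; treat this as pseudocode illustrating the idea, not a proposed patch.

```c
/*
 * UNTESTED SKETCH, not a patch: before the kpageflags reader touches a
 * struct page, check whether the pfn falls inside a ZONE_DEVICE altmap
 * reservation, whose memmap was never initialized.  The pgmap->altmap
 * field access is an assumption about the dev_pagemap layout.
 */
static bool pfn_in_altmap_reserve(unsigned long pfn)
{
	struct dev_pagemap *pgmap = get_dev_pagemap(pfn, NULL);
	bool reserved = false;

	if (pgmap) {
		/* hypothetical: reserved pages at the head of the device range */
		if (pfn >= pgmap->altmap.base_pfn &&
		    pfn <  pgmap->altmap.base_pfn + pgmap->altmap.reserve)
			reserved = true;
		put_dev_pagemap(pgmap);
	}
	return reserved;
}
```

kpageflags_read() could then report zero flags (or skip the entry) for such pfns instead of dereferencing an uninitialized struct page, leaving the driver metadata in the reserved area untouched.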