On Fri, Jun 11, 2021 at 08:17:02AM -0400, Zi Yan wrote:
> On 11 Jun 2021, at 4:34, Mel Gorman wrote:
> 
> > On Thu, Jun 10, 2021 at 07:40:47AM -0400, Zi Yan wrote:
> >>>> qemu-system-x86_64 -kernel ~/repos/linux-1gb-thp/arch/x86/boot/bzImage \
> >>>> -drive file=~/qemu-image/vm.qcow2,if=virtio \
> >>>> -append "nokaslr root=/dev/vda1 rw console=ttyS0 " \
> >>>> -pidfile vm.pid \
> >>>> -netdev user,id=mynet0,hostfwd=tcp::11022-:22 \
> >>>> -device virtio-net-pci,netdev=mynet0 \
> >>>> -m 16g -smp 6 -cpu host -enable-kvm -nographic \
> >>>> -machine hmat=on -object memory-backend-ram,size=8g,id=m0 \
> >>>> -object memory-backend-ram,size=8g,id=m1 \
> >>>> -numa node,memdev=m0,nodeid=0 -numa node,memdev=m1,nodeid=1
> >>>>
> >>>> The attached config has THP disabled. The VM cannot boot with THP enabled,
> >>>> either.
> >>>>
> >>>
> >>> There is not a lot of information to go on here. Can you confirm that a
> >>> revert of that specific patch from mmotm-2021-06-07-18-33 also boots? It
> >>> sounds like your console log is empty, does anything useful appear if
> >>> you add "earlyprintk=serial,ttyS0,115200" to the kernel command line?
> >>
> >> Sure. I can confirm that reverting the patch makes the VM boot.
> >> The important information I forgot to mention is that after I remove
> >> the NUMA setting in the QEMU, the VM can boot too.
> >>
> >> earlyprintk gave the error message (page out of zone boundary) when the VM could not boot:
> >>
> >
> > Can you test with the following patch please?
> >
> > --8<---
> > mm/page_alloc: Allow high-order pages to be stored on the per-cpu lists -fix
> >
> > Zi Ya reported the following problem
> s/Zi Ya/Zi Yan/

Sorry about that typo.

> >
> > I am not able to boot my QEMU VM with v5.13-rc5-mmotm-2021-06-07-18-33.
> > git bisect points to this patch. The VM got stuck at "Booting from ROM"
> >
> > "This patch" is "mm/page_alloc: Allow high-order pages to be stored on
> > the per-cpu lists" and earlyprintk showed the following
> >
> > [ 0.161237] Memory: 16396772K/16776684K available (18452K kernel code, 3336K rwdata, 8000K rodata, 1852K init, 1444K bss, 379656K reserved, 0K cma-reserve)
> > [ 0.162451] page 0x100041 outside node 1 zone Normal [ 0x240000 - 0x440000 ]
> > [ 0.163057] page:(____ptrval____) refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x100041
> >
> > The patch is allowing pages from different zones to exist on the PCP
> > lists which is not allowed. Review found two problems -- first, the
> > bulk allocator is not using the correct PCP lists. It happens to work
> > because it's order-0 only but it's wrong. The real problem is that the
> > boot pagesets can store free pages which is not allowed.
> >
> > Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> > ---
> >  mm/page_alloc.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index d6d90f046c94..8472bae567f0 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3625,7 +3625,15 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
> >  			int batch = READ_ONCE(pcp->batch);
> >  			int alloced;
> >
> > -			batch = max(batch >> order, 2);
> > +			/*
> > +			 * Scale batch relative to order if batch implies
> > +			 * free pages can be stored on the PCP. Batch can
> > +			 * be 1 for small zones or for boot pagesets which
> > +			 * should never store free pages as the pages may
> > +			 * belong to arbitrary zones.
> > +			 */
> > +			if (batch > 1)
> > +				batch = max(batch >> order, 2);
> >  			alloced = rmqueue_bulk(zone, order,
> >  					batch, list,
> >  					migratetype, alloc_flags);
> > @@ -5265,7 +5273,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
> >  	/* Attempt the batch allocation */
> >  	local_lock_irqsave(&pagesets.lock, flags);
> >  	pcp = this_cpu_ptr(zone->per_cpu_pageset);
> > -	pcp_list = &pcp->lists[ac.migratetype];
> > +	pcp_list = &pcp->lists[order_to_pindex(ac.migratetype, 0)];
> >
> >  	while (nr_populated < nr_pages) {
> 
> Yes. This patch solves the issue. Thanks.
> 

Thanks. As Andrew dropped the patch from mmotm, I've sent a v2 with the
fix included.

Thanks for reporting and testing!

-- 
Mel Gorman
SUSE Labs
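
[Editor's sketch] For readers following the fix, below is a minimal standalone model of the batch scaling changed in the first hunk of the patch above. It is not kernel code: the MAX macro and the sample (batch, order) pairs are purely illustrative. It only shows why the "batch > 1" guard matters: a boot pageset with batch == 1 would otherwise have its refill count bumped to 2, leaving a free page behind on a PCP list whose pages may belong to an arbitrary zone.

/*
 * Standalone model of the PCP refill batch scaling; not kernel code.
 * refill_old() is the unconditional max(batch >> order, 2) that the
 * patch replaces; refill_new() only scales when batch > 1, so a boot
 * pageset (batch == 1) never pulls in more pages than the single one
 * handed straight back to the caller.
 */
#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

static int refill_old(int batch, unsigned int order)
{
	return MAX(batch >> order, 2);
}

static int refill_new(int batch, unsigned int order)
{
	if (batch > 1)
		batch = MAX(batch >> order, 2);
	return batch;
}

int main(void)
{
	/* (batch, order) pairs: boot pageset, then an illustrative zone batch */
	const int cases[][2] = { {1, 0}, {1, 3}, {63, 0}, {63, 3} };

	for (unsigned int i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
		int batch = cases[i][0];
		unsigned int order = cases[i][1];

		printf("batch=%2d order=%u -> old refill %d, new refill %d\n",
		       batch, order, refill_old(batch, order),
		       refill_new(batch, order));
	}
	return 0;
}

With the guard in place a batch of 1 stays 1, so the refill allocates a single page that is immediately returned to the caller and nothing is left stored on the boot pageset; for larger batches the order-based scaling is unchanged.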