On 2022/9/8 11:06, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Thu, Sep 08, 2022 at 10:19:03AM +0800, Miaohe Lin wrote:
>> On 2022/9/7 20:11, Naoya Horiguchi wrote:
> ...
>>> >From 8a5c284df732943065d23838090d15c94cd10395 Mon Sep 17 00:00:00 2001
>>> From: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
>>> Date: Wed, 7 Sep 2022 20:58:44 +0900
>>> Subject: [PATCH] mm/huge_memory: use pfn_to_online_page() in
>>>  split_huge_pages_all()
>>>
>>> A NULL pointer dereference is triggered when calling thp split via debugfs
>>> on a system with offlined memory blocks. With the debug option enabled,
>>> the following kernel messages are printed out:
>>>
>>> page:00000000467f4890 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x121c000
>>> flags: 0x17fffc00000000(node=0|zone=2|lastcpupid=0x1ffff)
>>> raw: 0017fffc00000000 0000000000000000 dead000000000122 0000000000000000
>>> raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
>>> page dumped because: unmovable page
>>> page:000000007d7ab72e is uninitialized and poisoned
>>> page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
>>> ------------[ cut here ]------------
>>> kernel BUG at include/linux/mm.h:1248!
>>> invalid opcode: 0000 [#1] PREEMPT SMP PTI
>>> CPU: 16 PID: 20964 Comm: bash Tainted: G            I        6.0.0-rc3-foll-numa+ #41
>>> ...
>>> RIP: 0010:split_huge_pages_write+0xcf4/0xe30
>>>
>>> This shows that page_to_nid() in page_zone() is unexpectedly called for an
>>> offlined memmap.
>>>
>>> Use pfn_to_online_page() to get the struct page in the PFN walker.
>>
>> With the changes proposed by David, this patch looks good to me.
>>
>> Reviewed-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
>
> Thank you.
>
>>
>> BTW: IMHO, there might be many similar places in the code that need to take
>> care of memory hotremove, where the *pfn is operated on directly* without
>> protection against memory hotremove.
>
> I had a similar concern, but there seem to be many places doing PFN walks,
> so checking one by one whether each of them can walk over offlined memory
> requires much effort.

Yes, that will be heavy work. We could fix them one by one as they come up. ;)

Thanks,
Miaohe Lin

>
> Thanks,
> Naoya Horiguchi
>
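
[For reference, the safe PFN-walk pattern discussed in this thread looks
roughly like the sketch below. This is only a minimal illustration: the
function name walk_pfn_range() and its arguments are invented for the
example and are not taken from the actual patch.]

#include <linux/mm.h>
#include <linux/memory_hotplug.h>

/*
 * Hypothetical PFN walker. pfn_to_online_page() returns NULL for holes
 * and for offlined sections whose memmap may be uninitialized, so the
 * walker skips them instead of dereferencing pfn_to_page() blindly.
 */
static void walk_pfn_range(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		struct page *page = pfn_to_online_page(pfn);

		if (!page)
			continue;	/* hole or offlined memmap, skip */

		if (!get_page_unless_zero(page))
			continue;	/* free page, nothing to do */

		/*
		 * The memmap is online and the page is pinned, so it is
		 * safe to look at page_zone()/page_to_nid() here.
		 */

		put_page(page);
	}
}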