On 2025/1/23 17:55, Oscar Salvador wrote:
> On Wed, Jan 22, 2025 at 02:11:51PM +0800, Liu Shixin wrote:
>> I found a NULL pointer dereference as follows:
>>
>> BUG: kernel NULL pointer dereference, address: 0000000000000028
>> #PF: supervisor read access in kernel mode
>> #PF: error_code(0x0000) - not-present page
>> PGD 0 P4D 0
>> Oops: Oops: 0000 [#1] SMP PTI
>> CPU: 5 UID: 0 PID: 5964 Comm: sh Kdump: loaded Not tainted 6.13.0-dirty #20
>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.
>> RIP: 0010:has_unmovable_pages+0x184/0x360
>> ...
>> Call Trace:
>>  <TASK>
>>  set_migratetype_isolate+0xd1/0x180
>>  start_isolate_page_range+0xd2/0x170
>>  alloc_contig_range_noprof+0x101/0x660
>>  alloc_contig_pages_noprof+0x238/0x290
>>  alloc_gigantic_folio.isra.0+0xb6/0x1f0
>>  only_alloc_fresh_hugetlb_folio.isra.0+0xf/0x60
>>  alloc_pool_huge_folio+0x80/0xf0
>>  set_max_huge_pages+0x211/0x490
>>  __nr_hugepages_store_common+0x5f/0xe0
>>  nr_hugepages_store+0x77/0x80
>>  kernfs_fop_write_iter+0x118/0x200
>>  vfs_write+0x23c/0x3f0
>>  ksys_write+0x62/0xe0
>>  do_syscall_64+0x5b/0x170
>>  entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>
>> Since has_unmovable_pages() calls folio_hstate() without holding hugetlb_lock,
>> there is a window between the PageHuge() check and folio_hstate() in which the
>> HugeTLB page can be freed. There is no need to take hugetlb_lock here, as the
>> HugeTLB page can be freed from a lot of places anyway. It is enough to unfold
>> folio_hstate() and add a NULL check before calling
>> hugepage_migration_supported().
>>
>> Fixes: 464c7ffbcb16 ("mm/hugetlb: filter out hugetlb pages if HUGEPAGE migration is not supported.")
>> Signed-off-by: Liu Shixin <liushixin2@xxxxxxxxxx>
> I wonder whether we should place a comment in hugepage_migration_supported stating
> that the hstate _must_ be valid, as we do not perform any sanity check further
> down the road.

Most of the functions in hugetlb.h assume the hstate is valid, and in fact it is.
So maybe it is enough to add a comment just in this special caller.

> Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>

Thanks for the review.
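
For reference, the unfolded check in has_unmovable_pages() would look roughly like
the sketch below. This is only an illustration of the approach described above:
folio_hstate() boils down to size_to_hstate(folio_size(folio)), and size_to_hstate()
can return NULL once the folio has been freed. The local variable name and exact
placement are illustrative; the actual hunk in the patch may differ:

	struct folio *folio = page_folio(page);
	struct hstate *h;

	if (PageHuge(page)) {
		/*
		 * hugetlb_lock is not held here, so the folio can be freed
		 * (and change size) after the PageHuge() check above, in
		 * which case size_to_hstate() returns NULL.  Check for that
		 * instead of handing a NULL hstate to
		 * hugepage_migration_supported().
		 */
		h = size_to_hstate(folio_size(folio));
		if (h && !hugepage_migration_supported(h))
			return page;
	}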