On Fri, Apr 12, 2024 at 04:34:48PM +0900, Masami Hiramatsu wrote:
>On Fri, 12 Apr 2024 10:41:04 +0800
>qiang4.zhang@xxxxxxxxxxxxxxx wrote:
>
>> From: Qiang Zhang <qiang4.zhang@xxxxxxxxx>
>>
>> By the time the xbc memory is freed, memblock has already handed the
>> memory over to the buddy allocator, so it makes no sense to free it
>> back to memblock. The memblock_free() call in xbc_exit() even causes
>> UAF bugs on architectures with CONFIG_ARCH_KEEP_MEMBLOCK disabled,
>> such as x86. The following KASAN log shows this case.
>>
>> [ 9.410890] ==================================================================
>> [ 9.418962] BUG: KASAN: use-after-free in memblock_isolate_range+0x12d/0x260
>> [ 9.426850] Read of size 8 at addr ffff88845dd30000 by task swapper/0/1
>>
>> [ 9.435901] CPU: 9 PID: 1 Comm: swapper/0 Tainted: G U 6.9.0-rc3-00208-g586b5dfb51b9 #5
>> [ 9.446403] Hardware name: Intel Corporation RPLP LP5 (CPU:RaptorLake)/RPLP LP5 (ID:13), BIOS IRPPN02.01.01.00.00.19.015.D-00000000 Dec 28 2023
>> [ 9.460789] Call Trace:
>> [ 9.463518]  <TASK>
>> [ 9.465859]  dump_stack_lvl+0x53/0x70
>> [ 9.469949]  print_report+0xce/0x610
>> [ 9.473944]  ? __virt_addr_valid+0xf5/0x1b0
>> [ 9.478619]  ? memblock_isolate_range+0x12d/0x260
>> [ 9.483877]  kasan_report+0xc6/0x100
>> [ 9.487870]  ? memblock_isolate_range+0x12d/0x260
>> [ 9.493125]  memblock_isolate_range+0x12d/0x260
>> [ 9.498187]  memblock_phys_free+0xb4/0x160
>> [ 9.502762]  ? __pfx_memblock_phys_free+0x10/0x10
>> [ 9.508021]  ? mutex_unlock+0x7e/0xd0
>> [ 9.512111]  ? __pfx_mutex_unlock+0x10/0x10
>> [ 9.516786]  ? kernel_init_freeable+0x2d4/0x430
>> [ 9.521850]  ? __pfx_kernel_init+0x10/0x10
>> [ 9.526426]  xbc_exit+0x17/0x70
>> [ 9.529935]  kernel_init+0x38/0x1e0
>> [ 9.533829]  ? _raw_spin_unlock_irq+0xd/0x30
>> [ 9.538601]  ret_from_fork+0x2c/0x50
>> [ 9.542596]  ? __pfx_kernel_init+0x10/0x10
>> [ 9.547170]  ret_from_fork_asm+0x1a/0x30
>> [ 9.551552]  </TASK>
>>
>> [ 9.555649] The buggy address belongs to the physical page:
>> [ 9.561875] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x1 pfn:0x45dd30
>> [ 9.570821] flags: 0x200000000000000(node=0|zone=2)
>> [ 9.576271] page_type: 0xffffffff()
>> [ 9.580167] raw: 0200000000000000 ffffea0011774c48 ffffea0012ba1848 0000000000000000
>> [ 9.588823] raw: 0000000000000001 0000000000000000 00000000ffffffff 0000000000000000
>> [ 9.597476] page dumped because: kasan: bad access detected
>>
>> [ 9.605362] Memory state around the buggy address:
>> [ 9.610714]  ffff88845dd2ff00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>> [ 9.618786]  ffff88845dd2ff80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>> [ 9.626857] >ffff88845dd30000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>> [ 9.634930]                    ^
>> [ 9.638534]  ffff88845dd30080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>> [ 9.646605]  ffff88845dd30100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>> [ 9.654675] ==================================================================
>>
>
>Oops, good catch! Indeed, it is too late to use memblock_free().
>
>BTW, is it safe to call memblock_free_late() in the early boot stage?
>xbc_free_mem() will also be called from xbc_init().
>If not, we need a custom internal __xbc_exit() or xbc_cleanup(),
>which is called from xbc_init() and uses memblock_free().

No, memblock_free_late() can't be used that early. "Exit" and "Cleanup"
look too much alike and would be confusing. Adding an early flag, i.e.
_xbc_exit(bool early), seems clearer.

I will push a V2 for this.
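
Roughly, what I have in mind is the sketch below (just a sketch; the
helper name and the exact shape of the V2 patch may differ): pass a
flag saying whether we are still in the early boot stage and pick the
matching memblock interface.

/* Illustrative only: choose the free routine based on the boot stage. */
static void __init _xbc_free_mem(void *addr, size_t size, bool early)
{
	if (early)
		/* Early boot: the memory is still owned by memblock. */
		memblock_free(addr, size);
	else
		/* Memory was already handed over to the buddy allocator. */
		memblock_free_late(__pa(addr), size);
}

Then the error path in xbc_init() would pass early = true, while the
normal xbc_exit() call after boot would pass early = false.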
>
>Thank you,
>
>
>> Cc: Stable@xxxxxxxxxxxxxxx
>> Signed-off-by: Qiang Zhang <qiang4.zhang@xxxxxxxxx>
>> ---
>>  lib/bootconfig.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/bootconfig.c b/lib/bootconfig.c
>> index c59d26068a64..4524ee944df0 100644
>> --- a/lib/bootconfig.c
>> +++ b/lib/bootconfig.c
>> @@ -63,7 +63,7 @@ static inline void * __init xbc_alloc_mem(size_t size)
>>
>>  static inline void __init xbc_free_mem(void *addr, size_t size)
>>  {
>> -	memblock_free(addr, size);
>> +	memblock_free_late(__pa(addr), size);
>>  }
>>
>>  #else /* !__KERNEL__ */
>> --
>> 2.39.2
>>
>>
>
>
>--
>Masami Hiramatsu (Google) <mhiramat@xxxxxxxxxx>