Hi Alexei Starovoitov and BPF experts,

Greetings!

There is a KASAN slab-use-after-free read in arena_vm_close in the v6.10-rc3 kernel.

All detailed info: https://github.com/xupengfe/syzkaller_logs/tree/main/240613_011057_arena_vm_close
Syzkaller reproducer code: https://github.com/xupengfe/syzkaller_logs/blob/main/240613_011057_arena_vm_close/repro.c
Syzkaller syscall repro steps: https://github.com/xupengfe/syzkaller_logs/blob/main/240613_011057_arena_vm_close/repro.prog
Syzkaller report: https://github.com/xupengfe/syzkaller_logs/blob/main/240613_011057_arena_vm_close/repro.report
Kconfig (make olddefconfig): https://github.com/xupengfe/syzkaller_logs/blob/main/240613_011057_arena_vm_close/kconfig_origin
Bisect info: https://github.com/xupengfe/syzkaller_logs/blob/main/240613_011057_arena_vm_close/bisect_info.log
Issue dmesg: https://github.com/xupengfe/syzkaller_logs/blob/main/240613_011057_arena_vm_close/83a7eefedc9b56fe7bfeff13b6c7356688ffa670_dmesg.log
v6.10-rc3 bzImage: https://github.com/xupengfe/syzkaller_logs/raw/main/240613_011057_arena_vm_close/bzImage_83a7eefedc9b56fe7bfeff13b6c7356688ffa670.tar.gz

Bisected and found the first bad commit:
317460317a02 ("bpf: Introduce bpf_arena")

[ 25.142953] ==================================================================
[ 25.143738] BUG: KASAN: slab-use-after-free in arena_vm_close+0x1b1/0x1d0
[ 25.144474] Read of size 8 at addr ffff88800d3c93c8 by task repro/728
[ 25.145091]
[ 25.145266] CPU: 0 PID: 728 Comm: repro Not tainted 6.10.0-rc3-83a7eefedc9b #1
[ 25.145942] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[ 25.147003] Call Trace:
[ 25.147256]  <TASK>
[ 25.147482]  dump_stack_lvl+0xea/0x150
[ 25.147883]  print_report+0xce/0x610
[ 25.148267]  ? arena_vm_close+0x1b1/0x1d0
[ 25.148668]  ? kasan_complete_mode_report_info+0x80/0x200
[ 25.149168]  ? arena_vm_close+0x1b1/0x1d0
[ 25.149555]  kasan_report+0xcc/0x110
[ 25.149904]  ? arena_vm_close+0x1b1/0x1d0
[ 25.150246]  __asan_report_load8_noabort+0x18/0x20
[ 25.150616]  arena_vm_close+0x1b1/0x1d0
[ 25.150922]  ? __pfx_arena_vm_close+0x10/0x10
[ 25.151266]  remove_vma+0x95/0x190
[ 25.151552]  exit_mmap+0x4bf/0xb00
[ 25.151834]  ? __pfx_exit_mmap+0x10/0x10
[ 25.152110]  ? __kasan_check_write+0x18/0x20
[ 25.152405]  ? __pfx___mutex_unlock_slowpath+0x10/0x10
[ 25.152768]  ? mutex_unlock+0x16/0x20
[ 25.153024]  __mmput+0xde/0x3e0
[ 25.153262]  mmput+0x74/0x90
[ 25.153471]  do_exit+0x9fb/0x29f0
[ 25.153705]  ? lock_release+0x418/0x840
[ 25.153983]  ? __pfx_do_exit+0x10/0x10
[ 25.154239]  ? __this_cpu_preempt_check+0x21/0x30
[ 25.154561]  ? _raw_spin_unlock_irq+0x2c/0x60
[ 25.154858]  ? lockdep_hardirqs_on+0x89/0x110
[ 25.155157]  do_group_exit+0xe4/0x2c0
[ 25.155413]  __x64_sys_exit_group+0x4d/0x60
[ 25.155697]  x64_sys_call+0x1a1f/0x20d0
[ 25.155965]  do_syscall_64+0x6d/0x140
[ 25.156220]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 25.156566] RIP: 0033:0x7f1343d18a4d
[ 25.156817] Code: Unable to access opcode bytes at 0x7f1343d18a23.
[ 25.157221] RSP: 002b:00007ffdc66dc268 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[ 25.157713] RAX: ffffffffffffffda RBX: 00007f1343df69e0 RCX: 00007f1343d18a4d
[ 25.158174] RDX: 00000000000000e7 RSI: ffffffffffffff80 RDI: 0000000000000000
[ 25.158634] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000020
[ 25.159092] R10: 00007ffdc66dc110 R11: 0000000000000246 R12: 00007f1343df69e0
[ 25.159557] R13: 00007f1343dfbf00 R14: 0000000000000001 R15: 00007f1343dfbee8
[ 25.160023]  </TASK>
[ 25.160176]
[ 25.160287] Allocated by task 728:
[ 25.160519]  kasan_save_stack+0x2c/0x60
[ 25.160782]  kasan_save_track+0x18/0x40
[ 25.161043]  kasan_save_alloc_info+0x3c/0x50
[ 25.161330]  __kasan_kmalloc+0x88/0xa0
[ 25.161588]  kmalloc_trace_noprof+0x1b9/0x3c0
[ 25.161891]  arena_map_mmap+0x232/0x7a0
[ 25.162156]  bpf_map_mmap+0x4b5/0x9a0
[ 25.162412]  mmap_region+0x5f7/0x2740
[ 25.162666]  do_mmap+0xd6a/0x11a0
[ 25.162898]  vm_mmap_pgoff+0x1ea/0x390
[ 25.163155]  ksys_mmap_pgoff+0x3e8/0x530
[ 25.163425]  __x64_sys_mmap+0x139/0x1d0
[ 25.163691]  x64_sys_call+0x1922/0x20d0
[ 25.163950]  do_syscall_64+0x6d/0x140
[ 25.164200]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 25.164534]
[ 25.164647] Freed by task 728:
[ 25.164854]  kasan_save_stack+0x2c/0x60
[ 25.165118]  kasan_save_track+0x18/0x40
[ 25.165389]  kasan_save_free_info+0x3f/0x60
[ 25.165668]  __kasan_slab_free+0x115/0x1a0
[ 25.165946]  kfree+0xfe/0x330
[ 25.166157]  arena_vm_close+0x15e/0x1d0
[ 25.166420]  remove_vma+0x95/0x190
[ 25.166654]  do_vmi_align_munmap+0xc02/0x11f0
[ 25.166949]  do_vmi_munmap+0x22c/0x420
[ 25.167206]  __do_sys_mremap+0x7db/0x1830
[ 25.167481]  __x64_sys_mremap+0xc7/0x150
[ 25.167755]  x64_sys_call+0x1c50/0x20d0
[ 25.168014]  do_syscall_64+0x6d/0x140
[ 25.168265]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 25.168601]
[ 25.168712] The buggy address belongs to the object at ffff88800d3c93c0
[ 25.168712]  which belongs to the cache kmalloc-32 of size 32
[ 25.169492] The buggy address is located 8 bytes inside of
[ 25.169492]  freed 32-byte region [ffff88800d3c93c0, ffff88800d3c93e0)
[ 25.170253]
[ 25.170366] The buggy address belongs to the physical page:
[ 25.170726] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xd3c9
[ 25.171234] flags: 0xfffffc0000000(node=0|zone=1|lastcpupid=0x1fffff)
[ 25.171656] page_type: 0xffffefff(slab)
[ 25.171923] raw: 000fffffc0000000 ffff88800a041780 ffffea000028cc40 dead000000000002
[ 25.172421] raw: 0000000000000000 0000000080400040 00000001ffffefff 0000000000000000
[ 25.172914] page dumped because: kasan: bad access detected
[ 25.173273]
[ 25.173383] Memory state around the buggy address:
[ 25.173697]  ffff88800d3c9280: 00 00 00 fc fc fc fc fc 00 00 00 fc fc fc fc fc
[ 25.174162]  ffff88800d3c9300: fa fb fb fb fc fc fc fc 00 00 01 fc fc fc fc fc
[ 25.174628] >ffff88800d3c9380: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[ 25.175092]                                               ^
[ 25.175456]  ffff88800d3c9400: 00 00 00 fc fc fc fc fc 00 00 04 fc fc fc fc fc
[ 25.175921]  ffff88800d3c9480: 00 00 01 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[ 25.176393] ==================================================================
[ 25.177020] Disabling lock debugging due to kernel taint
[ 25.177414] Oops: general protection fault, probably for non-canonical address 0xe095fc1e8000005c: 0000 [#1] PREEMPT SMP KASAN NOPTI
[ 25.178167] KASAN: maybe wild-memory-access in range [0x04b000f4000002e0-0x04b000f4000002e7]

I hope it's helpful.

Thanks!
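P.S. In case it is useful, here is my reading of the stacks above: the "Allocated by" stack is the mmap() of the arena map fd (arena_map_mmap), and the "Freed by" stack shows arena_vm_close being reached from the munmap that mremap() performs, before exit_mmap() touches the same 32-byte object again at process exit. Below is a minimal, hypothetical user-space sketch of that kind of sequence. It is NOT the syzkaller reproducer (repro.c linked above is authoritative), it has not been verified to trigger the splat, and the details I picked (an arena of 2 pages, a shrink via mremap) are assumptions on my part, so please take it only as an illustration of the call paths:

/*
 * Hypothetical sketch only -- NOT the syzkaller reproducer.  It just walks
 * the call paths named in the KASAN stacks above: create a bpf arena map,
 * mmap() it (the "Allocated by" path), shrink the mapping with mremap()
 * (the munmap inside mremap is the "Freed by" path through arena_vm_close),
 * then exit so exit_mmap() walks the remaining vma.  Needs uapi headers
 * that define BPF_MAP_TYPE_ARENA and root inside the test VM; the map
 * sizing below (2 pages) is an arbitrary assumption.
 */
#define _GNU_SOURCE
#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	union bpf_attr attr;
	void *addr;
	int map_fd;

	/* Arena maps take no key/value; max_entries is the page count. */
	memset(&attr, 0, sizeof(attr));
	attr.map_type = BPF_MAP_TYPE_ARENA;
	attr.key_size = 0;
	attr.value_size = 0;
	attr.max_entries = 2;
	attr.map_flags = BPF_F_MMAPABLE;

	map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
	if (map_fd < 0) {
		perror("BPF_MAP_CREATE (arena)");
		return 1;
	}

	/* mmap of the arena map fd: the "Allocated by" stack (arena_map_mmap). */
	addr = mmap(NULL, 2 * page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    map_fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap(arena)");
		return 1;
	}

	/*
	 * Shrinking in place makes mremap() munmap the tail of the arena vma:
	 * the "Freed by" stack (__do_sys_mremap -> do_vmi_munmap ->
	 * remove_vma -> arena_vm_close).
	 */
	if (mremap(addr, 2 * page_size, page_size, 0) == MAP_FAILED)
		perror("mremap(shrink)");

	/* Process exit then runs exit_mmap() over whatever vmas remain. */
	return 0;
}

If this sketch is useful at all, it would only be as a starting point for discussion; whether it actually trips KASAN depends on details that only the linked repro.c captures.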
---

If you don't need the following environment to reproduce the problem, or if you already have a reproduction environment, please ignore the following information.

How to reproduce:
git clone https://gitlab.com/xupengfe/repro_vm_env.git
cd repro_vm_env
tar -xvf repro_vm_env.tar.gz
cd repro_vm_env; ./start3.sh  // it needs qemu-system-x86_64; I used v7.1.0
  // start3.sh will load bzImage_2241ab53cbb5cdb08a6b2d4688feb13971058f65, a v6.2-rc5 kernel
  // You can change bzImage_xxx to whichever kernel you want
  // For a different qemu version you may need to remove the line "-drive if=pflash,format=raw,readonly=on,file=./OVMF_CODE.fd \"

You can use the command below to log in; there is no password for root:
ssh -p 10023 root@localhost

After logging in to the VM (virtual machine) successfully, you can transfer the reproducer binary to the VM as below and reproduce the problem inside the VM:
gcc -pthread -o repro repro.c
scp -P 10023 repro root@localhost:/root/

Get the bzImage for the target kernel:
Copy the target kconfig to kernel_src/.config, then:
make olddefconfig
make -jx bzImage    // x should be equal to or less than the number of CPUs your PC has

Point start3.sh at the new bzImage to load the target kernel in the VM.

Tips:
If you already have qemu-system-x86_64, please ignore the info below.
If you want to install qemu v7.1.0:
git clone https://github.com/qemu/qemu.git
cd qemu
git checkout -f v7.1.0
mkdir build
cd build
yum install -y ninja-build.x86_64
yum -y install libslirp-devel.x86_64
../configure --target-list=x86_64-softmmu --enable-kvm --enable-vnc --enable-gtk --enable-sdl --enable-usb-redir --enable-slirp
make
make install

Best Regards,
Thanks!