On Mon, Oct 28, 2024 at 5:28 PM Jordan Rife <jrife@xxxxxxxxxx> wrote:
>
>
> 1. Applied my patch from [1] to prevent any failures resulting from the
> as-of-yet unpatched BPF code that uses call_rcu(). This lets us
...
> [1]: https://lore.kernel.org/bpf/20241023145640.1499722-1-jrife@xxxxxxxxxx/
> [2]: https://lore.kernel.org/bpf/67121037.050a0220.10f4f4.000f.GAE@xxxxxxxxxx/
> [3]: https://syzkaller.appspot.com/x/repro.syz?x=153ef887980000
>
>
> [ 687.323615][T16276] ==================================================================
> [ 687.325235][T16276] BUG: KFENCE: use-after-free read in __traceiter_sys_enter+0x30/0x50
> [ 687.325235][T16276]
> [ 687.327193][T16276] Use-after-free read at 0xffff88807ec60028 (in kfence-#47):
> [ 687.328404][T16276] __traceiter_sys_enter+0x30/0x50
> [ 687.329338][T16276] syscall_trace_enter+0x1ea/0x2b0
> [ 687.330021][T16276] do_syscall_64+0x1ec/0x250
> [ 687.330816][T16276] entry_SYSCALL_64_after_hwframe+0x77/0x7f
> [ 687.331826][T16276]
> [ 687.332291][T16276] kfence-#47: 0xffff88807ec60000-0xffff88807ec60057, size=88, cache=kmalloc-96
> [ 687.332291][T16276]
> [ 687.334265][T16276] allocated by task 16281 on cpu 1 at 683.953385s (3.380878s ago):
> [ 687.335615][T16276] tracepoint_add_func+0x28a/0xd90
> [ 687.336424][T16276] tracepoint_probe_register_prio_may_exist+0xa2/0xf0
> [ 687.337416][T16276] bpf_probe_register+0x186/0x200
> [ 687.338174][T16276] bpf_raw_tp_link_attach+0x21f/0x540
> [ 687.339233][T16276] __sys_bpf+0x393/0x4fa0
> [ 687.340042][T16276] __x64_sys_bpf+0x78/0xc0
> [ 687.340801][T16276] do_syscall_64+0xcb/0x250
> [ 687.341623][T16276] entry_SYSCALL_64_after_hwframe+0x77/0x7f

I think the stack trace points out that the patch [1] isn't really fixing it.
The UAF is on an access to the bpf_link in __traceiter_sys_enter, while your
patch [1] and all of the attempts to "fix" it were only delaying the freeing
of the bpf_prog. The fact that the issue no longer reproduces is just luck.
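
To make the distinction concrete, here is roughly what the tracepoint macros
generate for the iterator. This is a paraphrase from memory of the
__DECLARE_TRACE()/DEFINE_TRACE() machinery in include/linux/tracepoint.h,
with names simplified, not the literal source: the loop walks the
tracepoint_func array that tracepoint_add_func() allocated and hands each
entry's ->data, which for a raw_tp attachment is the bpf_raw_tp_link, to the
probe, so delaying only the freeing of the prog doesn't cover this access.

	/* Paraphrase of the generated __traceiter_sys_enter(); the real
	 * body comes from the tracepoint macros, names simplified here.
	 */
	int __traceiter_sys_enter(void *__data, struct pt_regs *regs, long id)
	{
		struct tracepoint_func *it_func_ptr;
		void *it_func;

		/* funcs is the array kmalloc'ed by tracepoint_add_func(),
		 * i.e. the kfence-#47 object in the report above.
		 */
		it_func_ptr = rcu_dereference_raw(__tracepoint_sys_enter.funcs);
		if (it_func_ptr) {
			do {
				it_func = READ_ONCE(it_func_ptr->func);
				/* For a raw_tp attachment ->data is the
				 * bpf_raw_tp_link registered via
				 * bpf_probe_register(), not the bpf_prog.
				 */
				__data = it_func_ptr->data;
				((void (*)(void *, struct pt_regs *, long))it_func)(__data, regs, id);
			} while ((++it_func_ptr)->func);
		}
		return 0;
	}

Which, I think, is also why the report shows the read landing in the object
allocated by tracepoint_add_func() rather than anywhere near the prog.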