We should not assume anything about ->free_iov just from
REQ_F_ASYNC_DATA but rather rely on REQ_F_NEED_CLEANUP, as we may
allocate ->async_data but a failed init would leave the field in an
inconsistent state. The easiest solution is to remove the
REQ_F_NEED_CLEANUP clearing and the ->async_data dealloc from
io_sendrecv_fail() and let io_send_zc_cleanup() do the job. The catch
here is that we also need to prevent double notif flushing, so test it
for NULL and zero it where it's flushed.

BUG: KASAN: use-after-free in io_sendrecv_fail+0x3b0/0x3e0 io_uring/net.c:1221
Write of size 8 at addr ffff8880771b4080 by task syz-executor.3/30199

CPU: 1 PID: 30199 Comm: syz-executor.3 Not tainted 6.0.0-rc6-next-20220923-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:284 [inline]
 print_report+0x15e/0x45d mm/kasan/report.c:395
 kasan_report+0xbb/0x1f0 mm/kasan/report.c:495
 io_sendrecv_fail+0x3b0/0x3e0 io_uring/net.c:1221
 io_req_complete_failed+0x155/0x1b0 io_uring/io_uring.c:873
 io_drain_req io_uring/io_uring.c:1648 [inline]
 io_queue_sqe_fallback.cold+0x29f/0x788 io_uring/io_uring.c:1931
 io_submit_sqe io_uring/io_uring.c:2160 [inline]
 io_submit_sqes+0x1180/0x1df0 io_uring/io_uring.c:2276
 __do_sys_io_uring_enter+0xac6/0x2410 io_uring/io_uring.c:3216
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Fixes: c4c0009e0b56e ("io_uring/net: combine fail handlers")
Reported-by: syzbot+4c597a574a3f5a251bda@xxxxxxxxxxxxxxxxxxxxxxxxx
Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
 io_uring/net.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 757a300578f4..e9e66bace45f 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -915,9 +915,11 @@ void io_send_zc_cleanup(struct io_kiocb *req)
 		io = req->async_data;
 		kfree(io->free_iov);
 	}
-	zc->notif->flags |= REQ_F_CQE_SKIP;
-	io_notif_flush(zc->notif);
-	zc->notif = NULL;
+	if (zc->notif) {
+		zc->notif->flags |= REQ_F_CQE_SKIP;
+		io_notif_flush(zc->notif);
+		zc->notif = NULL;
+	}
 }
 
 int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
@@ -1215,12 +1217,6 @@ void io_sendrecv_fail(struct io_kiocb *req)
 		io_notif_flush(sr->notif);
 		sr->notif = NULL;
 	}
-	if (req_has_async_data(req)) {
-		io = req->async_data;
-		kfree(io->free_iov);
-		io->free_iov = NULL;
-	}
-	req->flags &= ~REQ_F_NEED_CLEANUP;
 	io_req_set_res(req, res, req->cqe.flags);
 }
 
-- 
2.37.2
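
For quick reference, this is roughly what io_send_zc_cleanup() looks like with
the patch applied. It is a sketch reconstructed from the hunk context above,
not the verbatim upstream function; the io_kiocb_to_cmd() local setup is
assumed from surrounding code rather than shown in the diff.

void io_send_zc_cleanup(struct io_kiocb *req)
{
	struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg);
	struct io_async_msghdr *io;

	/* ->async_data dealloc now happens only here, not in io_sendrecv_fail() */
	if (req_has_async_data(req)) {
		io = req->async_data;
		kfree(io->free_iov);
	}
	/* io_sendrecv_fail() may have already flushed and zeroed the notif */
	if (zc->notif) {
		zc->notif->flags |= REQ_F_CQE_SKIP;
		io_notif_flush(zc->notif);
		zc->notif = NULL;
	}
}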