6.7-stable review patch. If anyone has any objections, please let me know.

------------------

From: Pavel Begunkov <asml.silence@xxxxxxxxx>

[ Upstream commit 1a8ec63b2b6c91caec87d4e132b1f71b5df342be ]

With multiple poll entries __io_queue_proc() might be running in
parallel with poll handlers and possibly task_work, we should not be
carelessly modifying req->flags there.

io_poll_double_prepare() handles a similar case with locking but it's
much easier to move it into __io_arm_poll_handler().

Cc: stable@xxxxxxxxxxxxxxx
Fixes: 595e52284d24a ("io_uring/poll: don't enable lazy wake for POLLEXCLUSIVE")
Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
Link: https://lore.kernel.org/r/455cc49e38cf32026fa1b49670be8c162c2cb583.1709834755.git.asml.silence@xxxxxxxxx
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
 io_uring/poll.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/io_uring/poll.c b/io_uring/poll.c
index 58b7556f621eb..c6f4789623cb2 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -539,14 +539,6 @@ static void __io_queue_proc(struct io_poll *poll, struct io_poll_table *pt,
 	poll->wait.private = (void *) wqe_private;
 
 	if (poll->events & EPOLLEXCLUSIVE) {
-		/*
-		 * Exclusive waits may only wake a limited amount of entries
-		 * rather than all of them, this may interfere with lazy
-		 * wake if someone does wait(events > 1). Ensure we don't do
-		 * lazy wake for those, as we need to process each one as they
-		 * come in.
-		 */
-		req->flags |= REQ_F_POLL_NO_LAZY;
 		add_wait_queue_exclusive(head, &poll->wait);
 	} else {
 		add_wait_queue(head, &poll->wait);
@@ -618,6 +610,17 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 	if (issue_flags & IO_URING_F_UNLOCKED)
 		req->flags &= ~REQ_F_HASH_LOCKED;
 
+
+	/*
+	 * Exclusive waits may only wake a limited amount of entries
+	 * rather than all of them, this may interfere with lazy
+	 * wake if someone does wait(events > 1). Ensure we don't do
+	 * lazy wake for those, as we need to process each one as they
+	 * come in.
+	 */
+	if (poll->events & EPOLLEXCLUSIVE)
+		req->flags |= REQ_F_POLL_NO_LAZY;
+
 	mask = vfs_poll(req->file, &ipt->pt) & poll->events;
 
 	if (unlikely(ipt->error || !ipt->nr_entries)) {
-- 
2.43.0
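
[ Editorial note for reviewers, not part of the patch: the sketch below is a
minimal user-space illustration of the failure class the commit message
describes. All names in it (FLAG_A, FLAG_B, set_flag) are made up for the
example and are not kernel APIs. A plain |= on a shared flags word is a
non-atomic read-modify-write, so once poll handlers or task_work can touch
req->flags concurrently, one side's update can be lost; moving the
REQ_F_POLL_NO_LAZY assignment into __io_arm_poll_handler(), before vfs_poll()
publishes any wait queue entry, keeps the write in the window where only the
submitting task owns the request. ]

/*
 * Illustrative sketch: two threads doing unsynchronized read-modify-write
 * on the same flags word can lose each other's bits. Build with
 * "cc -pthread"; the expected final value is 0x3, but a lost update can
 * leave one bit clear.
 */
#include <pthread.h>
#include <stdio.h>

#define FLAG_A (1u << 0)	/* stands in for a flag set at arm time */
#define FLAG_B (1u << 1)	/* stands in for a flag set by a concurrent context */

static unsigned int flags;	/* shared, intentionally unsynchronized */

static void *set_flag(void *arg)
{
	unsigned int bit = *(unsigned int *)arg;
	int i;

	for (i = 0; i < 1000000; i++) {
		/* non-atomic read-modify-write: may overwrite the other bit */
		flags |= bit;
		flags &= ~bit;
	}
	flags |= bit;		/* leave our bit set at the end */
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	unsigned int bit_a = FLAG_A, bit_b = FLAG_B;

	pthread_create(&a, NULL, set_flag, &bit_a);
	pthread_create(&b, NULL, set_flag, &bit_b);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* both bits should be set; a lost update may clear one of them */
	printf("flags = 0x%x (expected 0x3)\n", flags);
	return 0;
}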