We have an invariant in io_read() of how much we're trying to read, spilled
into an iter and the io_size variable. The latter controls the decision
making about whether to do read retries. However, io_size is modified only
after the first read attempt, so if we happen to go for a third retry in a
single call to io_read(), io_size will be greater than what is left in the
iterator, which may lead to various side effects up to live-locking.

Modify io_size each time.

Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
 fs/io_uring.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f8492d62b6a1..3e648c0e6b8d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3551,13 +3551,10 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
 	} else if (ret <= 0 || ret == io_size) {
 		/* make sure -ERESTARTSYS -> -EINTR is done */
 		goto done;
-	} else {
+	} else if (!force_nonblock || (req->file->f_flags & O_NONBLOCK) ||
+		   !(req->flags & REQ_F_ISREG)) {
 		/* we did blocking attempt. no retry. */
-		if (!force_nonblock || (req->file->f_flags & O_NONBLOCK) ||
-		    !(req->flags & REQ_F_ISREG))
-			goto done;
-
-		io_size -= ret;
+		goto done;
 	}
 
 	ret2 = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
@@ -3570,6 +3567,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
 	/* now use our persistent iterator, if we aren't already */
 	iter = &rw->iter;
 retry:
+	io_size -= ret;
 	rw->bytes_done += ret;
 	/* if we can retry, do so with the callbacks armed */
 	if (!io_rw_should_retry(req)) {
-- 
2.24.0
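
The accounting bug is easy to see in isolation. Below is a minimal
user-space C sketch of the same pattern, not the kernel code: fake_iter,
do_partial_read() and the chunk size are all made up for illustration. It
shows why the remaining-bytes counter must be decremented on every retry
pass, as the patch does, rather than only after the first attempt.

#include <stdio.h>
#include <stddef.h>

/* Stand-in for the iov_iter: tracks how many bytes are still to be read. */
struct fake_iter {
	size_t remaining;
};

/* Stand-in for one short read that consumes part of the iterator. */
static size_t do_partial_read(struct fake_iter *iter, size_t chunk)
{
	size_t got = chunk < iter->remaining ? chunk : iter->remaining;

	iter->remaining -= got;
	return got;
}

int main(void)
{
	struct fake_iter iter = { .remaining = 12 };
	size_t io_size = iter.remaining;	/* mirrors the iterator */
	size_t ret;

	while ((ret = do_partial_read(&iter, 5)) > 0) {
		/*
		 * The fix: keep io_size in sync on every pass. If this
		 * decrement ran only on the first pass, io_size would
		 * exceed iter.remaining from the second retry onward,
		 * and a completion check like (ret == io_size) could
		 * never fire, mirroring the live-lock risk in io_read().
		 */
		io_size -= ret;
		printf("read %zu, io_size=%zu, iter.remaining=%zu\n",
		       ret, io_size, iter.remaining);
	}
	return 0;
}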