On 14/01/2021 21:40, Pavel Begunkov wrote:
> On 14/01/2021 15:50, Marcelo Diop-Gonzalez wrote:
>> Right now io_flush_timeouts() checks if the current number of events
>> is equal to ->timeout.target_seq, but this will miss some timeouts if
>> there have been more than 1 event added since the last time they were
>> flushed (possible in io_submit_flush_completions(), for example). Fix
>> it by recording the last sequence at which timeouts were flushed so
>> that the number of events seen can be compared to the number of events
>> needed without overflow.
>
> Looks good, but there is a little change I'll ask you to make (see
> below). In the meanwhile I'll test it, so the patch is on the fast track.

Tested, works well for me. Apart from the small change I described before,

Reviewed-by: Pavel Begunkov <asml.silence@xxxxxxxxx>

>
>>
>> Signed-off-by: Marcelo Diop-Gonzalez <marcelo827@xxxxxxxxx>
>> ---
>>  fs/io_uring.c | 29 +++++++++++++++++++++++++----
>>  1 file changed, 25 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 372be9caf340..71d8fa0733ad 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -354,6 +354,7 @@ struct io_ring_ctx {
>>  		unsigned		cq_entries;
>>  		unsigned		cq_mask;
>>  		atomic_t		cq_timeouts;
>> +		unsigned		cq_last_tm_flush;
>>  		unsigned long		cq_check_overflow;
>>  		struct wait_queue_head	cq_wait;
>>  		struct fasync_struct	*cq_fasync;
>> @@ -1639,19 +1640,36 @@ static void __io_queue_deferred(struct io_ring_ctx *ctx)
>>
>>  static void io_flush_timeouts(struct io_ring_ctx *ctx)
>>  {
>> -	while (!list_empty(&ctx->timeout_list)) {
>> +	u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
>
> This assignment should go after list_empty() -- because of the atomic part
> the compiler can't reshuffle them itself.
>
>> +
>> +	if (list_empty(&ctx->timeout_list))
>> +		return;
> [...]
>>  static void io_commit_cqring(struct io_ring_ctx *ctx)
>> @@ -5837,6 +5855,9 @@ static int io_timeout(struct io_kiocb *req)
>>  	tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
>>  	req->timeout.target_seq = tail + off;
>>
>> +	/* Update the last seq here in case io_flush_timeouts() hasn't */
>> +	ctx->cq_last_tm_flush = tail;
>
> Have to note that it's ok to do because we don't mix submissions and
> completions, so io_timeout should never fall under the same completion_lock
> section as the cq commit,
>
> but otherwise some future locked version of io_timeout would be cutting
> off a part of the current flush window (i.e. this [last, cur] thing).
>
--
Pavel Begunkov
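
For reference, a stand-alone sketch of the wrap-safe check the commit message
describes: both the number of events seen and the number of events needed are
measured as u32 distances from the last flush point, so the comparison stays
correct across wraparound. The helper below is illustrative only (hypothetical
names, not the kernel code):

#include <assert.h>
#include <stdint.h>

/*
 * Illustrative only: measure "events seen" and "events needed" as
 * distances from the last flush point so u32 wraparound cancels out.
 */
static int timeout_due(uint32_t last_flush, uint32_t seq, uint32_t target_seq)
{
	uint32_t events_got = seq - last_flush;           /* events since last flush */
	uint32_t events_needed = target_seq - last_flush; /* events this timeout waits for */

	return events_got >= events_needed;
}

int main(void)
{
	/* two completions posted at once: a target of 11 must still fire at seq 12 */
	assert(timeout_due(10, 12, 11));

	/* wraparound: last flush just below UINT32_MAX, target just past zero */
	assert(timeout_due(UINT32_MAX - 1, 3, 1));
	assert(!timeout_due(UINT32_MAX - 1, 0, 1));

	return 0;
}

The "[last, cur]" window mentioned above corresponds to the events_got
distance in this sketch.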