On Tue, Jan 12, 2021 at 08:47:11PM +0000, Pavel Begunkov wrote:
> On 08/01/2021 15:57, Marcelo Diop-Gonzalez wrote:
> > On Sat, Jan 02, 2021 at 08:26:26PM +0000, Pavel Begunkov wrote:
> >> On 02/01/2021 19:54, Pavel Begunkov wrote:
> >>> On 19/12/2020 19:15, Marcelo Diop-Gonzalez wrote:
> >>>> Right now io_flush_timeouts() checks if the current number of events
> >>>> is equal to ->timeout.target_seq, but this will miss some timeouts if
> >>>> more than one event has been added since the last time they were
> >>>> flushed (possible in io_submit_flush_completions(), for example). Fix
> >>>> it by recording the starting value of ->cached_cq_overflow -
> >>>> ->cq_timeouts instead of the target value, so that we can safely
> >>>> (without overflow problems) compare the number of events that have
> >>>> happened with the number of events needed to trigger the timeout.
> >>
> >> https://www.spinics.net/lists/kernel/msg3475160.html
> >>
> >> The idea was to replace u32 cached_cq_tail with u64 while keeping
> >> timeout offsets u32. Assuming that we won't ever hit ~2^62 inflight
> >> requests, complete all requests falling into some large enough window
> >> behind that u64 cached_cq_tail.
> >>
> >> simplifying:
> >>
> >> i64 d = target_off - ctx->u64_cq_tail
> >> if (d <= 0 && d > -2^32)
> >>         complete_it()
> >>
> >> Not fond of it, but at least it worked at the time. You can try out
> >> this approach if you want, but it would be perfect if you could find
> >> something more elegant :)
> >>
> >
> > What do you think about something like this? I think it's not totally
> > correct because it relies on holding ->completion_lock in io_timeout() so
> > that ->cq_last_tm_flush is updated, but in the IORING_SETUP_IOPOLL case,
> > io_iopoll_complete() doesn't take that lock, and ->uring_lock will not
> > be held if io_timeout() is called from io_wq_submit_work(). But maybe
> > it's still worth it, since that was already possibly a problem?
> >
> > diff --git a/fs/io_uring.c b/fs/io_uring.c
> > index cb57e0360fcb..50984709879c 100644
> > --- a/fs/io_uring.c
> > +++ b/fs/io_uring.c
> > @@ -353,6 +353,7 @@ struct io_ring_ctx {
> >  	unsigned		cq_entries;
> >  	unsigned		cq_mask;
> >  	atomic_t		cq_timeouts;
> > +	unsigned		cq_last_tm_flush;
> 
> It looks like that "last flush" is a good direction.
> I think there can be problems at extremes like completing 2^32
> requests at once, but it should be ok in practice. Anyway, better
> than it is now.
> 
> What about the first patch about overflows and cq_timeouts? I
> assume that problem is still there, isn't it?

Yeah, it's still there I think, I just couldn't think of a good way
to fix it. So I figured I would just send this one, since at least it
doesn't make that problem worse. Maybe I could send a fix for that one
later if I think of something.

> 
> See comments below, but if it passes the liburing tests, please send
> a patch.

will do!
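
For reference, here is a standalone sketch of the u64 window idea
quoted above. The names timeout_expired() and u64_cq_tail are made up
for illustration and don't exist in the tree; this only models the
quoted pseudocode, not real kernel code:

#include <assert.h>
#include <stdint.h>

/*
 * A timeout is considered expired when its 64-bit target lies at or
 * behind the 64-bit cached tail (assumed never to wrap in practice),
 * but no more than 2^32 events behind it, since timeout offsets are
 * only u32.
 */
static int timeout_expired(uint64_t target_off, uint64_t u64_cq_tail)
{
	int64_t d = (int64_t)(target_off - u64_cq_tail);

	return d <= 0 && d > -((int64_t)1 << 32);
}

int main(void)
{
	assert(timeout_expired(100, 100));	/* exactly reached */
	assert(timeout_expired(100, 101));	/* just behind the tail */
	assert(!timeout_expired(101, 100));	/* not reached yet */
	assert(!timeout_expired(0, (uint64_t)1 << 33));	/* too far behind */
	return 0;
}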
> >  	unsigned long		cq_check_overflow;
> >  	struct wait_queue_head	cq_wait;
> >  	struct fasync_struct	*cq_fasync;
> > @@ -1633,19 +1634,26 @@ static void __io_queue_deferred(struct io_ring_ctx *ctx)
> >  
> >  static void io_flush_timeouts(struct io_ring_ctx *ctx)
> >  {
> > +	u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
> > +
> 
> a nit,
> 
> if (list_empty()) return; + do {} while();
> 
> timeouts can be rare enough
> 
> >  	while (!list_empty(&ctx->timeout_list)) {
> > +		u32 events_needed, events_got;
> >  		struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
> >  						struct io_kiocb, timeout.list);
> >  
> >  		if (io_is_timeout_noseq(req))
> >  			break;
> > -		if (req->timeout.target_seq != ctx->cached_cq_tail
> > -					- atomic_read(&ctx->cq_timeouts))
> > +
> 
> extra new line
> 
> > +		events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush;
> > +		events_got = seq - ctx->cq_last_tm_flush;
> > +		if (events_got < events_needed)
> 
> probably <=

Won't that make it break too early, though? If you submit a timeout
with off = 1 when {seq == 0, last_flush == 0}, then target_seq == 1.
Then say one CQE is added, so the timeout should trigger. Then
events_needed == 1 and events_got == 1, right?

> 
> >  			break;
> 
> basically it checks that @target is in [last_flush, cur_seq],
> it could use such a comment, plus a note about underflows and the
> modular arithmetic, like with algebraic rings
> 
> >  
> >  		list_del_init(&req->timeout.list);
> >  		io_kill_timeout(req);
> >  	}
> > +
> > +	ctx->cq_last_tm_flush = seq;
> >  }
> >  
> >  static void io_commit_cqring(struct io_ring_ctx *ctx)
> > 
> 
> -- 
> Pavel Begunkov
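
For readers following the modular arithmetic, here is a minimal
userspace model of the window check discussed above. It is a sketch
only: timeout_fires() is an illustrative name, and plain u32 variables
stand in for the io_ring_ctx fields:

#include <assert.h>
#include <stdint.h>

/*
 * A timeout fires when its target_seq lies in the closed window
 * [last_flush, seq]. Computing both distances relative to last_flush
 * before comparing means u32 underflow cancels out, so the result
 * stays correct across wraparound.
 */
static int timeout_fires(uint32_t target_seq, uint32_t last_flush,
			 uint32_t seq)
{
	uint32_t events_needed = target_seq - last_flush;
	uint32_t events_got = seq - last_flush;

	return events_got >= events_needed;
}

int main(void)
{
	/* the off = 1 example from the thread: it must fire, which is
	 * why the kernel's early break uses '<' rather than '<=' */
	assert(timeout_fires(1, 0, 1));
	/* not enough events yet */
	assert(!timeout_fires(2, 0, 1));
	/* still correct when the u32 counters wrap */
	assert(timeout_fires(5, 0xfffffffe, 7));
	return 0;
}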