On 9/8/23 3:30 AM, Ming Lei wrote:
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index ad636954abae..95a3d31a1ef1 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> 		}
> 	}
> 
> +	/* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> +	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> +		issue_flags |= IO_URING_F_NONBLOCK;
> +

I think this comment deserves to be more descriptive. Normally we
absolutely cannot block for polled IO; it's only OK here because io-wq
is the issuer and not necessarily the poller of it. Polling these
requests generally still falls upon the original issuer.

I think this should be a separate commit, coming before the main fix
which is below.

> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> 		finish_wait(&tctx->wait, &wait);
> 	} while (1);
> 
> +	/*
> +	 * Reap events from each ctx, otherwise these requests may take
> +	 * resources and prevent other contexts from being moved on.
> +	 */
> +	xa_for_each(&tctx->xa, index, node)
> +		io_iopoll_try_reap_events(node->ctx);

The main issue here is that if someone isn't polling for them, then we
get to wait for a timeout before they complete. This can delay exit,
for example, as we're now just waiting 30 seconds (or whatever the
timeout is on the underlying device) for them to get timed out before
exit can finish.

Do we just want to move this a bit higher up where we iterate ctx's
anyway? Not that important I suspect.

-- 
Jens Axboe
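
As a rough sketch of the more descriptive comment being asked for,
restating the reasoning above (the wording is illustrative only; just
the two code lines come from the patch itself):

	/*
	 * Normally blocking on polled IO is not allowed at all, since
	 * the issuer is also expected to reap completions by polling.
	 * When io-wq is the issuer, polling still falls upon the
	 * original submitter, so blocking here is "only" fragile rather
	 * than outright forbidden. Avoid it anyway and force
	 * nonblocking issue.
	 */
	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
		issue_flags |= IO_URING_F_NONBLOCK;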