On Mon, Sep 25, 2023 at 05:17:10PM -0400, Stefan Hajnoczi wrote:
> On Fri, Sep 15, 2023 at 03:04:05PM +0800, Jason Wang wrote:
> > On Fri, Sep 8, 2023 at 11:25 PM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
> > > > On 9/8/23 8:34 AM, Ming Lei wrote:
> > > > > On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
> > > > >> On 9/8/23 3:30 AM, Ming Lei wrote:
> > > > >>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > > > >>> index ad636954abae..95a3d31a1ef1 100644
> > > > >>> --- a/io_uring/io_uring.c
> > > > >>> +++ b/io_uring/io_uring.c
> > > > >>> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> > > > >>>  		}
> > > > >>>  	}
> > > > >>>
> > > > >>> +	/* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> > > > >>> +	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> > > > >>> +		issue_flags |= IO_URING_F_NONBLOCK;
> > > > >>> +
> > > > >>
> > > > >> I think this comment deserves to be more descriptive. Normally we
> > > > >> absolutely cannot block for polled IO, it's only OK here because io-wq
> > > > >
> > > > > Yeah, we don't do that until commit 2bc057692599 ("block: don't make REQ_POLLED
> > > > > imply REQ_NOWAIT") which actually push the responsibility/risk up to
> > > > > io_uring.
> > > > >
> > > > >> is the issuer and not necessarily the poller of it. That generally falls
> > > > >> upon the original issuer to poll these requests.
> > > > >>
> > > > >> I think this should be a separate commit, coming before the main fix
> > > > >> which is below.
> > > > >
> > > > > Looks fine, actually IO_URING_F_NONBLOCK change isn't a must, and the
> > > > > approach in V2 doesn't need this change.
> > > > >
> > > > >>
> > > > >>> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> > > > >>>  		finish_wait(&tctx->wait, &wait);
> > > > >>>  	} while (1);
> > > > >>>
> > > > >>> +	/*
> > > > >>> +	 * Reap events from each ctx, otherwise these requests may take
> > > > >>> +	 * resources and prevent other contexts from being moved on.
> > > > >>> +	 */
> > > > >>> +	xa_for_each(&tctx->xa, index, node)
> > > > >>> +		io_iopoll_try_reap_events(node->ctx);
> > > > >>
> > > > >> The main issue here is that if someone isn't polling for them, then we
> > > > >
> > > > > That is actually what this patch is addressing, :-)
> > > >
> > > > Right, that part is obvious :)
> > > >
> > > > >> get to wait for a timeout before they complete. This can delay exit, for
> > > > >> example, as we're now just waiting 30 seconds (or whatever the timeout
> > > > >> is on the underlying device) for them to get timed out before exit can
> > > > >> finish.
> > > > >
> > > > > For the issue on null_blk, device timeout handler provides
> > > > > forward-progress, such as requests are released, so new IO can be
> > > > > handled.
> > > > >
> > > > > However, not all devices support timeout, such as virtio device.
> > > >
> > > > That's a bug in the driver, you cannot sanely support polled IO and not
> > > > be able to deal with timeouts. Someone HAS to reap the requests and
> > > > there are only two things that can do that - the application doing the
> > > > polled IO, or if that doesn't happen, a timeout.
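
To make that concrete, the ->timeout responsibility for a poll-capable
driver amounts to roughly the following sketch (a hypothetical demo_*
driver, untested, names made up for illustration; it is roughly what
null_blk's timeout handler already does):

#include <linux/blk-mq.h>

/* per-request driver data, sized via blk_mq_tag_set->cmd_size */
struct demo_cmd {
	blk_status_t status;
};

/*
 * Called from blk_mq_complete_request(); a polled request is completed
 * locally (no IPI/softirq), so for the timeout case below this runs
 * synchronously from the timeout handler.
 */
static void demo_complete_rq(struct request *rq)
{
	struct demo_cmd *cmd = blk_mq_rq_to_pdu(rq);

	blk_mq_end_request(rq, cmd->status);
}

static enum blk_eh_timer_return demo_timeout_rq(struct request *rq)
{
	struct demo_cmd *cmd = blk_mq_rq_to_pdu(rq);

	/*
	 * A device with a real abort command would cancel the inflight
	 * command here first. Without one (virtio-blk today), failing
	 * the request is the only way to guarantee forward progress
	 * when nobody polls for the completion.
	 */
	cmd->status = BLK_STS_TIMEOUT;
	blk_mq_complete_request(rq);
	return BLK_EH_DONE;
}

static const struct blk_mq_ops demo_mq_ops = {
	/* .queue_rq and .poll omitted, they are not relevant here */
	.complete	= demo_complete_rq,
	.timeout	= demo_timeout_rq,
};

Without something like the above, an un-reaped polled request keeps its
tag and resources forever.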

> > >
> > > OK, then device driver timeout handler has new responsibility of covering
> > > userspace accident, :-)
>
> Sorry, I don't have enough context so this is probably a silly question:
>
> When an application doesn't reap a polled request, why doesn't the block
> layer take care of this in a generic way? I don't see anything
> driver-specific about this.

The block layer doesn't have the knowledge to handle that; io_uring knows
the application is exiting and can help to reap the events.

But the big question is whether an IO can really hang like this on
virtio-blk. If it can, the reaping done in io_uring may never finish and
cause other issues, so even when it is done in io_uring, the reaping can
only be regarded as an improvement. The real bug fix still belongs in the
device driver: usually only the driver's timeout handler can provide a
forward-progress guarantee.

> Driver-specific behavior would be sending an abort/cancel upon timeout.
> virtio-blk cannot do that because there is no such command in the device
> specification at the moment. So simply waiting for the polled request to
> complete is the only thing that can be done (aside from resetting the
> device), and it's generic behavior.

Then it looks unsafe to support IO polling for virtio-blk. Maybe it
should be disabled by default for now, until the virtio-blk spec starts
to support IO abort?
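
If we go that way, the quickest stopgap might be to simply not create the
poll vqs. Something like this sketch against init_vq() in
drivers/block/virtio_blk.c, assuming the current poll_queues module
parameter (untested, just to show the idea):

	/*
	 * Sketch: refuse dedicated poll vqs until the virtio-blk spec
	 * gains an IO abort command; a timed-out polled request cannot
	 * be cancelled today, only the whole device can be reset.
	 */
	if (poll_queues) {
		dev_warn(&vdev->dev,
			 "ignoring poll_queues=%u: polled IO can't be aborted on timeout\n",
			 poll_queues);
		poll_queues = 0;
	}

Thanks,
Ming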