Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

On Fri, Sep 08, 2023 at 04:46:15PM +0100, Pavel Begunkov wrote:
> On 9/8/23 14:49, Jens Axboe wrote:
> > On 9/8/23 3:30 AM, Ming Lei wrote:
> > > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > > index ad636954abae..95a3d31a1ef1 100644
> > > --- a/io_uring/io_uring.c
> > > +++ b/io_uring/io_uring.c
> > > @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> > >   		}
> > >   	}
> > > +	/* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> > > +	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> > > +		issue_flags |= IO_URING_F_NONBLOCK;
> > > +
> > 
> > I think this comment deserves to be more descriptive. Normally we
> > absolutely cannot block for polled IO, it's only OK here because io-wq
> > is the issuer and not necessarily the poller of it. That generally falls
> > upon the original issuer to poll these requests.
> > 
> > I think this should be a separate commit, coming before the main fix
> > which is below.
> > 
> > > @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> > >   		finish_wait(&tctx->wait, &wait);
> > >   	} while (1);
> > > +	/*
> > > +	 * Reap events from each ctx, otherwise these requests may take
> > > +	 * resources and prevent other contexts from being moved on.
> > > +	 */
> > > +	xa_for_each(&tctx->xa, index, node)
> > > +		io_iopoll_try_reap_events(node->ctx);
> > 
> > The main issue here is that if someone isn't polling for them, then we
> > get to wait for a timeout before they complete. This can delay exit, for
> > example, as we're now just waiting 30 seconds (or whatever the timeout
> > is on the underlying device) for them to get timed out before exit can
> > finish.
> 
> Ok, our case is that userspace crashes and doesn't poll for its IO.
> How would that block io-wq termination? We send a signal and workers
> should exit, either by queueing up the request for iopoll (and then

It depends on how userspace handles the signal. Take t/io_uring as an
example: s->finish is set to true in the INT signal handler, and two
cases may happen:

1) s->finish is observed immediately; this pthread then exits, leaving
polled requests in ctx->iopoll_list

2) s->finish isn't observed immediately, and the thread keeps submitting
& polling; if any IO can't be submitted because resources have run out,
there can be a busy spin, since submitter_uring_fn() waits for inflight IO.

So suppose there are two pthreads (A and B), each of which sets up its
own io_uring context and submits & polls IO on the same block device.
If 1) happens in A, all device tags can be held for nothing.  If 2)
happens in B, the busy spin prevents pthread B from calling exit().

That is how the hang occurs: the exit work can't be scheduled at all,
because pthread B never exits.

> we queue it into the io_uring iopoll list and the worker immediately
> returns back and presumably exits), or it fails because of the signal
> and returns back.
> 
> That should kill all io-wq and make exit go forward. Then the io_uring
> file will be destroyed and the ring exit work will be polling via
> 
> io_ring_exit_work();
> -- io_uring_try_cancel_requests();
>   -- io_iopoll_try_reap_events();
> 
> What I'm missing? Does the blocking change make io-wq iopolling
> completions inside the block? Was it by any chance with the recent
> "do_exit() waiting for ring destruction" patches?

In short, it is a resource dependency issue for polled IO: the
un-reaped polled requests of one context hold device tags that another
context's submission path is busy-spinning on.


Thanks,
Ming



