Re: [PATCH V2] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

On 9/1/23 9:12 AM, Ming Lei wrote:
> On Fri, Sep 01, 2023 at 08:47:28AM -0600, Jens Axboe wrote:
>> On 9/1/23 7:49 AM, Ming Lei wrote:
>>> io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
>>> in io_wq aren't canceled by io_uring_cancel_generic(), which is also
>>> called from do_exit(). Meanwhile, the io_wq IO code path may share
>>> resources with the normal iopoll code path.
>>>
>>> So if any HIPRI request is submitted via io_wq, that request may never
>>> get the resources it needs to make progress, given that iopoll isn't
>>> possible in io_wq_put_and_exit().
>>>
>>> The issue can be triggered when terminating 't/io_uring -n4 /dev/nullb0'
>>> with default null_blk parameters.
>>>
>>> Fix it by adding a helper, io_uring_cancel_wq(), that always cancels all
>>> requests in io_wq. This is reasonable because io_wq destruction follows
>>> the cancellation of its requests immediately.
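
A minimal sketch of how such an io_uring_cancel_wq() helper could look,
assuming it reuses internals that already exist in io_uring (tctx->xa,
io_wq_cancel_cb(), io_cancel_task_cb, io_iopoll_try_reap_events()); the loop
structure and naming here are illustrative, not the actual patch hunk:

static __cold void io_uring_cancel_wq(struct io_uring_task *tctx)
{
	int ret;

	if (!tctx->io_wq)
		return;

	/*
	 * Cancel all of this task's requests still queued in io_wq, reaping
	 * pending iopoll completions so cancellation can make progress, and
	 * retry until no more work is found.
	 */
	do {
		struct io_tctx_node *node;
		unsigned long index;

		ret = 0;
		xa_for_each(&tctx->xa, index, node) {
			struct io_ring_ctx *ctx = node->ctx;
			struct io_task_cancel cancel = { .task = current, .all = true, };
			enum io_wq_cancel cret;

			io_iopoll_try_reap_events(ctx);
			cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
					       &cancel, true);
			ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
			cond_resched();
		}
	} while (ret);
}

In this sketch the helper would be invoked from the task-exit cancellation
path right before io_wq_put_and_exit(), so every io_wq request owned by the
exiting task is canceled (and pending iopoll completions are reaped) before
the queue is torn down.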
>>
>> This does look much cleaner, but the unconditional cancel_all == true
>> makes me a bit nervous in case the ring is being shared.
> 
> Here we just cancel requests in io_wq, which is per-task actually.

Ah yeah, good point, it's just the tctx-related bits.

> Yeah, ctx->iopoll_ctx could be shared, but if it is used that way,
> reaping events from the remote context can't be avoided.
> 
>>
>> Do we really need to cancel these bits? Can't we get by with something
>> trivial like just stopping retrying if the original task is exiting?
>>
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index c6d9e4677073..95316c0c3830 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -1939,7 +1939,7 @@ void io_wq_submit_work(struct io_wq_work *work)
>>  		 * If REQ_F_NOWAIT is set, then don't wait or retry with
>>  		 * poll. -EAGAIN is final for that case.
>>  		 */
>> -		if (req->flags & REQ_F_NOWAIT)
>> +		if (req->flags & REQ_F_NOWAIT || req->task->flags & PF_EXITING)
>>  			break;
> 
> This way isn't enough on its own: any request submitted to io_wq before
> do_exit() still needs to be reaped explicitly by
> io_iopoll_try_reap_events().
> 
> Not to mention that IO_URING_F_NONBLOCK isn't set, so io_issue_sqe() may
> hang forever.

Yep it's not enough, and since we do only cancel per-task, I think this
patch looks fine as-is and is probably the right way to go. Thanks Ming.

-- 
Jens Axboe



