Re: [PATCH 1/6] io_uring: enhance flush completion logic

On 9/3/21 1:08 PM, Hao Xu wrote:
> On 9/3/21 7:42 PM, Pavel Begunkov wrote:
>> On 9/3/21 12:00 PM, Hao Xu wrote:
>>> Though the refcount of a req is currently always one when we flush inline
>>
>> It can be refcounted and != 1. E.g. poll requests, or consider
> It seems poll requests don't leverage the comp cache, am I missing anything?

Hmm, looks so. Not great that it doesn't, but that's probably because
it tries to submit the next reqs right in io_poll_task_func().

I'll be pushing some changes around tw; with those it should be easy
to hook poll completion batching in with no drawbacks. Would be great
if you're willing to take a shot at it.
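
For reference, a very rough sketch of what that hook might look like.
This is hypothetical, not actual kernel code: the function name is made
up, and it assumes tw runs with ->uring_lock held (which is exactly
what those tw changes would need to guarantee before submit_state can
be touched here):

	/*
	 * Hypothetical sketch: complete a poll req through the
	 * submit-state completion cache instead of posting the CQE
	 * inline, mirroring what __io_queue_sqe() does for
	 * REQ_F_COMPLETE_INLINE reqs.
	 */
	static void io_poll_task_func_batched(struct io_kiocb *req)
	{
		struct io_ring_ctx *ctx = req->ctx;
		struct io_submit_state *state = &ctx->submit_state;

		/* queue for a later io_submit_flush_completions() */
		state->compl_reqs[state->compl_nr++] = req;
		if (state->compl_nr == ARRAY_SIZE(state->compl_reqs))
			io_submit_flush_completions(ctx);
	}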

>> that tw also flushes, and you may have a read that goes to apoll
>> and then gets resubmitted from tw in io_async_task_func(). And other
> When it goes to apoll (say there is no double wait entry), the ref is 1;
> then the read completes inline and the only ref is dropped by the flush.

Yep, but some might have a double entry. It will also have elevated
refs if there was a linked timeout. Another case is io-wq, which
takes a reference; if io-wq doesn't put it for a while (e.g. it got
rescheduled) but the request goes to tw (resubmit, poll, whatever),
you have the same picture.
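
A hypothetical timeline for the io-wq case (illustrating the effect,
not the exact code paths):

	io-wq worker                      task work
	------------                      ---------
	refs == 2 (completion + io-wq)
	...worker gets rescheduled...
	                                  req goes through tw, completes
	                                  inline, lands in compl_reqs[]
	                                  flush: req_ref_put_and_test()
	                                         -> false (refs now 1)
	io-wq finally puts its ref
	  -> last ref, req is freed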

>> cases.
>>
>>> completions, there is still a chance there will be exceptions in the future.
>>> Enhance the flush logic to make sure we maintain compl_nr correctly.
>>
>> See below
>>
>>>
>>> Signed-off-by: Hao Xu <haoxu@xxxxxxxxxxxxxxxxx>
>>> ---
>>>
>>> We need to either remove the if check, to state clearly that the req's
>>> refcount is 1, or add this patch's logic.
>>>
>>>   fs/io_uring.c | 6 ++++--
>>>   1 file changed, 4 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index 2bde732a1183..c48d43207f57 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -2291,7 +2291,7 @@ static void io_submit_flush_completions(struct io_ring_ctx *ctx)
>>>       __must_hold(&ctx->uring_lock)
>>>   {
>>>       struct io_submit_state *state = &ctx->submit_state;
>>> -    int i, nr = state->compl_nr;
>>> +    int i, nr = state->compl_nr, remain = 0;
>>>       struct req_batch rb;
>>>         spin_lock(&ctx->completion_lock);
>>> @@ -2311,10 +2311,12 @@ static void io_submit_flush_completions(struct io_ring_ctx *ctx)
>>>             if (req_ref_put_and_test(req))
>>>               io_req_free_batch(&rb, req, &ctx->submit_state);
>>> +        else
>>> +            state->compl_reqs[remain++] = state->compl_reqs[i];
>>
>> Our ref is dropped, and something else holds another reference. That
>> "something else" is responsible for freeing the request once it puts the
>> last reference. This chunk would make the following
>> io_submit_flush_completions() underflow the refcount and double free.
> True, I see. Thanks Pavel.
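
To make the failure mode concrete, here is a minimal user-space model
of the refcounting (plain C11 atomics standing in for the kernel's
refcount helpers; names and values are illustrative only):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* user-space stand-in for an io_uring request's refcount */
	struct req {
		atomic_int refs;
	};

	static bool req_ref_put_and_test(struct req *req)
	{
		/* true only when the last reference was just dropped */
		return atomic_fetch_sub(&req->refs, 1) == 1;
	}

	int main(void)
	{
		struct req r = { .refs = 2 }; /* completion + e.g. io-wq ref */

		/* flush: our ref is not the last one, so forget the req */
		if (!req_ref_put_and_test(&r))
			printf("flush: not the last ref, another owner frees it\n");

		/* the other owner drops the last ref and frees the request */
		if (req_ref_put_and_test(&r))
			printf("io-wq: last ref, req freed here\n");

		/*
		 * Had the flush kept the req in compl_reqs[], the next
		 * flush would call req_ref_put_and_test() again: refs
		 * underflows to -1 and the already-freed req is freed a
		 * second time.
		 */
		return 0;
	}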
>>
>>>       }
>>>         io_req_free_batch_finish(ctx, &rb);
>>> -    state->compl_nr = 0;
>>> +    state->compl_nr = remain;
>>>   }
>>>     /*
>>>
>>
> 

-- 
Pavel Begunkov


