Re: [RFC 2/2] io_uring: acquire ctx->uring_lock before calling io_issue_sqe()

On 1/16/20 2:04 PM, Bijan Mottahedeh wrote:
> On 1/16/2020 12:02 PM, Jens Axboe wrote:
>> On 1/16/20 12:08 PM, Bijan Mottahedeh wrote:
>>> On 1/16/2020 8:22 AM, Jens Axboe wrote:
>>>> On 1/15/20 9:42 PM, Jens Axboe wrote:
>>>>> On 1/15/20 9:34 PM, Jens Axboe wrote:
>>>>>> On 1/15/20 7:37 PM, Bijan Mottahedeh wrote:
>>>>>>> io_issue_sqe() calls io_iopoll_req_issued(), which manipulates poll_list,
>>>>>>> so acquire ctx->uring_lock beforehand, as is done at the other call sites
>>>>>>> of io_issue_sqe().
>>>>>> Is the below not enough?
>>>>> This should be better, we have two that set ->in_async, and only one
>>>>> doesn't hold the mutex.
>>>>>
>>>>> If this works for you, can you resend patch 2 with that? Also add a:
>>>>>
>>>>> Fixes: 8a4955ff1cca ("io_uring: sqthread should grab ctx->uring_lock for submissions")
>>>>>
>>>>> to it as well. Thanks!
>>>> I tested and queued this up:
>>>>
>>>> https://git.kernel.dk/cgit/linux-block/commit/?h=io_uring-5.5&id=11ba820bf163e224bf5dd44e545a66a44a5b1d7a
>>>>
>>>> Please let me know if this works, it sits on top of the ->result patch you
>>>> sent in.
>>>>
>>> That works, thanks.
>>>
>>> However, I'm still seeing a use-after-free error in the request
>>> completion path in nvme_unmap_data().  It only happens when testing with
>>> large block sizes in fio, typically > 128k; e.g. bs=256k will always hit it.
>>>
>>> This is the error:
>>>
>>> DMA-API: nvme 0000:00:04.0: device driver tries to free DMA memory it
>>> has not allocated [device address=0x6b6b6b6b6b6b6b6b] [size=1802201963
>>> bytes]
>>>
>>> and this warning occasionally:
>>>
>>> WARN_ON_ONCE(blk_mq_rq_state(rq) != MQ_RQ_IDLE);
>>>
>>> It seems like a request might be issued multiple times, but I can't see
>>> anything in the io_uring code that would account for it.
>> Both of them indicate reuse, and I agree I don't think it's io_uring. It
>> really feels like an issue with nvme when a poll queue is shared, but I
>> haven't been able to pin point what it is yet.
>>
>> The 128K is interesting; that would seem to indicate that it's related to
>> splitting of the IO (which would create > 1 IO per submitted IO).
>>
> Where does the split take place?  I had suspected that it might be
> related to the submit_bio() loop in __blkdev_direct_IO(), but I don't
> think I saw multiple submit_bio() calls; maybe I missed something.

See the path from blk_mq_make_request() -> __blk_queue_split() ->
blk_bio_segment_split(). The bio is built and submitted, then split if
it violates any size constraints. The splits are submitted through
generic_make_request(), so that might be why you didn't see multiple
submit_bio() calls.
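
Roughly, the interesting bit looks like this (a simplified sketch from
memory, not the exact 5.5 source):

/* block/blk-merge.c, simplified */
void __blk_queue_split(struct request_queue *q, struct bio **bio,
		       unsigned int *nr_segs)
{
	struct bio *split;

	/* carve off the largest front piece that fits the queue limits */
	split = blk_bio_segment_split(q, *bio, &q->bio_split, nr_segs);
	if (split) {
		/* no chance to merge the split bio */
		split->bi_opf |= REQ_NOMERGE;
		bio_chain(split, *bio);

		/*
		 * The remainder is re-submitted here, inside the block
		 * layer, so the caller only ever sees one submit_bio()
		 * even though more than one request ends up reaching
		 * the driver.
		 */
		generic_make_request(*bio);
		*bio = split;
	}
}

So with something like bs=256k and a smaller per-request limit, the one
submit_bio() from __blkdev_direct_IO() presumably ends up as two or more
requests by the time it reaches nvme.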

-- 
Jens Axboe



