On 1/16/2020 8:22 AM, Jens Axboe wrote:
> On 1/15/20 9:42 PM, Jens Axboe wrote:
>> On 1/15/20 9:34 PM, Jens Axboe wrote:
>>> On 1/15/20 7:37 PM, Bijan Mottahedeh wrote:
>>>> io_issue_sqe() calls io_iopoll_req_issued() which manipulates poll_list,
>>>> so acquire ctx->uring_lock beforehand similar to other instances of
>>>> calling io_issue_sqe().
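For context, the locking pattern described above is roughly the
following. This is an illustrative sketch only, not the actual diff;
the io_issue_sqe() argument list is elided since it differs between
kernel versions:

	/*
	 * Any context that issues a request and can end up in
	 * io_iopoll_req_issued() does so under ctx->uring_lock, since
	 * io_iopoll_req_issued() adds the request to ctx->poll_list and
	 * that list is otherwise only touched with the ring mutex held.
	 */
	mutex_lock(&ctx->uring_lock);
	ret = io_issue_sqe(req, ...);	/* may call io_iopoll_req_issued() */
	mutex_unlock(&ctx->uring_lock);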
>>> Is the below not enough?
>> This should be better, we have two that set ->in_async, and only one
>> doesn't hold the mutex.
>> If this works for you, can you resend patch 2 with that? Also add a:
>> Fixes: 8a4955ff1cca ("io_uring: sqthread should grab ctx->uring_lock for submissions")
>> to it as well. Thanks!
> I tested and queued this up:
> https://git.kernel.dk/cgit/linux-block/commit/?h=io_uring-5.5&id=11ba820bf163e224bf5dd44e545a66a44a5b1d7a
> Please let me know if this works, it sits on top of the ->result patch you
> sent in.
That works, thanks.
However, I'm still seeing a use-after-free error in the request
completion path, in nvme_unmap_data(). It only happens when testing with
large block sizes in fio, typically > 128k; e.g. bs=256k always hits it.
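For reference, the I/O pattern that hits it is roughly equivalent to the
liburing sketch below. This is illustrative only, since the actual
testing is done with fio; the device path, queue depth, and the
polled-ring setup (IORING_SETUP_IOPOLL, inferred from the poll_list
discussion above) are assumptions:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/uio.h>
#include <liburing.h>

/* bs=256k, the size that reliably triggers the error here */
#define BS	(256 * 1024)

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct iovec iov;
	void *buf;
	int fd, ret;

	/* example device path; any NVMe block device opened O_DIRECT */
	fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (posix_memalign(&buf, 4096, BS))
		return 1;
	iov.iov_base = buf;
	iov.iov_len = BS;

	/* polled ring, so completions run through the io_iopoll_* path */
	ret = io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %s\n", strerror(-ret));
		return 1;
	}

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_readv(sqe, fd, &iov, 1, 0);
	io_uring_submit(&ring);

	ret = io_uring_wait_cqe(&ring, &cqe);
	if (ret < 0) {
		fprintf(stderr, "wait_cqe: %s\n", strerror(-ret));
	} else {
		printf("read returned %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}

	io_uring_queue_exit(&ring);
	close(fd);
	free(buf);
	return 0;
}

Under fio the rough equivalent would be ioengine=io_uring with direct=1,
hipri=1 and bs=256k.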
This is the error:

DMA-API: nvme 0000:00:04.0: device driver tries to free DMA memory it
has not allocated [device address=0x6b6b6b6b6b6b6b6b] [size=1802201963
bytes]

(Both the device address and the size are filled with the 0x6b slab
poison byte, which suggests the mapping info is being read from
already-freed memory.)
and this warning occasionally:
WARN_ON_ONCE(blk_mq_rq_state(rq) != MQ_RQ_IDLE);
It seems like a request might be issued multiple times, but I can't see
anything in the io_uring code that would account for it.
--bijan