On 09/12/2020 20:17, Pavel Begunkov wrote:
> On 08/12/2020 21:10, Jens Axboe wrote:
>> On 12/8/20 12:24 PM, Pavel Begunkov wrote:
>>> On 08/12/2020 19:17, Jens Axboe wrote:
>>>> On 12/8/20 12:12 PM, Pavel Begunkov wrote:
>>>>> On 07/12/2020 16:28, Jens Axboe wrote:
>>>>>> On Sun, Dec 6, 2020 at 3:26 PM Pavel Begunkov <asml.silence@xxxxxxxxx> wrote:
>>>>>>> From: Xiaoguang Wang <xiaoguang.wang@xxxxxxxxxxxxxxxxx>
>>>>>>>
>>>>>>> The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
>>>>>>> we'll complete the req by calling io_req_complete(), which holds
>>>>>>> completion_lock to call io_commit_cqring(). But for polled io,
>>>>>>> io_iopoll_complete() won't hold completion_lock to call io_commit_cqring(),
>>>>>>> so there may be concurrent access to ctx->defer_list and a double free
>>>>>>> may happen.
>>>>>>>
>>>>>>> To fix this bug, we always let io_iopoll_complete() complete polled io.
>>>>>>
>>>>>> This patch is causing hangs with iopoll testing, if you end up getting
>>>>>> -EAGAIN on request submission. I've dropped it.
>>>>>
>>>>> I fail to understand without debugging how it happens, especially since
>>>>> it shouldn't even get out of the while loop in io_wq_submit_work(). Is
>>>>> there something obvious I've missed?
>>>>
>>>> I didn't have time to look into it, and haven't yet, just reporting that
>>>> it very reliably fails (and under what conditions).
>>>
>>> Yeah, I get it, asked just in case.
>>> I'll see what's going on if Xiaoguang doesn't handle it first.
>>
>> Should be trivial to reproduce on e.g. nvme by doing:
>>
>> echo mq-deadline > /sys/block/nvme0n1/queue/scheduler
>> echo 2 > /sys/block/nvme0n1/queue/nr_requests
>>
>> and then run test/iopoll on that device. I'll try and take a look
>> tomorrow unless someone beats me to it.
>
> Tried it out with iopoll-enabled null_blk. test/iopoll fails with
> "test_io_uring_submit_enters failed", but if I remove the iteration limit
> from the test, it completes... eventually.
>
> Premise: io_complete_rw_iopoll() gets -EAGAIN but returns 0 to
> io_wq_submit_work().
> The old version happily completes the IO with that 0, but the patch delays
> it to do_iopoll(), which retries, and so all of that repeats. And that, I
> believe, is the behaviour io_wq_submit_work()'s -EAGAIN check was trying
> to achieve...
>
> The question left is why no one progresses. It may even be something in
> the block layer. Need to trace further.

test_io_uring_submit_enters()'s io_uring_submit never goes into the kernel,
so IMHO it's saner not to expect to get any CQEs; that's also implied in the
comment above the function. I guess before we were getting them back because
of timers in blk-mq/etc.

So I guess it should have been more like the diff below, which still doesn't
match the comment though.

diff --git a/test/iopoll.c b/test/iopoll.c
index d70ae56..d6f2f3e 100644
--- a/test/iopoll.c
+++ b/test/iopoll.c
@@ -269,13 +269,13 @@ static int test_io_uring_submit_enters(const char *file)
 	/* submit manually to avoid adding IORING_ENTER_GETEVENTS */
 	ret = __sys_io_uring_enter(ring.ring_fd, __io_uring_flush_sq(&ring), 0,
 				   0, NULL);
-	if (ret < 0)
+	if (ret != BUFFERS)
 		goto err;
 
 	for (i = 0; i < 500; i++) {
-		ret = io_uring_submit(&ring);
-		if (ret != 0) {
-			fprintf(stderr, "still had %d sqes to submit, this is unexpected", ret);
+		ret = io_uring_wait_cqe(&ring, &cqe);
+		if (ret < 0) {
+			fprintf(stderr, "wait cqe failed %i\n", ret);
 			goto err;
 		}

-- 
Pavel Begunkov
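
For reference, a minimal standalone sketch of the reap pattern the diff above
switches to, assuming a ring set up with IORING_SETUP_IOPOLL on which the
requests have already been submitted. The helper name reap_all() is made up
for illustration; only standard liburing calls (io_uring_wait_cqe(),
io_uring_cqe_seen()) are assumed.

#include <liburing.h>
#include <stdio.h>

/*
 * Reap 'nr' completions one by one. For an IOPOLL ring, io_uring_wait_cqe()
 * enters the kernel with IORING_ENTER_GETEVENTS when no CQE is ready, which
 * is what actively polls for completions; calling io_uring_submit() on an
 * already-flushed SQ does not enter the kernel at all.
 */
static int reap_all(struct io_uring *ring, int nr)
{
	struct io_uring_cqe *cqe;
	int i, ret;

	for (i = 0; i < nr; i++) {
		ret = io_uring_wait_cqe(ring, &cqe);
		if (ret < 0) {
			fprintf(stderr, "wait cqe failed %d\n", ret);
			return ret;
		}
		if (cqe->res < 0)
			fprintf(stderr, "cqe res %d\n", cqe->res);
		/* mark the CQE as consumed so its CQ ring slot can be reused */
		io_uring_cqe_seen(ring, cqe);
	}
	return 0;
}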