Re: [PATCH 5.13 v2] io_uring: maintain drain requests' logic

On 2021/4/8 8:22 PM, Pavel Begunkov wrote:
> On 08/04/2021 12:43, Hao Xu wrote:
>> On 2021/4/8 6:16 PM, Hao Xu wrote:
>>> On 2021/4/7 11:49 PM, Jens Axboe wrote:
>>>> On 4/7/21 5:23 AM, Hao Xu wrote:
>>>>> more tests coming, send this out first for comments.
>>>>>
>>>>> Hao Xu (3):
>>>>>    io_uring: add IOSQE_MULTI_CQES/REQ_F_MULTI_CQES for multishot requests
>>>>>    io_uring: maintain drain logic for multishot requests
>>>>>    io_uring: use REQ_F_MULTI_CQES for multipoll IORING_OP_ADD
>>>>>
>>>>>   fs/io_uring.c                 | 34 +++++++++++++++++++++++++++++-----
>>>>>   include/uapi/linux/io_uring.h |  8 +++-----
>>>>>   2 files changed, 32 insertions(+), 10 deletions(-)

>>>> Let's do the simple cq_extra first. I don't see a huge need to add an
>>>> IOSQE flag for this, probably best to just keep this on a per opcode
>>>> basis for now, which also then limits the code path to just touching
>>>> poll for now, as nothing else supports multishot CQEs at this point.

>>> gotcha.
>>> a small issue here:
>>>   sqe-->sqe(link)-->sqe(link)-->sqe(link, multishot)-->sqe(drain)
>>>
>>> in the above case, assume the first 3 single-shot reqs have completed.
>>> then I think the drain request won't be issued now unless the multishot
>>> request in the link chain has been issued. The trick is: a multishot
>>> req in a link chain consumes cached_sq_head in io_get_sqe(), which
>>> means it is counted in seq, but we deduct the sqe when it is issued if
>>> we want to do the job per opcode rather than in the main code path.
>>> before the multishot req is issued:
>>>       all_sqes(4) - multishot_sqes(0) == all_cqes(3) - multishot_cqes(0)
>>> after the multishot req is issued:
>>>       all_sqes(4) - multishot_sqes(1) == all_cqes(3) - multishot_cqes(0)

>> Sorry, my statement is wrong. It's not "won't be issued now unless the
>> multishot request in the link chain has been issued". Actually I now
>> think the drain req won't be issued unless the multishot request in the
>> link chain has completed, because we may first check req_need_defer()
>> and then issue req->link, so:
>>    sqe0-->sqe1(link)-->sqe2(link)-->sqe3(link, multishot)-->sqe4(drain)
>>
>>   sqe2 is completed:
>>     call req_need_defer:
>>     all_sqes(4) - multishot_sqes(0) == all_cqes(3) - multishot_cqes(0)
>>   sqe3 is issued:
>>     all_sqes(4) - multishot_sqes(1) == all_cqes(3) - multishot_cqes(0)
>>   sqe3 is completed:
>>     call req_need_defer:
>>     all_sqes(4) - multishot_sqes(1) == all_cqes(3) - multishot_cqes(0)
>>
>> sqe4 shouldn't wait for sqe3.

> Do you mean it wouldn't if the patch is applied? Because any drain
> request must wait for all requests submitted before to complete. And
> so before issuing sqe4 it must wait for sqe3 __request__ to die, and
> so for all sqe3's CQEs.

> previously

Hi Pavel, the issue is what happens after the patch is applied. The
patch is meant to ignore all multishot sqes and cqes. So by design,
sqe4 should wait for sqe0, sqe1 and sqe2's completion, not sqe3's. But
since we implement it in the per opcode code and don't touch the main
code path, we deduct a multishot sqe only when issuing it (e.g. in
io_poll_add()). So the equation only becomes true when we issue sqe3:
    all_sqes(4) - multishot_sqes(1) == all_cqes(3) - multishot_cqes(0)
But at that point we have already missed
io_commit_cqring()-->__io_queue_deferred(); the next time
__io_queue_deferred() is called is when sqe3 completes, so sqe4 ends
up waiting for sqe3, which is not by design.

Regards,
Hao
