Re: [PATCH] io_uring/uring_cmd: push IRQ based completions through task_work

On Tue, Mar 21, 2023 at 10:02 AM Kanchan Joshi <joshiiitr@xxxxxxxxx> wrote:
>
> On Tue, Mar 21, 2023 at 2:12 AM Jens Axboe <axboe@xxxxxxxxx> wrote:
> >
> > > On 3/20/23 2:03 PM, Jens Axboe wrote:
> > > > On 3/20/23 9:06 AM, Kanchan Joshi wrote:
> > > >> On Sun, Mar 19, 2023 at 8:51 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
> > >>>
> > >>> This is similar to what we do on the non-passthrough read/write side,
> > >>> and helps take advantage of the completion batching we can do when we
> > >>> post CQEs via task_work. On top of that, this avoids a uring_lock
> > >>> grab/drop for every completion.
> > >>>
> > >>> In my normal peak IRQ based testing, this increases performance
> > >>> from ~75M to ~77M IOPS, an increase of 2-3%.
> > >>>
> > >>> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
> > >>>
> > >>> ---
> > >>>
> > >>> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> > >>> index 2e4c483075d3..b4fba5f0ab0d 100644
> > >>> --- a/io_uring/uring_cmd.c
> > >>> +++ b/io_uring/uring_cmd.c
> > >>> @@ -45,18 +45,21 @@ static inline void io_req_set_cqe32_extra(struct io_kiocb *req,
> > >>>  void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2)
> > >>>  {
> > >>>         struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
> > >>> +       struct io_ring_ctx *ctx = req->ctx;
> > >>>
> > >>>         if (ret < 0)
> > >>>                 req_set_fail(req);
> > >>>
> > >>>         io_req_set_res(req, ret, 0);
> > >>> -       if (req->ctx->flags & IORING_SETUP_CQE32)
> > >>> +       if (ctx->flags & IORING_SETUP_CQE32)
> > >>>                 io_req_set_cqe32_extra(req, res2, 0);
> > >>> -       if (req->ctx->flags & IORING_SETUP_IOPOLL)
> > >>> +       if (ctx->flags & IORING_SETUP_IOPOLL) {
> > >>>                 /* order with io_iopoll_req_issued() checking ->iopoll_complete */
> > >>>                 smp_store_release(&req->iopoll_completed, 1);
> > >>> -       else
> > >>> -               io_req_complete_post(req, 0);
> > >>> +               return;
> > >>> +       }
> > >>> +       req->io_task_work.func = io_req_task_complete;
> > >>> +       io_req_task_work_add(req);
> > >>>  }
> > >>
> > >> Since io_uring_cmd_done itself would be executing in task-work often
> > >> (always in case of nvme), can this be further optimized by doing
> > >> directly what this new task-work (that is being set up here) would
> > >> have done?
> > >> Something like below on top of your patch -
> > >>
> > >> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> > >> index e1929f6e5a24..7a764e04f309 100644
> > >> --- a/io_uring/uring_cmd.c
> > >> +++ b/io_uring/uring_cmd.c
> > >> @@ -58,8 +58,12 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2)
> > >>                 smp_store_release(&req->iopoll_completed, 1);
> > >>                 return;
> > >>         }
> > >> -       req->io_task_work.func = io_req_task_complete;
> > >> -       io_req_task_work_add(req);
> > >> +       if (in_task()) {
> > >> +               io_req_complete_defer(req);
> > >> +       } else {
> > >> +               req->io_task_work.func = io_req_task_complete;
> > >> +               io_req_task_work_add(req);
> > >> +       }
> > >>  }
> > >>  EXPORT_SYMBOL_GPL(io_uring_cmd_done);
> > >
> > > Good point, though I do think we should rework to pass in the flags
> > > instead. I'll take a look.
> >
> > Something like this, totally untested... And this may be more
> > interesting than it would appear, because the current:
> >
> >         io_req_complete_post(req, 0);
> >
> > in io_uring_cmd_done() is passing in that it has the CQ ring locked, but
> > that does not look like it's guaranteed? So this is more of a
> > correctness thing first and foremost, more so than an optimization.
> >
> > Hmm?
>
> When zero is passed to io_req_complete_post, it calls
> __io_req_complete_post(), which takes the CQ lock as the first thing.
> So the correct thing will happen. Am I missing something?

And because that per-completion CQ lock grab was there, the optimization is able to improve the numbers.
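
To make that concrete, here is a rough sketch of the locking difference
this thread is about. This is simplified stand-in code, not the actual
io_uring source: __fill_cqe(), commit_and_wake() and the sketch_node
member are hypothetical names standing in for the real CQE posting,
wakeup and list linkage.

	/* Completing from IRQ context one request at a time: every
	 * CQE posted pays a full completion_lock round trip. */
	static void sketch_complete_post(struct io_kiocb *req)
	{
		struct io_ring_ctx *ctx = req->ctx;

		spin_lock(&ctx->completion_lock);
		__fill_cqe(ctx, req);			/* post one CQE */
		commit_and_wake(ctx);
		spin_unlock(&ctx->completion_lock);
	}

	/* Completing via task_work / the deferred completion list:
	 * a whole batch is flushed under a single lock round trip. */
	static void sketch_flush_batch(struct io_ring_ctx *ctx,
				       struct list_head *batch)
	{
		struct io_kiocb *req;

		spin_lock(&ctx->completion_lock);
		list_for_each_entry(req, batch, sketch_node)
			__fill_cqe(ctx, req);		/* post N CQEs */
		commit_and_wake(ctx);
		spin_unlock(&ctx->completion_lock);
	}

With in_task() completions going to the deferred list, the lock cost
amortizes across the batch, which lines up with the IOPS gains quoted
above.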



