On Thu, Jul 14, 2022 at 11:40:46AM +0800, Ming Lei wrote:
> On Mon, Jul 11, 2022 at 09:22:54PM +0300, Sagi Grimberg wrote:
> > > > > Use the leftover space to carve a 'next' field that enables linking of
> > > > > io_uring_cmd structs. Also introduce a list head and a few helpers.
> > > > >
> > > > > This is in preparation to support nvme-multipath, allowing multiple
> > > > > uring passthrough commands to be queued.
> > > >
> > > > It's not clear to me why we need linking at that level?
> > >
> > > I think the attempt is to allow something like blk_steal_bios that
> > > nvme leverages for io_uring_cmd(s).
> >
> > I'll rephrase because now that I read it, I think my phrasing is
> > confusing.
> >
> > I think the attempt is to allow something like blk_steal_bios that
> > nvme leverages, but for io_uring_cmd(s). Essentially, allow io_uring_cmd
> > to be linked in a requeue_list.
>
> io_uring_cmd is 1:1 with the passthrough request, so I am wondering why
> not retry on the io_uring_cmd instance directly via
> io_uring_cmd_execute_in_task(). I feel it isn't necessary to link
> io_uring_cmd into a list.
If a path is not available, the retry is not done immediately; rather, we
wait for a path to become available (as the underlying controller may
still be resetting/connecting). The list helps because the command gets
added to it (and the submitter/io_uring gets control back), and the retry
is done at the exact point in time when a path becomes available.
But yes, it won't harm if we do a couple of retries even if the path is
known not to be available (somewhat like iopoll), as this situation is
not common. And with that scheme, we don't have to link io_uring_cmd
into a list.
Sagi: does this sound fine to you?