Re: [PATCH V4 4/8] io_uring: support SQE group

On Tue, Aug 06, 2024 at 10:26:50PM +0800, Ming Lei wrote:
> On Tue, Aug 06, 2024 at 04:38:08PM +0800, Ming Lei wrote:
> > On Mon, Jul 29, 2024 at 02:58:58PM +0100, Pavel Begunkov wrote:
> > > On 7/25/24 11:33, Ming Lei wrote:
> > > > On Wed, Jul 24, 2024 at 02:41:38PM +0100, Pavel Begunkov wrote:
> > > > > On 7/23/24 15:34, Ming Lei wrote:
> > > ...
> > > > > > But grp_refs is dropped after the io-wq request reference drops to
> > > > > > zero, so both the io-wq and non-io-wq code paths can be unified
> > > > > > w.r.t. dealing with grp_refs; meanwhile it needn't be updated
> > > > > > in an extra (io-wq) context.
> > > > > 
> > > > > Let's try to describe how it can work. First, I'm only describing
> > > > > the dep mode for simplicity. And for the argument's sake we can say
> > > > > that all CQEs are posted via io_submit_flush_completions.
> > > > > 
> > > > > io_req_complete_post() {
> > > > > 	if (flags & GROUP) {
> > > > > 		req->io_task_work.func = io_req_task_complete;
> > > > > 		io_req_task_work_add(req);
> > > > > 		return;
> > > > > 	}
> > > > > 	...
> > > > > }
> > > > 
> > > > OK.
> > > > 
> > > > io_wq_free_work() still needs to change so it doesn't deal with the
> > > > next link & ignores skip_cqe, because group handling (
> > > 
> > > No, it doesn't need to know about all that.
> > > 
> > > > CQE posting, link advance) is completely moved into
> > > > io_submit_flush_completions().
> > > 
> > > It has never been guaranteed that io_req_complete_post()
> > > will be the one completing the request;
> > > io_submit_flush_completions() can always happen.
> > > 
> > > 
> > > struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
> > > {
> > > 	...
> > > 	if (req_ref_put_and_test(req)) {
> > > 		nxt = io_req_find_next(req);
> > > 		io_free_req(req);
> > > 	}
> > > }
> > > 
> > > We queue linked requests only when all refs are dropped, and
> > > the group handling in my snippet is done before we drop the
> > > owner's reference.
> > > 
> > > IOW, you won't hit io_free_req() in io_wq_free_work() for a
> > > leader unless all members in its group have completed and
> > > the leader has already gone through the code dropping those shared
> > > ublk buffers.
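
If I read the above right, the ordering is roughly the following. This is only
a sketch; io_flush_group_leader() and io_drop_group_buffers() are stand-in
names, not anything in the patch:

static void io_flush_group_leader(struct io_kiocb *lead)
{
	/* members still in flight: the leader's group work stays pending */
	if (lead->grp_refs)
		return;

	/* all members done: release the shared (ublk) buffers first */
	io_drop_group_buffers(lead);		/* stand-in helper */

	/* only after that may the last reference go away ... */
	if (req_ref_put_and_test(lead))
		io_free_req(lead);		/* ... so io_free_req() can't run early */
}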
> > 
> > If io_free_req() won't be called for the leader, the leader won't be added
> > to ->compl_reqs, and its CQE has to be generated when all members are
> > completed in __io_submit_flush_completions().
> > 
> > For !io_wq, we can align with this by not completing the leader in
> > io_req_complete_defer().
> > 
> > The above implementation looks simpler and more readable.
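
For the !io_wq side mentioned above, something like the following is what I had
in mind (sketch only; io_group_members_done() is a stand-in helper, and
REQ_F_SQE_GROUP stands for the group flag):

static inline void io_req_complete_defer(struct io_kiocb *req)
{
	struct io_submit_state *state = &req->ctx->submit_state;

	lockdep_assert_held(&req->ctx->uring_lock);

	/* don't queue a group leader for CQE posting until its members are done */
	if ((req->flags & REQ_F_SQE_GROUP) && !io_group_members_done(req))
		return;

	wq_list_add_tail(&req->comp_list, &state->compl_reqs);
}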
> 
> Thinking about this issue further, it looks like the above is still not doable:
> 
> 1) to avoid hitting io_free_req(), an extra req->refs has to be grabbed, but
> then the leader's completion may not be notified.
> 
> 2) issue 1) may be avoided by holding one leader refcount for each member,
> calling req_ref_put_and_test(leader) when the leader or a member is
> completed, and posting the leader's CQE when the leader's refs drop to zero.
> But there are other issues (see the sketch below):
> 
> 	- any other req_ref_inc_not_zero() or req_ref_get() may cause the
> 	leader's CQE post to be missed
> 
> 	- the req_ref_put_and_test() in io_free_batch_list() can be called
> 	on the group leader unexpectedly.
> 
> Both 1) and 2) need to touch io_req_complete_defer() for completing the
> group leader.
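
For reference, scheme 2) would be roughly the following (sketch only; the
grp_leader field and io_post_leader_cqe() helper are illustrative names):

static void io_group_member_done(struct io_kiocb *member)
{
	struct io_kiocb *lead = member->grp_leader;	/* illustrative field */

	/* each member holds one reference on its leader ... */
	if (req_ref_put_and_test(lead))
		io_post_leader_cqe(lead);		/* ... and the last put posts the CQE */
}

and any unrelated req_ref_get()/req_ref_put_and_test() on the leader, such as
the one in io_free_batch_list(), changes when that last put happens.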

Oops, it looks like I missed the point about completing the request directly
from io_req_complete_post() in the following link:

https://lore.kernel.org/linux-block/0fa0c9b9-cfb9-4710-85d0-2f6b4398603c@xxxxxxxxx/

Please ignore yesterday's replies; now I should have the whole
picture of your point.

Thanks,
Ming




