Re: [PATCH] io_uring/poll: use IOU_F_TWQ_LAZY_WAKE for wakeups

On 11/8/23 16:14, Jens Axboe wrote:
> In the spirit of getting some closure/progress on this, how about this
> for starters? Disables lazy wake for poll_exclusive, and provides a flag
> that can otherwise be set to disable it as well.

I'm not that concerned about the non-multishot accept case specifically;
if anything, we can let it slide by saying that backlog=1 is unreasonable
there.

A bigger problem is that, for the purpose of counting the nr_wait passed
into the waiting syscall, users must never assume that a multishot
request will produce more than one completion.

For example, this is not allowed:

queue_multishot_{rcv,accept}();
cq_wait(2);
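
A minimal liburing sketch of the same trap (listen_fd is a hypothetical,
already-listening socket; assumes a DEFER_TASKRUN ring, where running
completion task work is what wakes the waiter):

#include <liburing.h>

static void wait_on_multishot(int listen_fd)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;

	io_uring_queue_init(8, &ring, IORING_SETUP_SINGLE_ISSUER |
				      IORING_SETUP_DEFER_TASKRUN);

	sqe = io_uring_get_sqe(&ring);
	/* one multishot request that may eventually post many CQEs */
	io_uring_prep_multishot_accept(sqe, listen_fd, NULL, NULL, 0);

	/* not allowed: lazy wake may credit this request with a single
	 * completion, so waiting for two can sleep indefinitely even
	 * after two connections have arrived */
	io_uring_submit_and_wait(&ring, 2);

	/* allowed: count the multishot request as at most one completion,
	 * i.e. io_uring_submit_and_wait(&ring, 1); */
}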

So we can just leave the enablement patch alone and say it's the only
reasonable behaviour, and that it was the intended way from the beginning
(hoping nobody will complain about it). Or do it via a flag, perhaps
SETUP_*.

For the multishot part I described the rules above. As for the problem
in general, it comes from interdependencies between requests, so the rule
is the vague "there should be no dependencies between requests", though
I'm not sure we can spell out the precise semantics at the moment.
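
A hypothetical instance of such a dependency, in the same pseudo notation
as above (names made up; a request/response exchange over one socket,
where the peer sends its second message only after we reply to the
first):

queue_recv(sock);	/* completes when the peer sends msg 1 */
queue_recv(sock);	/* completes when the peer sends msg 2, which it
			 * only does after we reply to msg 1 */
cq_wait(2);		/* the wake is deferred until two completions are
			 * accounted; we never process msg 1, never reply,
			 * msg 2 never arrives */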


> diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
> index d3009d56af0b..03401c6ce5bb 100644
> --- a/include/linux/io_uring_types.h
> +++ b/include/linux/io_uring_types.h
> @@ -431,6 +431,7 @@ enum {
>  	/* keep async read/write and isreg together and in order */
>  	REQ_F_SUPPORT_NOWAIT_BIT,
>  	REQ_F_ISREG_BIT,
> +	REQ_F_POLL_NO_LAZY_BIT,
>  
>  	/* not a real bit, just to check we're not overflowing the space */
>  	__REQ_F_LAST_BIT,
> @@ -498,6 +499,8 @@ enum {
>  	REQ_F_CLEAR_POLLIN	= BIT(REQ_F_CLEAR_POLLIN_BIT),
>  	/* hashed into ->cancel_hash_locked, protected by ->uring_lock */
>  	REQ_F_HASH_LOCKED	= BIT(REQ_F_HASH_LOCKED_BIT),
> +	/* don't use lazy poll wake for this request */
> +	REQ_F_POLL_NO_LAZY	= BIT(REQ_F_POLL_NO_LAZY_BIT),
>  };
>  
>  typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
> diff --git a/io_uring/poll.c b/io_uring/poll.c
> index d38d05edb4fa..4fed5514c379 100644
> --- a/io_uring/poll.c
> +++ b/io_uring/poll.c
> @@ -366,11 +366,16 @@ void io_poll_task_func(struct io_kiocb *req, struct io_tw_state *ts)
>  
>  static void __io_poll_execute(struct io_kiocb *req, int mask)
>  {
> +	unsigned flags = 0;
> +

Why a flag when you can just check the exclusive flag in the poll
entry? (A sketch of that alternative follows the hunk below.)

>  	io_req_set_res(req, mask, 0);
>  	req->io_task_work.func = io_poll_task_func;
>  
>  	trace_io_uring_task_add(req, mask);
> -	__io_req_task_work_add(req, IOU_F_TWQ_LAZY_WAKE);
> +
> +	if (!(req->flags & REQ_F_POLL_NO_LAZY))
> +		flags = IOU_F_TWQ_LAZY_WAKE;
> +	__io_req_task_work_add(req, flags);
>  }
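
To illustrate that alternative: a rough sketch, not the applied patch,
assuming the poll entry's event mask is mirrored into req->apoll_events
when the poll handler is armed:

static void __io_poll_execute(struct io_kiocb *req, int mask)
{
	unsigned flags = 0;

	io_req_set_res(req, mask, 0);
	req->io_task_work.func = io_poll_task_func;

	trace_io_uring_task_add(req, mask);

	/* exclusive wakeups skip the other waiters, so deferring this
	 * task work can stall event delivery entirely */
	if (!(req->apoll_events & EPOLLEXCLUSIVE))
		flags = IOU_F_TWQ_LAZY_WAKE;
	__io_req_task_work_add(req, flags);
}

The request flag in the patch above is more general, though, since it
also lets opcodes opt out of lazy wake for reasons other than exclusive
waits.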

--
Pavel Begunkov
