Re: [PATCH 1/2] io_uring: split logic of force_nonblock


 



On 2021/10/18 8:27 PM, Pavel Begunkov wrote:
On 10/18/21 11:29, Hao Xu wrote:
Currently force_nonblock stands for two meanings:
  - nowait or not
  - in an io-worker or not (holding uring_lock or not)

Let's split the logic into two flags, IO_URING_F_NONBLOCK and
IO_URING_F_UNLOCKED, for the convenience of the next patch.

Signed-off-by: Hao Xu <haoxu@xxxxxxxxxxxxxxxxx>
---
  fs/io_uring.c | 50 ++++++++++++++++++++++++++++----------------------
  1 file changed, 28 insertions(+), 22 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b6da03c26122..727cad6c36fc 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -199,6 +199,7 @@ struct io_rings {
  enum io_uring_cmd_flags {
      IO_URING_F_COMPLETE_DEFER    = 1,
+    IO_URING_F_UNLOCKED        = 2,
      /* int's last bit, sign checks are usually faster than a bit test */
      IO_URING_F_NONBLOCK        = INT_MIN,
  };
@@ -2926,7 +2927,7 @@ static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
              struct io_ring_ctx *ctx = req->ctx;
              req_set_fail(req);
-            if (!(issue_flags & IO_URING_F_NONBLOCK)) {
+            if (issue_flags & IO_URING_F_UNLOCKED) {
                  mutex_lock(&ctx->uring_lock);
                  __io_req_complete(req, issue_flags, ret, cflags);
                  mutex_unlock(&ctx->uring_lock);
@@ -3036,7 +3037,7 @@ static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
  {
      struct io_buffer *kbuf = req->kbuf;
      struct io_buffer *head;
-    bool needs_lock = !(issue_flags & IO_URING_F_NONBLOCK);
+    bool needs_lock = issue_flags & IO_URING_F_UNLOCKED;
      if (req->flags & REQ_F_BUFFER_SELECTED)
          return kbuf;
@@ -3341,7 +3342,7 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
      int ret;
      /* submission path, ->uring_lock should already be taken */
-    ret = io_import_iovec(rw, req, &iov, &iorw->s, IO_URING_F_NONBLOCK);
+    ret = io_import_iovec(rw, req, &iov, &iorw->s, 0);
      if (unlikely(ret < 0))
          return ret;
@@ -3452,6 +3453,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
      struct iovec *iovec;
      struct kiocb *kiocb = &req->rw.kiocb;
      bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+    bool in_worker = issue_flags & IO_URING_F_UNLOCKED;

io_read shouldn't have a notion of worker or whatever. I'd say let's
leave only force_nonblock here.

I assume 2/2 relies on it, but if so you can make sure it ends up
in sync (!force_nonblock) at some point if all other ways fail.
I re-read the code and found you're right; will send v3.


