On 6/10/22 18:21, Pavel Begunkov wrote:
On 6/8/22 12:12, Hao Xu wrote:
From: Hao Xu <howeyxu@xxxxxxxxxxx>
Add a new io_hash_bucket structure so that each bucket in cancel_hash
has its own spinlock. Using a per-entry lock for cancel_hash removes
some completion_lock invocations and eliminates contention between
different cancel_hash entries.
Signed-off-by: Hao Xu <howeyxu@xxxxxxxxxxx>
---
v1->v2:
- Add the per-entry lock to the poll/apoll task work code, which was
missed in v1
- Add a member in io_kiocb to track the req's index in cancel_hash
v2->v3:
- Make struct io_hash_bucket cacheline-aligned to avoid false sharing
between buckets.
- Re-calculate the hash value when deleting an entry from cancel_hash
(we cannot use struct io_poll to store the index since it is
already 64 bytes)
io_uring/cancel.c | 14 +++++++--
io_uring/cancel.h | 6 ++++
io_uring/fdinfo.c | 9 ++++--
io_uring/io_uring.c | 8 +++--
io_uring/io_uring_types.h | 2 +-
io_uring/poll.c | 64 +++++++++++++++++++++------------------
6 files changed, 65 insertions(+), 38 deletions(-)
diff --git a/io_uring/cancel.c b/io_uring/cancel.c
index 83cceb52d82d..bced5d6b9294 100644
--- a/io_uring/cancel.c
+++ b/io_uring/cancel.c
@@ -93,14 +93,14 @@ int io_try_cancel(struct io_kiocb *req, struct io_cancel_data *cd)
if (!ret)
return 0;
- spin_lock(&ctx->completion_lock);
ret = io_poll_cancel(ctx, cd);
if (ret != -ENOENT)
goto out;
+ spin_lock(&ctx->completion_lock);
if (!(cd->flags & IORING_ASYNC_CANCEL_FD))
ret = io_timeout_cancel(ctx, cd);
-out:
spin_unlock(&ctx->completion_lock);
+out:
return ret;
}
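The hunk above reorders the locking so that io_poll_cancel() runs without
->completion_lock; it only needs the lock of the bucket whose chain it walks,
and ->completion_lock is then taken solely for the timeout-cancel path. A rough
sketch of the per-bucket lookup, where the helper name and field names
(cancel_hash_bits, hash_node, cqe.user_data) are assumptions for illustration,
not the verbatim patch code:

/* Illustrative only: cancel a poll request by walking one bucket
 * under that bucket's own lock, derived from the cancel key.
 */
static int io_poll_cancel_sketch(struct io_ring_ctx *ctx,
				 struct io_cancel_data *cd)
{
	u32 index = hash_long(cd->data, ctx->cancel_hash_bits);
	struct io_hash_bucket *hb = &ctx->cancel_hash[index];
	struct io_kiocb *req;

	spin_lock(&hb->lock);
	hlist_for_each_entry(req, &hb->list, hash_node) {
		if (req->cqe.user_data == cd->data) {
			/* disarm and complete the request here ... */
			spin_unlock(&hb->lock);
			return 0;
		}
	}
	spin_unlock(&hb->lock);
	return -ENOENT;
}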
@@ -192,3 +192,13 @@ int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags)
io_req_set_res(req, ret, 0);
return IOU_OK;
}
+
+inline void init_hash_table(struct io_hash_bucket *hash_table, unsigned size)
Not inline, it can break builds
What do you mean? It compiles fine here.
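The concern, presumably, is that marking an externally visible function
`inline` while defining it in a .c file relies on compiler-specific inline
semantics; depending on compiler and flags, no out-of-line copy may be
emitted, and callers in other translation units then fail to link even
though this particular configuration builds. A likely shape of the helper
(an assumption, not the verbatim patch body) with the `inline` dropped:

/* Initialise each bucket's lock and list head; defined non-inline in
 * cancel.c with an extern prototype in cancel.h.
 */
void init_hash_table(struct io_hash_bucket *hash_table, unsigned size)
{
	unsigned int i;

	for (i = 0; i < size; i++) {
		spin_lock_init(&hash_table[i].lock);
		INIT_HLIST_HEAD(&hash_table[i].list);
	}
}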