[PATCH v2] io_uring: improve task work cache utilization

While profiling task_work intensive workloads, I noticed that most of
the time in tctx_task_work() is spent stalled on loading 'req'. This is
one of the unfortunate side effects of using linked lists, particularly
when they end up being passed around.

Prefetch the next request, if there is one. There's a sufficient amount
of work in between that the prefetched request is available by the next
loop iteration.
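
To illustrate the pattern outside of io_uring, here's a minimal
userspace sketch of the same trick: walk an intrusive singly linked
list and prefetch the next element's containing struct while the
current one is processed. All names here (work_node, request,
node_to_req, process) are made up for the example, and
__builtin_prefetch stands in for the kernel's prefetch() from
<linux/prefetch.h>:

#include <stddef.h>
#include <stdio.h>

struct work_node {
	struct work_node *next;
};

struct request {
	struct work_node node;
	int id;
	char payload[192];	/* big enough to span several cachelines */
};

/* poor man's container_of() for this example */
#define node_to_req(n) \
	((struct request *)((char *)(n) - offsetof(struct request, node)))

static void process(struct request *req)
{
	printf("handling req %d\n", req->id);
}

static void walk(struct work_node *node)
{
	while (node) {
		struct work_node *next = node->next;

		/*
		 * Kick off the load of the next request's cacheline now;
		 * by the time process() returns and the loop comes back
		 * around, the data is (hopefully) already in cache.
		 */
		if (next)
			__builtin_prefetch(node_to_req(next));

		process(node_to_req(node));
		node = next;
	}
}

int main(void)
{
	struct request reqs[3] = { { .id = 0 }, { .id = 1 }, { .id = 2 } };

	reqs[0].node.next = &reqs[1].node;
	reqs[1].node.next = &reqs[2].node;
	walk(&reqs[0].node);
	return 0;
}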

While fiddling with the cache layout, move 'link' outside of the hot
completion cacheline. It's rarely used in hot workloads, so it's better
to bring in 'kbuf', which is used for networked loads with provided
buffers.
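
For layout changes like this, the usual tool is pahole (e.g. run
against vmlinux to dump struct io_kiocb with cacheline boundaries
annotated). A compile-time guard can also catch regressions; below is
a minimal sketch with a made-up struct, assuming 64-byte cachelines:

#include <stddef.h>

#define CACHELINE	64	/* assumed L1 line size */

struct fake_req {
	char	hot[56];	/* stand-in for the hot completion fields */
	void	*kbuf;		/* want this in the first cacheline */
	void	*link;		/* rarely used, next cacheline is fine */
};

/* Fails to build if a layout change pushes kbuf out of line 0. */
_Static_assert(offsetof(struct fake_req, kbuf) < CACHELINE,
	       "kbuf fell out of the hot cacheline");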

This reduces tctx_task_work() overhead from ~3% to 1-1.5% in my testing.

Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>

---

v2 - it's better not to move io_task_work, as that pushes both the
fixed buffers and the file refs to the next cacheline. Instead, just
prefetch the right cacheline. Move 'link' as well, which brings 'kbuf'
in where it should be.

diff --git a/fs/io_uring.c b/fs/io_uring.c
index a76e91fe277c..37150ca89289 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -928,7 +928,6 @@ struct io_kiocb {
 	struct io_wq_work_node		comp_list;
 	atomic_t			refs;
 	atomic_t			poll_refs;
-	struct io_kiocb			*link;
 	struct io_task_work		io_task_work;
 	/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
 	struct hlist_node		hash_node;
@@ -939,6 +938,7 @@ struct io_kiocb {
 	/* custom credentials, valid IFF REQ_F_CREDS is set */
 	/* stores selected buf, valid IFF REQ_F_BUFFER_SELECTED is set */
 	struct io_buffer		*kbuf;
+	struct io_kiocb			*link;
 	const struct cred		*creds;
 	struct io_wq_work		work;
 };
@@ -2450,6 +2450,11 @@ static void handle_prev_tw_list(struct io_wq_work_node *node,
 		struct io_wq_work_node *next = node->next;
 		struct io_kiocb *req = container_of(node, struct io_kiocb,
 						    io_task_work.node);
+		struct io_kiocb *nxt = container_of(next, struct io_kiocb,
+						    io_task_work.node);
+
+		if (next)
+			prefetch(nxt);
 
 		if (req->ctx != *ctx) {
 			if (unlikely(!*uring_locked && *ctx))
@@ -2482,6 +2487,11 @@ static void handle_tw_list(struct io_wq_work_node *node,
 		struct io_wq_work_node *next = node->next;
 		struct io_kiocb *req = container_of(node, struct io_kiocb,
 						    io_task_work.node);
+		struct io_kiocb *nxt = container_of(next, struct io_kiocb,
+						    io_task_work.node);
+
+		if (next)
+			prefetch(nxt);
 
 		if (req->ctx != *ctx) {
 			ctx_flush_and_put(*ctx, locked);

-- 
Jens Axboe
