[PATCH liburing 1/2] __io_uring_get_cqe(): Use io_uring_for_each_cqe()

Use io_uring_for_each_cqe() inside __io_uring_get_cqe() so that the
io_uring_for_each_cqe() implementation can be tested from inside the
liburing project itself.

Cc: Stefan Hajnoczi <stefanha@xxxxxxxxxx>
Cc: Roman Penyaev <rpenyaev@xxxxxxx>
Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
---
 src/queue.c | 18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/src/queue.c b/src/queue.c
index 85e0c1e0d10f..bec363fc0ebf 100644
--- a/src/queue.c
+++ b/src/queue.c
@@ -14,26 +14,14 @@
 static int __io_uring_get_cqe(struct io_uring *ring,
 			      struct io_uring_cqe **cqe_ptr, int wait)
 {
-	struct io_uring_cq *cq = &ring->cq;
-	const unsigned mask = *cq->kring_mask;
 	unsigned head;
 	int ret;
 
-	*cqe_ptr = NULL;
-	head = *cq->khead;
 	do {
-		/*
-		 * It's necessary to use a read_barrier() before reading
-		 * the CQ tail, since the kernel updates it locklessly. The
-		 * kernel has the matching store barrier for the update. The
-		 * kernel also ensures that previous stores to CQEs are ordered
-		 * with the tail update.
-		 */
-		read_barrier();
-		if (head != *cq->ktail) {
-			*cqe_ptr = &cq->cqes[head & mask];
+		io_uring_for_each_cqe(ring, head, *cqe_ptr)
+			break;
+		if (*cqe_ptr)
 			break;
-		}
 		if (!wait)
 			break;
 		ret = io_uring_enter(ring->ring_fd, 0, 1,
-- 
2.22.0.410.gd8fdbe21b5-goog