Re: [PATCH v14 07/11] io_uring/zcrx: set pp memory provider for an rx queue

On 2/18/25 22:06, David Wei wrote:
On 2025-02-18 11:40, Kees Bakker wrote:
Op 15-02-2025 om 01:09 schreef David Wei:
Set the page pool memory provider for the rx queue configured for zero
copy to io_uring. Then the rx queue is reset using
netdev_rx_queue_restart() and netdev core + page pool will take care of
filling the rx queue from the io_uring zero copy memory provider.

For now, there is only one ifq so its destruction happens implicitly
during io_uring cleanup.

Reviewed-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: David Wei <dw@xxxxxxxxxxx>
---
   io_uring/zcrx.c | 49 +++++++++++++++++++++++++++++++++++++++++--------
   1 file changed, 41 insertions(+), 8 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 8833879d94ba..7d24fc98b306 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
[...]
@@ -444,6 +475,8 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
         if (ctx->ifq)
           io_zcrx_scrub(ctx->ifq);
+
+    io_close_queue(ctx->ifq);
If ctx->ifq is NULL (which seems quite possible given the if statement
above), then you'll get a NULL pointer dereference in io_close_queue().

The only caller of io_shutdown_zcrx_ifqs() is io_ring_exit_work() which
checks for ctx->ifq first. That does mean the ctx->ifq check is
redundant in this function though.

And even though the check is outside of the lock, it is actually well
synchronised, but in any case we should just make it saner. Thanks
for letting me know.

--
Pavel Begunkov




