On 11/1/24 2:16 PM, Mina Almasry wrote:
> On Tue, Oct 29, 2024 at 4:06 PM David Wei <dw@xxxxxxxxxxx> wrote:
>>
>> From: David Wei <davidhwei@xxxxxxxx>
>>
>> Set the page pool memory provider for the rx queue configured for zero
>> copy to io_uring. Then the rx queue is reset using
>> netdev_rx_queue_restart() and netdev core + page pool will take care of
>> filling the rx queue from the io_uring zero copy memory provider.
>>
>> For now, there is only one ifq so its destruction happens implicitly
>> during io_uring cleanup.
>>
>> Signed-off-by: David Wei <dw@xxxxxxxxxxx>
>> ---
>>  io_uring/zcrx.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++--
>>  io_uring/zcrx.h |  2 ++
>>  2 files changed, 86 insertions(+), 2 deletions(-)
>>
>> diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
>> index 477b0d1b7b91..3f4625730dbd 100644
>> --- a/io_uring/zcrx.c
>> +++ b/io_uring/zcrx.c
>> @@ -8,6 +8,7 @@
>>  #include <net/page_pool/helpers.h>
>>  #include <net/page_pool/memory_provider.h>
>>  #include <trace/events/page_pool.h>
>> +#include <net/netdev_rx_queue.h>
>>  #include <net/tcp.h>
>>  #include <net/rps.h>
>>
>> @@ -36,6 +37,65 @@ static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *nio
>>  	return container_of(owner, struct io_zcrx_area, nia);
>>  }
>>
>> +static int io_open_zc_rxq(struct io_zcrx_ifq *ifq, unsigned ifq_idx)
>> +{
>> +	struct netdev_rx_queue *rxq;
>> +	struct net_device *dev = ifq->dev;
>> +	int ret;
>> +
>> +	ASSERT_RTNL();
>> +
>> +	if (ifq_idx >= dev->num_rx_queues)
>> +		return -EINVAL;
>> +	ifq_idx = array_index_nospec(ifq_idx, dev->num_rx_queues);
>> +
>> +	rxq = __netif_get_rx_queue(ifq->dev, ifq_idx);
>> +	if (rxq->mp_params.mp_priv)
>> +		return -EEXIST;
>> +
>> +	ifq->if_rxq = ifq_idx;
>> +	rxq->mp_params.mp_ops = &io_uring_pp_zc_ops;
>> +	rxq->mp_params.mp_priv = ifq;
>> +	ret = netdev_rx_queue_restart(ifq->dev, ifq->if_rxq);
>> +	if (ret)
>> +		goto fail;
>> +	return 0;
>> +fail:
>> +	rxq->mp_params.mp_ops = NULL;
>> +	rxq->mp_params.mp_priv = NULL;
>> +	ifq->if_rxq = -1;
>> +	return ret;
>> +}
>> +
>
> I don't see a CAP_NET_ADMIN check. Likely I missed it. Is that done
> somewhere? Binding user memory to an rx queue needs to be a privileged
> operation.

There's only one caller of this, and it literally has a CAP_NET_ADMIN
check at the very top. Patch 9 adds the registration.

-- 
Jens Axboe
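
For context, a minimal sketch of the caller-side gate Jens is describing:
the privilege check sits once in the registration path, before
io_open_zc_rxq() can be reached, so the queue-binding helper itself does
not repeat it. This is not the actual patch code; the function name below
is hypothetical, and the real registration caller lands in patch 9.

#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/rtnetlink.h>

/*
 * Hypothetical illustration only, not the code from this series: it just
 * shows the shape of a caller-side CAP_NET_ADMIN gate ahead of
 * io_open_zc_rxq().
 */
static int io_register_zcrx_ifq_example(struct io_zcrx_ifq *ifq,
					unsigned int ifq_idx)
{
	int ret;

	/* Binding user memory to an rx queue is a privileged operation. */
	if (!capable(CAP_NET_ADMIN))
		return -EPERM;

	/* io_open_zc_rxq() asserts RTNL, so the caller takes it. */
	rtnl_lock();
	ret = io_open_zc_rxq(ifq, ifq_idx);
	rtnl_unlock();
	return ret;
}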