On Tue, Aug 13, 2024 at 4:39 AM Mina Almasry <almasrymina@xxxxxxxxxx> wrote:
>
> On Mon, Aug 12, 2024 at 8:15 PM Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
> > BTW, Mina, the core should probably also check that XDP isn't installed
> > before / while the netmem is bound to a queue.
>
> Sorry if this is a noob question, but what is the proper check for
> this? I tried adding this to net_devmem_bind_dmabuf_to_queue():
>
>         if (xdp_rxq_info_is_reg(&rxq->xdp_rxq))
>                 return -EEXIST;
>
> But I quickly found out that in netif_alloc_rx_queues() we initialize
> all the rxq->xdp_rxq to state REGISTERED regardless of whether XDP is
> installed or not, so this check always triggers, even with no XDP
> program attached.
>
> Worth noting: GVE holds its own instance of xdp_rxq_info in
> gve_rx_ring, and seems to use that for its XDP information, not the
> one that hangs off of netdev_rx_queue in core.
>

To elaborate further: in order to disable binding a dmabuf and XDP on
the same rx queue for GVE, AFAICT the check would need to live inside
GVE. There I'd check whether gve_priv->xdp_prog is installed and
whether the gve_rx_ring->xdp_info is registered. If either is true, the
rx queue is XDP-enabled and should not be bound to a dmabuf. I think
that would work.

At the moment I can't think of a check inside core that would be
compatible with GVE, but above you're specifically asking for a check
in core. Any pointers to what you have in mind would be appreciated,
but I'll try to take a deeper look.

> Additionally, my understanding of XDP is limited, but why do we want
> to disable it? My understanding is that XDP is a kernel bypass that
> hands the data directly to userspace. In theory, at least, there
> should be no issue binding a dmabuf to a queue and then getting the
> data in that queue via an XDP program instead of via TCP sockets or
> io_uring. Is there some fundamental reason why dmabuf and XDP are
> incompatible?
>
> --
> Thanks,
> Mina

--
Thanks,
Mina
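
For concreteness, here is a minimal sketch of the GVE-side check
described in the mail above. It is illustrative only: the helper name
gve_rx_queue_is_xdp_enabled() is made up, and the struct members
(priv->xdp_prog, rx->xdp_info) follow the mail's wording rather than
the actual GVE driver source; only xdp_rxq_info_is_reg() is a real
kernel helper.

        #include <net/xdp.h>

        /* Hypothetical helper: returns true if the given rx queue is
         * set up for XDP, per the two conditions described in the mail
         * above. The field names are assumptions taken from the mail's
         * wording, not necessarily the real GVE member names.
         */
        static bool gve_rx_queue_is_xdp_enabled(struct gve_priv *priv,
                                                int rxq_idx)
        {
                struct gve_rx_ring *rx = &priv->rx[rxq_idx];

                /* A device-level XDP program is attached. */
                if (priv->xdp_prog)
                        return true;

                /* The ring's own xdp_rxq_info is registered. Unlike
                 * the copy in core's netdev_rx_queue (which
                 * netif_alloc_rx_queues() always marks REGISTERED),
                 * this one would only be registered when the queue is
                 * actually configured for XDP.
                 */
                if (xdp_rxq_info_is_reg(&rx->xdp_info))
                        return true;

                return false;
        }

A dmabuf binding path could then reject XDP-enabled queues with
something like:

        if (gve_rx_queue_is_xdp_enabled(priv, rxq_idx))
                return -EEXIST;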