On Wed, 29 Jun 2022 at 12:58, Maciej Fijalkowski <maciej.fijalkowski@xxxxxxxxx> wrote:
>
> When application runs in zero copy busy poll mode and does not receive a
> single packet but only sends them, it is currently impossible to get
> into napi_busy_loop() as napi_id is only marked on Rx side in
> xsk_rcv_check(). In there, napi_id is being taken from xdp_rxq_info
> carried by xdp_buff. From Tx perspective, we do not have access to it.
> What we have handy is the xsk pool.

The fact that the napi_id is not set unless it is set from the ingress
side is actually "by design". It's CONFIG_NET_RX_BUSY_POLL after all. I
followed the semantics of the regular busy-polling sockets. So, I
wouldn't say it's a fix! The busy-polling in sendmsg is really just
about "driving the RX busy-polling from another socket syscall".

That being said, I definitely see that this is useful for AF_XDP
sockets, but keep in mind that it sort of changes the behavior from
regular sockets. And we'll get different behavior for
copy-mode/zero-copy mode.

TL;DR, I think it's a good addition. One small nit below:

> +		__sk_mark_napi_id_once(sk, xs->pool->heads[0].xdp.rxq->napi_id);

Please hide this hideous pointer chasing in something neater:
xsk_pool_get_napi_id() or something.


Björn
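
For reference, a minimal sketch of the kind of wrapper being suggested,
assuming the napi_id really is reachable via pool->heads[0].xdp.rxq as
in the quoted hunk; the helper name and its placement are only
illustrative, not a confirmed upstream API:

/*
 * Hypothetical helper hiding the pointer chasing behind the pool, as
 * suggested in the review above. All buffers in the pool share the
 * same xdp_rxq_info, so the first head is as good as any.
 */
static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
{
	return pool->heads[0].xdp.rxq->napi_id;
}

The call site in the Tx path would then shrink to something like:

	__sk_mark_napi_id_once(sk, xsk_pool_get_napi_id(xs->pool));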