On Wed, 2022-02-16 at 11:12 +0800, Hao Xu wrote:
>
> I read your code, I guess the thing is the sk->napi_id is set from
> skb->napi_id and the latter is set when the net device received some
> packets.
> > With my current knowledge, it makes little sense why busy polling
> > would not be possible with RPS. Also, what exactly is a NAPI
> > device is quite nebulous to me... Looking into the Intel igb
> > driver code, it seems like 1 NAPI device is created for each
> > interrupt vector/Rx buffer of the device.
> AFAIK, yes, each Rx ring has its own NAPI.
> >
> > Bottom line, it seems like I have fallen into a new rabbit hole.
> > It may take me a day or 2 to figure it all out... you are welcome
> > to enlighten me if you know a thing or 2 about those topics... I
> > am kinda lost right now...

My dive into the net/core code has been beneficial!

I have found out that the reason I did not have a napi_id on my
sockets is that I had introduced a local SOCKS proxy into my setup.
Since the proxied connections go through the loopback device, NAPI is
de facto taken out of the picture.

After fixing this issue, I have started to test my code.

The modified io_cqring_wait() code does not work. With a pending
recv() request, the moment napi_busy_loop() is called, the recv()
request fails with EFAULT. I suspect this is because
io_busy_loop_end() does something that is not allowed from inside
napi_busy_loop().

The simpler code change inside __io_sq_thread() might work, but I
still have to validate it.

I'll update later today or tomorrow with the latest results and
discoveries!

Greetings,
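
P.S.: for anyone wanting to check the same thing on their own setup:
the napi_id recorded on a socket can be inspected from user space with
the SO_INCOMING_NAPI_ID socket option (requires
CONFIG_NET_RX_BUSY_POLL). A minimal sketch of such a check (the
ip/port arguments are just for illustration); it should print 0 for a
connection that went through loopback:

/* napi_id_check.c - print the NAPI id recorded on a connected socket.
 * Build: gcc -o napi_id_check napi_id_check.c
 * 0 means the socket never saw a NAPI context (e.g. loopback).
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
	struct sockaddr_in addr = { .sin_family = AF_INET };
	unsigned int napi_id = 0;
	socklen_t len = sizeof(napi_id);
	char buf[16];
	int fd;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <ip> <port>\n", argv[0]);
		return 1;
	}
	addr.sin_port = htons(atoi(argv[2]));
	inet_pton(AF_INET, argv[1], &addr.sin_addr);

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr,
			      sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}

	/* The napi_id is only recorded on the socket after it has
	 * received packets, so wait for the peer to send something
	 * before querying it. */
	read(fd, buf, sizeof(buf));

	if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
		       &napi_id, &len) < 0)
		perror("getsockopt(SO_INCOMING_NAPI_ID)");
	else
		printf("napi_id = %u\n", napi_id);

	close(fd);
	return 0;
}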
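
P.P.S.: more context on the EFAULT suspicion: napi_busy_loop() calls
its loop_end callback with preemption disabled, so the callback must
not sleep; in particular, a user-memory access that faults cannot be
serviced there and would come back as EFAULT, which is exactly the
symptom I am seeing. Roughly the shape of the hook I am experimenting
with (a simplified sketch, not the actual patch; ctx->napi_id stands
in for however the ring ends up tracking its NAPI id):

#include <net/busy_poll.h>

/* Called repeatedly from inside napi_busy_loop(); preemption is
 * disabled here, so no sleeping and no user-memory access. */
static bool io_busy_loop_end(void *arg, unsigned long start_time)
{
	struct io_ring_ctx *ctx = arg;

	/* Stop busy polling once a completion is available or the
	 * busy-poll timeout has elapsed. */
	return io_cqring_events(ctx) || busy_loop_timeout(start_time);
}

/* ...and, in io_cqring_wait(), before sleeping on the cq: */
	if (ctx->napi_id)
		napi_busy_loop(ctx->napi_id, io_busy_loop_end, ctx,
			       false, BUSY_POLL_BUDGET);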