On 2/18/22 13:05, Olivier Langlois wrote:
On Wed, 2022-02-16 at 20:14 +0800, Hao Xu wrote:
@@ -5583,6 +5650,7 @@ static void io_poll_task_func(struct io_kiocb *req, bool *locked)
struct io_ring_ctx *ctx = req->ctx;
int ret;
+ io_add_napi(req->file, req->ctx);
ret = io_poll_check_events(req);
if (ret > 0)
return;
@@ -5608,6 +5676,7 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
struct io_ring_ctx *ctx = req->ctx;
int ret;
+ io_add_napi(req->file, req->ctx);
ret = io_poll_check_events(req);
if (ret > 0)
return;
I have a doubt about these call sites for adding the napi_id into the
list. AFAIK, these are the functions called when the desired events are
already ready; therefore, it is too late to poll the device.
[1]
OTOH, my choice of doing it from io_file_get_normal() was perhaps a
poor one too, because it is premature.
Possibly the best location might be __io_arm_poll_handler()...
Hi Olivier,
Have you tried just issuing one recv/POLLIN request and observing the
napi_id? From my understanding of the network stack, the napi_id
of a socket won't be valid until it has received some packets, because
before that moment busy_poll doesn't know which hw queue to poll.
In other words, the idea of NAPI polling is: a socket's packets can
arrive on any hw queue of a net adapter, but we poll only the queue
that has just received some data. So to learn which queue that is,
some data must first arrive on a queue before we can do the
busy_poll. Correct me if I'm wrong, since I'm also a newbie at
network stuff...
I considered polling all the rx rings, but it seemed inefficient in
some tests by my colleague.
For question [1] you mentioned, I think it's ok, since:
- not all the data may be ready at that moment
- the polling is not just for that request; there may be more data coming
from the rx ring, since we usually use polling mode under high workload
pressure.
See the implementation of epoll busy_poll; it does the same thing.
Regards,
Hao