Add GRO support to the cpumap codebase, moving the cpu_map_entry
kthread to a NAPI kthread pinned to the selected CPU. Introduce the
napi_init_for_gro() utility routine to initialize the napi_struct
subfields that do not depend on the net_device pointer, so that
cpu_map_entry does not need to take a net_device dependency.

This series has been tested by Daniel using tcp_rr and tcp_stream:

./tcp_rr -c -H $TASK_IP -p 50,90,99 -T4 -F8 -l30
./tcp_stream -c -H $TASK_IP -T8 -F16 -l30

Baseline (again):

         Transactions  Latency P50 (s)  Latency P90 (s)  Latency P99 (s)  Throughput (Mbit/s)
Run 1    2560252       0.00009087       0.00010495       0.00011647       15479.31
Run 2    2665517       0.00008575       0.00010239       0.00013311       15162.48
Run 3    2755939       0.00008191       0.00010367       0.00012287       14709.04
Run 4    2595680       0.00008575       0.00011263       0.00012671       15373.06
Run 5    2841865       0.00007999       0.00009471       0.00012799       15234.91
Average  2683850.6     0.000084854      0.00010367       0.00012543       15191.76

cpumap NAPI patches v2:

         Transactions  Latency P50 (s)  Latency P90 (s)  Latency P99 (s)  Throughput (Mbit/s)
Run 1    2577838       0.00008575       0.00012031       0.00013695       19914.56
Run 2    2729237       0.00007551       0.00013311       0.00017663       20140.92
Run 3    2689442       0.00008319       0.00010495       0.00013311       19887.48
Run 4    2862366       0.00008127       0.00009471       0.00010623       19374.49
Run 5    2700538       0.00008319       0.00010367       0.00012799       19784.49
Average  2711884.2     0.000081782      0.00011135       0.000136182      19820.388

Delta    1.04%         -3.62%           7.41%            8.57%            30.47%

To be fully transparent, as I understand it the above results were
obtained by running the proposed series on a different kernel version
than the baseline.

---
Lorenzo Bianconi (3):
      net: Add napi_init_for_gro utility routine
      net: add napi_threaded_poll to netdevice.h
      bpf: cpumap: Add gro support

 include/linux/netdevice.h |   3 ++
 kernel/bpf/cpumap.c       | 125 +++++++++++++++++++---------------------------
 net/core/dev.c            |  21 +++++---
 3 files changed, 70 insertions(+), 79 deletions(-)
---
base-commit: c8d02b547363880d996f80c38cc8b997c7b90725
change-id: 20241129-cpumap-gro-431ffd03aa5e

Best regards,
--
Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
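
P.S. For reviewers less familiar with the NAPI init split described
above, here is a rough sketch of how the pieces could fit together.
This is an illustration only, not code from the series: the
napi_init_for_gro() signature, the cpu_map_gro_poll() helper and the
napi member on struct bpf_cpu_map_entry used below are assumptions.

#include <linux/netdevice.h>
#include <linux/kthread.h>
#include <linux/numa.h>
#include <linux/err.h>

/* Hypothetical poll callback: drain the cpumap entry's queue and feed
 * the resulting skbs to GRO. */
static int cpu_map_gro_poll(struct napi_struct *napi, int budget)
{
	int done = 0;

	/* Pull up to @budget frames from the entry's ptr_ring, build
	 * skbs and hand them to napi_gro_receive() for aggregation. */

	if (done < budget)
		napi_complete_done(napi, done);
	return done;
}

static int cpu_map_entry_setup_napi(struct bpf_cpu_map_entry *rcpu, u32 cpu)
{
	/* Initialize only the napi_struct subfields that do not depend
	 * on a net_device pointer (assumed signature). */
	napi_init_for_gro(&rcpu->napi, cpu_map_gro_poll, NAPI_POLL_WEIGHT);

	/* Run the napi poll loop in a dedicated kthread pinned to the
	 * CPU selected for this entry; napi_threaded_poll() is exposed
	 * in netdevice.h by patch 2 of the series. */
	rcpu->kthread = kthread_create_on_node(napi_threaded_poll,
					       &rcpu->napi, NUMA_NO_NODE,
					       "cpumap/%d", cpu);
	if (IS_ERR(rcpu->kthread))
		return PTR_ERR(rcpu->kthread);

	kthread_bind(rcpu->kthread, cpu);
	wake_up_process(rcpu->kthread);
	return 0;
}

The kthread_create_on_node() + kthread_bind() + wake_up_process()
sequence is the canonical way to pin a kthread before it starts
running, which matches how the current cpumap code spawns its kthread.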