One side effect that I have discovered while testing the napi_busy_poll patch: although it improves the network timing of the threads performing the busy poll, it degrades networking performance on the rest of the system.

I dedicate isolated CPUs to specific threads of my program, and my kernel is compiled with CONFIG_NO_HZ_FULL. One thing that I have never really understood is why there are still kernel threads assigned to the isolated CPUs:

$ CORENUM=2; ps -L -e -o pid,psr,cpu,cmd | grep -E "^[[:space:]]+[[:digit:]]+[[:space:]]+${CORENUM}"
   24   2 -    [cpuhp/2]
   25   2 -    [idle_inject/2]
   26   2 -    [migration/2]
   27   2 -    [ksoftirqd/2]
   28   2 -    [kworker/2:0-events]
   29   2 -    [kworker/2:0H]
   83   2 -    [kworker/2:1-mm_percpu_wq]

(My current understanding is that these are per-CPU kthreads, pinned to their core at creation and impossible to migrate away; at best they stay dormant unless work is funneled to that CPU.) It is very hard to keep a CPU 100% tickless if the kernel still assigns tasks to it. AFAIK, this question isn't really answered anywhere:

https://www.kernel.org/doc/html/latest/timers/no_hz.html
https://jeremyeder.com/2013/11/15/nohz_fullgodmode/

The threads running on their dedicated CPUs are the ones doing the NAPI busy polling. Because of that, usage on those CPUs ramps up to 100%, and a ping run on the side now shows horrible numbers:

[2022-02-19 07:27:54] INFO SOCKPP/ping ping results for 10 loops:
0. 104.16.211.191 rtt min/avg/max/mdev = 9.926/34.987/80.048/17.016 ms
1. 104.16.212.191 rtt min/avg/max/mdev = 9.861/34.934/79.986/17.019 ms
2. 104.16.213.191 rtt min/avg/max/mdev = 9.876/34.949/79.965/16.997 ms
3. 104.16.214.191 rtt min/avg/max/mdev = 9.852/34.927/79.977/17.019 ms
4. 104.16.215.191 rtt min/avg/max/mdev = 9.869/34.943/79.958/16.997 ms

Doing this:

echo 990000 > /proc/sys/kernel/sched_rt_runtime_us

as instructed here:

https://www.kernel.org/doc/html/latest/scheduler/sched-rt-group.html

fixes the problem:

$ ping 104.16.211.191
PING 104.16.211.191 (104.16.211.191) 56(84) bytes of data.
64 bytes from 104.16.211.191: icmp_seq=1 ttl=62 time=1.05 ms
64 bytes from 104.16.211.191: icmp_seq=2 ttl=62 time=0.812 ms
64 bytes from 104.16.211.191: icmp_seq=3 ttl=62 time=0.864 ms
64 bytes from 104.16.211.191: icmp_seq=4 ttl=62 time=0.846 ms
64 bytes from 104.16.211.191: icmp_seq=5 ttl=62 time=1.23 ms
64 bytes from 104.16.211.191: icmp_seq=6 ttl=62 time=0.957 ms
64 bytes from 104.16.211.191: icmp_seq=7 ttl=62 time=1.10 ms
^C
--- 104.16.211.191 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6230ms
rtt min/avg/max/mdev = 0.812/0.979/1.231/0.142 ms

If I had to guess, I would say that ksoftirqd on those CPUs is starving and not servicing the network packets, but I wish I had a better understanding of what is really happening, and whether it is possible to keep those processors 100% dedicated to my tasks while having the network softirqs handled somewhere else, so that I would not have to tweak /proc/sys/kernel/sched_rt_runtime_us to work around the issue...
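For reference, my reading of the sched-rt-group document is that the fix works by throttling: realtime tasks may consume at most sched_rt_runtime_us out of every sched_rt_period_us, and whatever is left over goes to normal tasks, ksoftirqd included. (The values in the comments below are the upstream defaults; I presume the runtime had been set to -1, i.e. throttling disabled, on this box beforehand, since the stock 950000 cap would already have left ksoftirqd some headroom.)

cat /proc/sys/kernel/sched_rt_period_us    # 1000000 us (1 s) upstream default
cat /proc/sys/kernel/sched_rt_runtime_us   # 950000 us upstream default; -1 disables throttling

# 990000/1000000 forces the busy-polling RT threads off the CPU for 10 ms
# of every second, which is apparently enough for ksoftirqd to drain the
# RX backlog.
echo 990000 > /proc/sys/kernel/sched_rt_runtime_us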
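As for having the network softirqs handled somewhere else: the closest thing I have found is to steer the NIC's IRQs away from the isolated cores, since NET_RX softirq work is normally processed on the CPU that took the hardware interrupt. A rough sketch of what I mean, with eth0 standing in for the real interface and CPUs 0-1 standing in for the housekeeping cores:

# Keep irqbalance (if it is running) from rewriting the affinities.
systemctl stop irqbalance

# Pin every IRQ line belonging to eth0 to the housekeeping CPUs, so that
# the hardware interrupts, and the NET_RX softirqs raised from them, land
# away from the isolated cores.
for irq in $(grep eth0 /proc/interrupts | cut -d: -f1); do
    echo 0-1 > /proc/irq/$irq/smp_affinity_list
done

The irqaffinity= boot parameter can set the same thing as a default for all IRQs. My understanding, though, is that busy polling pulls the NAPI processing of the polled queues onto the polling thread's CPU regardless, so IRQ steering should only matter for the traffic that is not consumed by the busy-poll sockets, which is exactly what the side ping is.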