(this time with a proper subject)

Hello,

I've been testing out the 5.10 rt kernel [1], and now I'm trying to
track down a networking latency issue. Generally this works well: I
have a user thread (priority 99) and the driver (macb) rx irq thread
(priority 98). With tracing (kernelshark is very cool btw) I can see
the normal flow, where the irq thread does the netif_receive_skb:

  softirq_raise
  softirq_entry
  napi_gro_receive_entry
  napi_gro_receive_exit
  netif_receive_skb
  sched_waking
  sched_wakeup
  softirq_exit
  sched_switch

Then the user thread does:

  sys_exit_recvmsg

But in the long-latency case, instead of going straight to
softirq_entry, the softirq_entry is delayed and runs from the
ksoftirqd task instead of the irq/30-eth%d task. The problem is that
before ksoftirqd runs, another lower-priority task runs (in this case
it's the macb tx interrupt thread at priority 80, but I've seen this
be user threads in slightly different tests). It seems the paths
diverge with a call to __kthread_should_park.

This is on an arm64 zynqmp platform. Any thoughts on how this could
be improved would be appreciated.

cyclictest seems OK; a 10-minute run with stress-ng running shows a
maximum latency of 47us:

# ./cyclictest -S -m -p 99 -d 0 -i 200 -D 10m
WARN: stat /dev/cpu_dma_latency failed: No such file or directory
policy: fifo: loadavg: 15.99 15.74 13.41 13/152 2605

T: 0 ( 5687) P:99 I:200 C:2999965 Min: 6 Act: 12 Avg: 11 Max: 39
T: 1 ( 5701) P:99 I:200 C:2999647 Min: 8 Act: 12 Avg: 10 Max: 40
T: 2 ( 5720) P:99 I:200 C:2999343 Min: 8 Act: 11 Avg: 10 Max: 41
T: 3 ( 5740) P:99 I:200 C:2999056 Min: 7 Act: 10 Avg: 11 Max: 47

-Paul

[1] I saw the v5.10-rc6-rt14 announcement yesterday, and rebased to
that for the tracing mentioned here.
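
P.S. To make my reading of the code easier to check, here's a heavily
simplified sketch of the deferral behaviour I think I'm hitting. It's
paraphrased from mainline kernel/softirq.c, not the literal 5.10-rt
code (the rt tree reworks softirq handling, so treat the details as my
assumption). The point is that __do_softirq() only restarts a bounded
number of times; whatever is still pending after that is punted to
ksoftirqd, which runs at SCHED_OTHER by default and therefore loses to
my priority-80 tx irq thread:

/*
 * Simplified sketch of mainline __do_softirq() -- NOT the literal
 * 5.10-rt code.  Pending softirqs are processed in a loop, but the
 * loop is bounded in both time and restarts before deferring the
 * rest to ksoftirqd.
 */
#define MAX_SOFTIRQ_TIME	(2 * HZ / 1000)	/* ~2ms */
#define MAX_SOFTIRQ_RESTART	10

static void __do_softirq_sketch(void)
{
	unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
	int max_restart = MAX_SOFTIRQ_RESTART;
	u32 pending;

restart:
	pending = local_softirq_pending();
	set_softirq_pending(0);

	/* ... invoke the handler for each bit set in 'pending' ... */

	pending = local_softirq_pending();
	if (pending) {
		if (time_before(jiffies, end) && !need_resched() &&
		    --max_restart)
			goto restart;

		/*
		 * Budget exhausted: hand the remaining work to the
		 * per-CPU ksoftirqd thread.  ksoftirqd is SCHED_OTHER
		 * by default, so any SCHED_FIFO task (e.g. the macb
		 * tx irq thread at prio 80) runs first -- which would
		 * explain the latency I'm seeing.
		 */
		wakeup_softirqd();
	}
}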
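
P.P.S. The __kthread_should_park() divergence at least makes sense to
me: ksoftirqd is a per-CPU smpboot thread, so its main loop is
smpboot_thread_fn(), which checks the park/stop state before calling
run_ksoftirqd(). Roughly (again paraphrased from kernel/smpboot.c as I
understand it, so take it as an assumption):

/*
 * Rough paraphrase of the smpboot_thread_fn() loop from
 * kernel/smpboot.c.  kthread_should_park() expands to
 * __kthread_should_park(current), which is why that symbol shows up
 * on the ksoftirqd path in the trace but not on the irq-thread path.
 */
static int smpboot_thread_fn_sketch(void *data)
{
	struct smpboot_thread_data *td = data;
	struct smp_hotplug_thread *ht = td->ht;

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);

		if (kthread_should_stop())
			return 0;

		if (kthread_should_park()) {	/* __kthread_should_park(current) */
			kthread_parkme();
			continue;
		}

		if (!ht->thread_should_run(td->cpu)) {
			schedule();			/* nothing pending */
		} else {
			__set_current_state(TASK_RUNNING);
			ht->thread_fn(td->cpu);		/* run_ksoftirqd() */
		}
	}
}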