On Mon, 2023-10-02 at 12:05 +0200, Sebastian Andrzej Siewior wrote:
> On 2023-09-26 15:15:46 [+0200], Mike Galbraith wrote:
> > On Tue, 2023-09-26 at 12:30 +0000, Clark Williams wrote:
> > > On Tue, Sep 26, 2023 at 7:05 AM Mike Galbraith <efault@xxxxxx> wrote:
> > > > On Mon, 2023-09-25 at 18:30 +0200, g.medini@xxxxxxxxxxx wrote:
> > > > > # tracer: wakeup_rt
> > > > > #
> > > > > # wakeup_rt latency trace v1.1.5 on 5.19.0-rt10
> > > > > # --------------------------------------------------------------------
> > > > > # latency: 357 us, #401/401, CPU#0 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:2)
> > > > > # -----------------
> > > > > # | task: ktimers/0-15 (uid:0 nice:0 policy:1 rt_prio:1)
> > > > > # -----------------
> > > >
> > > > The first thing that pokes me in the eye is that priority.  I'd bump
> > > > that a lot.  As it sits, anything high priority that ktimers may wake,
> > > > when it finally gets the CPU, gets to enjoy all the latency ktimers is
> > > > eating in this trace, due to it having been deemed relatively
> > > > unimportant.
> > >
> > > Hmmm, IRQs are running at FIFO:50 by default.  Do we want the ktimer
> > > running above the IRQ service thread?
> >
> > I think so yeah, quick like bunny wakeup resource should punch through.
>
> Why not perform all wakes from hardirq then?

Sounds good to me iff we're talking about a dinky irq-width delta.  Having
threads bundle up what are otherwise irq-context cycles is loaded with
goodness, but static priority leaves you holding a bill, and paying context
switch fees on top.  "Pick your poison carefully" applies, I suppose.  A
tiny swig of hemlock can't do _too_ much harm, right ;-)

	-Mike
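
For anyone wanting to try the "bump that a lot" suggestion, a minimal sketch
using util-linux's chrt.  The PID 15 comes from the trace header above
(ktimers/0-15); the target priority of 60 is purely an illustrative value
above the FIFO:50 default irq threads, not a number anyone in this thread
recommended:

```shell
# Show the priority range each scheduling policy supports on this box:
chrt -m

# Query the thread's current policy/priority (policy:1 == SCHED_FIFO),
# then bump it to SCHED_FIFO priority 60 (both need the right PID,
# and the second needs root):
#   chrt -p 15
#   chrt -f -p 60 15
```

Note this only papers over the trace at hand; as discussed above, the
real trade-off is between hardirq-context wakeups and the context-switch
cost a threaded ktimers adds.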