On Mon, Dec 20, 2021 at 5:24 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
>
> On 12/20/21 12:49 PM, Wander Costa wrote:
> > On Mon, Dec 20, 2021 at 4:38 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
> >>
> >> On 12/20/21 12:28 PM, Wander Lairson Costa wrote:
> >>> The running_trace_lock protects running_trace_list and is acquired
> >>> within the tracepoint, which implies disabled preemption. The
> >>> spinlock_t typed lock cannot be acquired with disabled preemption on
> >>> PREEMPT_RT because it becomes a sleeping lock.
> >>> The runtime of the tracepoint depends on the number of entries in
> >>> running_trace_list and has no limit. The blk-tracer is considered
> >>> debug code, and higher latencies here are okay.
> >>
> >> You didn't put a changelog in here. Was this one actually compiled? Was
> >> it runtime tested?
> >
> > It feels like the changelog reached the inboxes after the patch (at
> > least it did in mine). Would you like me to send a v6 in the hope that
> > things arrive in order?
>
> Not sure how you are sending them, but they don't appear to thread
> properly. But the changelog in the cover letter isn't really a
> changelog, it doesn't say what changed.
>

Sorry, I think I was too brief in my explanation. I am backporting this
patch to the RHEL 9 kernel (which runs kernel 5.14). I mistakenly
generated the v4 patch from that tree, but it lacks this piece:

@@ -1608,9 +1608,9 @@ static int blk_trace_remove_queue(struct request_queue *q)

 	if (bt->trace_state == Blktrace_running) {
 		bt->trace_state = Blktrace_stopped;
-		spin_lock_irq(&running_trace_lock);
+		raw_spin_lock_irq(&running_trace_lock);
 		list_del_init(&bt->running_list);
-		spin_unlock_irq(&running_trace_lock);
+		raw_spin_unlock_irq(&running_trace_lock);
 		relay_flush(bt->rchan);
 	}

which caused the build error. v5 adds that piece. Sorry again for the
confusion.

> --
> Jens Axboe
>
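
P.S. For anyone following along, here is a minimal standalone sketch of
the locking pattern the patch applies. This is not the actual kernel
code; the helper trace_note_example() and its argument are made up for
illustration, and it assumes a PREEMPT_RT-enabled kernel:

#include <linux/spinlock.h>
#include <linux/list.h>

/*
 * On PREEMPT_RT a spinlock_t becomes a sleeping lock, so it must not
 * be taken while preemption is disabled (e.g. inside a tracepoint).
 * A raw_spinlock_t always remains a true spinning lock, at the cost
 * of unbounded latency if the protected list grows large -- acceptable
 * here because blktrace is debug code.
 */
static DEFINE_RAW_SPINLOCK(running_trace_lock);	/* was DEFINE_SPINLOCK() */
static LIST_HEAD(running_trace_list);

/* Hypothetical helper showing the acquire/release pattern. */
static void trace_note_example(struct list_head *entry)
{
	/* raw_spin_lock_irq() stays non-sleeping even on PREEMPT_RT */
	raw_spin_lock_irq(&running_trace_lock);
	list_add(entry, &running_trace_list);
	raw_spin_unlock_irq(&running_trace_lock);
}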