4.1.27-rt31-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

Trace events like raw_syscalls always show a preempt count of one. The
reason is that on PREEMPT kernels rcu_read_lock_sched_notrace()
increases the preemption counter, and the function recording the
counter is called within the RCU section.

Cc: stable-rt@xxxxxxxxxxxxxxx
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
[ Changed this to upstream version. See commit e947841c0dce ]
Signed-off-by: Steven Rostedt <rostedt@xxxxxxxxxxx>
---
 kernel/trace/trace_events.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 64c1ac17af8f..b83d6a4d3912 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -200,6 +200,14 @@ void *ftrace_event_buffer_reserve(struct ftrace_event_buffer *fbuffer,
 	local_save_flags(fbuffer->flags);
 	fbuffer->pc = preempt_count();
+	/*
+	 * If CONFIG_PREEMPT is enabled, then the tracepoint itself disables
+	 * preemption (adding one to the preempt_count). Since we are
+	 * interested in the preempt_count at the time the tracepoint was
+	 * hit, we need to subtract one to offset the increment.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT))
+		fbuffer->pc--;
 	fbuffer->ftrace_file = ftrace_file;
 	fbuffer->event =
-- 
2.8.1
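
As context for why the recorded count is off by one: the sketch below is
a minimal userspace simulation, not kernel code. The preempt_count
variable, the config_preempt flag, and the record_event() /
trace_some_event() helpers are stand-ins invented for illustration; only
the names preempt_disable_notrace() and rcu_read_lock_sched_notrace()
mirror real kernel APIs, and the mapping of the latter to a
preempt-count increment is what holds on PREEMPT kernels.

/* Compile with: cc -o sketch sketch.c && ./sketch */
#include <stdio.h>
#include <stdbool.h>

static int preempt_count;                 /* stand-in for the per-CPU preempt counter */
static const bool config_preempt = true;  /* stand-in for IS_ENABLED(CONFIG_PREEMPT) */

/* On PREEMPT kernels, rcu_read_lock_sched_notrace() boils down to
 * preempt_disable_notrace(), which bumps the preempt counter. */
static void preempt_disable_notrace(void) { preempt_count++; }
static void preempt_enable_notrace(void)  { preempt_count--; }

/* What the probe records, with the patch's adjustment applied. */
static void record_event(void)
{
	int pc = preempt_count;

	if (config_preempt)
		pc--;	/* offset the increment added by the tracepoint wrapper */
	printf("recorded pc = %d (raw preempt_count = %d)\n", pc, preempt_count);
}

/* Simplified tracepoint wrapper: the probe always runs inside the
 * RCU-sched section, so it sees the elevated counter. */
static void trace_some_event(void)
{
	preempt_disable_notrace();	/* rcu_read_lock_sched_notrace() */
	record_event();
	preempt_enable_notrace();	/* rcu_read_unlock_sched_notrace() */
}

int main(void)
{
	trace_some_event();	/* prints: recorded pc = 0 (raw preempt_count = 1) */
	return 0;
}

Without the subtraction, record_event() would report a count of one even
though preemption was enabled when the tracepoint fired, which is the
raw_syscalls symptom described above.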