On Tue, 2014-02-11 at 14:49 -0500, Steven Rostedt wrote:
> commit a4c35ed241129dd142be4cadb1e5a474a56d5464 upstream.
>
> The synchronization needed after ftrace_ops are unregistered must happen
> after the callback is disabled from being called by functions.
>
> The current location happens after the function is being removed from the
> internal lists, but not after the function callbacks were disabled, leaving
> the functions susceptible to being called after their callbacks are freed.
>
> This affects perf and any external users of function tracing (LTTng and
> SystemTap).
>
> Cc: stable@xxxxxxxxxxxxxxx # 3.0+
> Fixes: cdbe61bfe704 "ftrace: Allow dynamically allocated function tracers"
> Signed-off-by: Steven Rostedt <rostedt@xxxxxxxxxxx>
[...]

I've queued this up for 3.2, but I needed to make further changes.  My
version is below.

Ben.

---
From: "Steven Rostedt (Red Hat)" <rostedt@xxxxxxxxxxx>
Date: Mon, 13 Jan 2014 12:56:21 -0500
Subject: ftrace: Fix synchronization location disabling and freeing ftrace_ops

commit a4c35ed241129dd142be4cadb1e5a474a56d5464 upstream.

The synchronization needed after ftrace_ops are unregistered must happen
after the callback is disabled from being called by functions.

The current location happens after the function is being removed from the
internal lists, but not after the function callbacks were disabled, leaving
the functions susceptible to being called after their callbacks are freed.

This affects perf and any external users of function tracing (LTTng and
SystemTap).
Fixes: cdbe61bfe704 "ftrace: Allow dynamically allocated function tracers"
Signed-off-by: Steven Rostedt <rostedt@xxxxxxxxxxx>
[bwh: Backported to 3.2: drop change for control ops]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
 kernel/trace/ftrace.c | 58 ++++++++++++++++++++++++++++++-----------------------
 1 file changed, 32 insertions(+), 26 deletions(-)

--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -319,17 +319,6 @@ static int __unregister_ftrace_function(
 	if (ftrace_enabled)
 		update_ftrace_function();
 
-	/*
-	 * Dynamic ops may be freed, we must make sure that all
-	 * callers are done before leaving this function.
-	 *
-	 * Again, normal synchronize_sched() is not good enough.
-	 * We need to do a hard force of sched synchronization.
-	 */
-	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
-		schedule_on_each_cpu(ftrace_sync);
-
-
 	return 0;
 }
 
@@ -1790,6 +1779,24 @@ static int ftrace_shutdown(struct ftrace
 		return 0;
 
 	ftrace_run_update_code(command);
+
+	/*
+	 * Dynamic ops may be freed, we must make sure that all
+	 * callers are done before leaving this function.
+	 * The same goes for freeing the per_cpu data of the control
+	 * ops.
+	 *
+	 * Again, normal synchronize_sched() is not good enough.
+	 * We need to do a hard force of sched synchronization.
+	 * This is because we use preempt_disable() to do RCU, but
+	 * the function tracers can be called where RCU is not watching
+	 * (like before user_exit()). We can not rely on the RCU
+	 * infrastructure to do the synchronization, thus we must do it
+	 * ourselves.
+	 */
+	if (ops->flags & (FTRACE_OPS_FL_DYNAMIC))
+		schedule_on_each_cpu(ftrace_sync);
+
 	return 0;
 }

-- 
Ben Hutchings
If more than one person is responsible for a bug, no one is at fault.