From: Uros Bizjak <ubizjak@xxxxxxxxx>

[ Upstream commit eb887c4567d1b0e7684c026fe7df44afa96589e6 ]

Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
to use the optimized implementation and ease register pressure around
the primitive for targets that implement an optimized variant.

Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
Cc: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
Link: https://lore.kernel.org/20241007085651.48544-1-ubizjak@xxxxxxxxx
Signed-off-by: Uros Bizjak <ubizjak@xxxxxxxxx>
Signed-off-by: Steven Rostedt (Google) <rostedt@xxxxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
 kernel/trace/trace_clock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c
index 4702efb00ff21..4cb2ebc439be6 100644
--- a/kernel/trace/trace_clock.c
+++ b/kernel/trace/trace_clock.c
@@ -154,5 +154,5 @@ static atomic64_t trace_counter;
  */
 u64 notrace trace_clock_counter(void)
 {
-	return atomic64_add_return(1, &trace_counter);
+	return atomic64_inc_return(&trace_counter);
 }
--
2.43.0
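
For reference, a minimal userspace sketch (using C11 atomics rather than the kernel's atomic64 API; the helper names below are hypothetical analogues, not kernel functions) illustrating that both forms return the post-increment value, which is why the substitution in trace_clock_counter() is behavior-preserving:

	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	static _Atomic int64_t counter;

	/* Analogue of atomic64_add_return(1, &counter): add an explicit
	 * operand and return the new value. */
	static int64_t add_return_one(void)
	{
		return atomic_fetch_add(&counter, 1) + 1;
	}

	/* Analogue of atomic64_inc_return(&counter): same post-increment
	 * result; in the kernel this may map to a dedicated, better
	 * optimized sequence on some architectures instead of forcing
	 * the constant 1 into a register. */
	static int64_t inc_return(void)
	{
		return atomic_fetch_add(&counter, 1) + 1;
	}

	int main(void)
	{
		printf("%lld\n", (long long)add_return_one()); /* prints 1 */
		printf("%lld\n", (long long)inc_return());     /* prints 2 */
		return 0;
	}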