On Thu, Oct 14, 2021 at 09:58:38PM -0500, madvenka@xxxxxxxxxxxxxxxxxxx wrote:
> From: "Madhavan T. Venkataraman" <madvenka@xxxxxxxxxxxxxxxxxxx>
>
> Currently, perf_callchain_kernel() in ARM64 code walks the stack using
> start_backtrace() and walk_stackframe(). Make it use arch_stack_walk()
> instead. This makes maintenance easier.
>
> Signed-off-by: Madhavan T. Venkataraman <madvenka@xxxxxxxxxxxxxxxxxxx>

This looks good to me; bailing out when perf_callchain_store() can't
accept any more entries absolutely makes sense.

I gave this a spin with:

| # perf record -g -c1 ls
| # perf report

... and the recorded callchains look sane.

Reviewed-by: Mark Rutland <mark.rutland@xxxxxxx>
Tested-by: Mark Rutland <mark.rutland@xxxxxxx>

As mentioned on patch 1, I'd like to get this rebased atop Peter's
untangling of ARCH_STACKWALK from STACKTRACE.

Thanks,
Mark.

> ---
>  arch/arm64/kernel/perf_callchain.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
> index 4a72c2727309..f173c448e852 100644
> --- a/arch/arm64/kernel/perf_callchain.c
> +++ b/arch/arm64/kernel/perf_callchain.c
> @@ -140,22 +140,18 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
>  static bool callchain_trace(void *data, unsigned long pc)
>  {
>  	struct perf_callchain_entry_ctx *entry = data;
> -	perf_callchain_store(entry, pc);
> -	return true;
> +	return perf_callchain_store(entry, pc) == 0;
>  }
>
>  void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
>  			   struct pt_regs *regs)
>  {
> -	struct stackframe frame;
> -
>  	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
>  		/* We don't support guest os callchain now */
>  		return;
>  	}
>
> -	start_backtrace(&frame, regs->regs[29], regs->pc);
> -	walk_stackframe(current, &frame, callchain_trace, entry);
> +	arch_stack_walk(callchain_trace, entry, current, regs);
>  }
>
>  unsigned long perf_instruction_pointer(struct pt_regs *regs)
> --
> 2.25.1