On Mon, Aug 30, 2021 at 2:42 PM Song Liu <songliubraving@xxxxxx> wrote:
>
> The typical way to access branch record (e.g. Intel LBR) is via hardware
> perf_event. For CPUs with FREEZE_LBRS_ON_PMI support, PMI could capture
> reliable LBR. On the other hand, LBR could also be useful in non-PMI
> scenario. For example, in kretprobe or bpf fexit program, LBR could
> provide a lot of information on what happened with the function. Add API
> to use branch record for software use.
>
> Note that, when the software event triggers, it is necessary to stop the
> branch record hardware asap. Therefore, static_call is used to remove some
> branch instructions in this process.
>
> Signed-off-by: Song Liu <songliubraving@xxxxxx>
> ---
>  arch/x86/events/intel/core.c | 24 ++++++++++++++++++++++--
>  include/linux/perf_event.h   | 24 ++++++++++++++++++++++++
>  kernel/events/core.c         |  3 +++
>  3 files changed, 49 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index ac6fd2dabf6a2..d28d0e12c112c 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -2155,9 +2155,9 @@ static void __intel_pmu_disable_all(void)
>
>  static void intel_pmu_disable_all(void)
>  {
> +	intel_pmu_lbr_disable_all();
>  	__intel_pmu_disable_all();
>  	intel_pmu_pebs_disable_all();
> -	intel_pmu_lbr_disable_all();
>  }
>
>  static void __intel_pmu_enable_all(int added, bool pmi)
> @@ -2186,6 +2186,20 @@ static void intel_pmu_enable_all(int added)
>  	__intel_pmu_enable_all(added, false);
>  }
>
> +static int
> +intel_pmu_snapshot_branch_stack(struct perf_branch_snapshot *br_snapshot)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +
> +	intel_pmu_disable_all();
> +	intel_pmu_lbr_read();
> +	memcpy(br_snapshot->entries, cpuc->lbr_entries,
> +	       sizeof(struct perf_branch_entry) * x86_pmu.lbr_nr);
> +	br_snapshot->nr = x86_pmu.lbr_nr;
> +	intel_pmu_enable_all(0);
> +	return 0;
> +}
> +
>  /*
>   * Workaround for:
>   *   Intel Errata AAK100 (model 26)
> @@ -6283,9 +6297,15 @@ __init int intel_pmu_init(void)
>  		x86_pmu.lbr_nr = 0;
>  	}
>
> -	if (x86_pmu.lbr_nr)
> +	if (x86_pmu.lbr_nr) {
>  		pr_cont("%d-deep LBR, ", x86_pmu.lbr_nr);
>
> +		/* only support branch_stack snapshot for perfmon >= v2 */
> +		if (x86_pmu.disable_all == intel_pmu_disable_all)
> +			static_call_update(perf_snapshot_branch_stack,
> +					   intel_pmu_snapshot_branch_stack);
> +	}
> +
>  	intel_pmu_check_extra_regs(x86_pmu.extra_regs);
>
>  	/* Support full width counters using alternative MSR range */
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index fe156a8170aa3..1f42e91668024 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -57,6 +57,7 @@ struct perf_guest_info_callbacks {
>  #include <linux/cgroup.h>
>  #include <linux/refcount.h>
>  #include <linux/security.h>
> +#include <linux/static_call.h>
>  #include <asm/local.h>
>
>  struct perf_callchain_entry {
> @@ -1612,4 +1613,27 @@ extern void __weak arch_perf_update_userpage(struct perf_event *event,
>  extern __weak u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr);
>  #endif
>
> +/*
> + * Snapshot branch stack on software events.
> + *
> + * Branch stack can be very useful in understanding software events. For
> + * example, when a long function, e.g. sys_perf_event_open, returns an
> + * errno, it is not obvious why the function failed. Branch stack could
> + * provide very helpful information in this type of scenarios.
> + *
> + * On software event, it is necessary to stop the hardware branch recorder
> + * fast. Otherwise, the hardware register/buffer will be flushed with
> + * entries af the triggering event. Therefore, static call is used to
> + * stop the hardware recorder.
> + */
> +#define MAX_BRANCH_SNAPSHOT 32

Can you please make it an enum instead? It will make this available as a
constant in vmlinux.h nicely, without users having to #define it every
time.

> +
> +struct perf_branch_snapshot {
> +	unsigned int nr;
> +	struct perf_branch_entry entries[MAX_BRANCH_SNAPSHOT];
> +};
> +
> +typedef int (perf_snapshot_branch_stack_t)(struct perf_branch_snapshot *);
> +DECLARE_STATIC_CALL(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t);
> +
>  #endif /* _LINUX_PERF_EVENT_H */
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 011cc5069b7ba..22807864e913b 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -13437,3 +13437,6 @@ struct cgroup_subsys perf_event_cgrp_subsys = {
>  	.threaded	= true,
>  };
>  #endif /* CONFIG_CGROUP_PERF */
> +
> +DEFINE_STATIC_CALL_RET0(perf_snapshot_branch_stack,
> +			perf_snapshot_branch_stack_t);
> --
> 2.30.2
>
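
To make the enum suggestion above concrete, roughly something like this
(a sketch of the shape only, not a tested change; the struct is unchanged
from your patch):

/* anonymous enum so the constant lands in BTF / vmlinux.h */
enum {
	MAX_BRANCH_SNAPSHOT = 32,
};

struct perf_branch_snapshot {
	unsigned int nr;
	struct perf_branch_entry entries[MAX_BRANCH_SNAPSHOT];
};

With the enum, the constant is emitted into BTF and therefore into the
bpftool-generated vmlinux.h, so a BPF program that includes vmlinux.h can
size its buffer as

	struct perf_branch_entry entries[MAX_BRANCH_SNAPSHOT];

without re-defining the value locally.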