From: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>

commit ef493f4b122d6b14a6de111d1acac1eab1d673b0 upstream.

The BPF subsystem may capture LBR data on a counting event. However, the
current implementation assumes that LBR can/should only be used with
sampling events.

For instance, the retsnoop tool ([0]) makes extensive use of this
functionality and sets up the perf event as follows:

        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
        attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;

Limiting LBR to sampling events avoids unnecessary branch stack setup
for a counting event in the sample read, because LBR is only read on a
sampling event's overflow.

Although LBR is used with sampling in most cases, there is no hardware
limit that binds LBR to the sampling mode. Allow an LBR setup for a
counting event unless it is in the sample read mode.

Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
Closes: https://lore.kernel.org/lkml/20240905180055.1221620-1-andrii@xxxxxxxxxx/
Reported-by: Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx>
Signed-off-by: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
Tested-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Link: https://lore.kernel.org/r/20240909155848.326640-1-kan.liang@xxxxxxxxxxxxxxx
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/events/intel/core.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3912,8 +3912,12 @@ static int intel_pmu_hw_config(struct pe
 		x86_pmu.pebs_aliases(event);
 	}
 
-	if (needs_branch_stack(event) && is_sampling_event(event))
-		event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
+	if (needs_branch_stack(event)) {
+		/* Avoid branch stack setup for counting events in SAMPLE READ */
+		if (is_sampling_event(event) ||
+		    !(event->attr.sample_type & PERF_SAMPLE_READ))
+			event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
+	}
 
 	if (branch_sample_counters(event)) {
 		struct perf_event *leader, *sibling;

Patches currently in stable-queue which might be from kan.liang@xxxxxxxxxxxxxxx are

queue-6.10/perf-sched-timehist-fix-missing-free-of-session-in-p.patch
queue-6.10/perf-time-utils-fix-32-bit-nsec-parsing.patch
queue-6.10/perf-build-fix-up-broken-capstone-feature-detection-.patch
queue-6.10/perf-sched-timehist-fixed-timestamp-error-when-unabl.patch
queue-6.10/perf-dwarf-aux-check-allowed-location-expressions-wh.patch
queue-6.10/perf-x86-intel-allow-to-setup-lbr-for-counting-event-for-bpf.patch
queue-6.10/perf-mem-free-the-allocated-sort-string-fixing-a-lea.patch
queue-6.10/perf-callchain-fix-stitch-lbr-memory-leaks.patch
queue-6.10/perf-inject-fix-leader-sampling-inserting-additional.patch
queue-6.10/perf-annotate-data-fix-off-by-one-in-location-range-.patch
queue-6.10/perf-report-fix-total-cycles-stdio-output-error.patch
queue-6.10/perf-mem-check-mem_events-for-all-eligible-pmus.patch
queue-6.10/perf-mem-fix-missed-p-core-mem-events-on-adl-and-rpl.patch
queue-6.10/perf-lock-contention-change-stack_id-type-to-s32.patch
queue-6.10/perf-dwarf-aux-handle-bitfield-members-from-pointer-.patch