[PATCH V9 1/3] perf/x86/intel: Avoid pmu_disable/enable if !cpuc->enabled in sample read


 



From: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>

The WARN_ON(this_cpu_read(cpu_hw_events.enabled)) in
intel_pmu_save_and_restart_reload() is triggered when doing a sampling
read of topdown events.

In an NMI handler, cpu_hw_events.enabled is updated to reflect the
status of the core PMU, while the generic pmu->pmu_disable_count,
which is maintained by the perf_pmu_disable/enable pair, is not
touched. However, the perf_pmu_disable/enable pair is still invoked
when a sampling read occurs in the NMI handler, so cpuc->enabled is
mistakenly set by perf_pmu_enable().

Avoid perf_pmu_disable/enable() if the core PMU is already disabled.

Fixes: 7b2c05a15d29 ("perf/x86/intel: Generic support for hardware TopDown metrics")
Signed-off-by: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
---

A new patch to fix an issue found on a legacy platform.
(Not related to the counters snapshotting feature.)

Since it also touches the sampling-read code, the patches that enable
the counters snapshotting feature must be applied on top of this patch.
This patch itself can be applied separately.


 arch/x86/events/intel/core.c | 7 +++++--
 arch/x86/events/intel/ds.c   | 9 ++++++---
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 2a2824e9c50d..bce423ad3fad 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2778,15 +2778,18 @@ DEFINE_STATIC_CALL(intel_pmu_update_topdown_event, x86_perf_event_update);
 static void intel_pmu_read_topdown_event(struct perf_event *event)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	int pmu_enabled = cpuc->enabled;
 
 	/* Only need to call update_topdown_event() once for group read. */
 	if ((cpuc->txn_flags & PERF_PMU_TXN_READ) &&
 	    !is_slots_event(event))
 		return;
 
-	perf_pmu_disable(event->pmu);
+	if (pmu_enabled)
+		perf_pmu_disable(event->pmu);
 	static_call(intel_pmu_update_topdown_event)(event);
-	perf_pmu_enable(event->pmu);
+	if (pmu_enabled)
+		perf_pmu_enable(event->pmu);
 }
 
 static void intel_pmu_read_event(struct perf_event *event)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index ba74e1198328..81b6ec8e824e 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2096,11 +2096,14 @@ get_next_pebs_record_by_bit(void *base, void *top, int bit)
 
 void intel_pmu_auto_reload_read(struct perf_event *event)
 {
-	WARN_ON(!(event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD));
+	int pmu_enabled = this_cpu_read(cpu_hw_events.enabled);
 
-	perf_pmu_disable(event->pmu);
+	WARN_ON(!(event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD));
+	if (pmu_enabled)
+		perf_pmu_disable(event->pmu);
 	intel_pmu_drain_pebs_buffer();
-	perf_pmu_enable(event->pmu);
+	if (pmu_enabled)
+		perf_pmu_enable(event->pmu);
 }
 
 /*
-- 
2.38.1




