[PATCH v2 2/5] MIPS/Perf-events: Remove erroneous check on active_events

Port the following ARM patch by Mark Rutland:

- 57ce9bb39b476accf8fba6e16aea67ed76ea523d
    ARM: 6902/1: perf: Remove erroneous check on active_events

    When initialising a PMU, there is a check to protect against races with
    other CPUs filling all of the available event slots. Since armpmu_add
    checks that an event can be scheduled, we do not need to do this at
    initialisation time. Furthermore, the current code is broken because it
    assumes that atomic_inc_not_zero will unconditionally increment
    active_events and then tries to decrement it again on failure.

    This patch removes the broken, redundant code.
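
Not part of the quoted commit message, but for reviewers who want to see the
failure concretely: atomic_inc_not_zero() increments the counter only when it
is already non-zero and returns 0 without touching it otherwise, so the removed
branch could only run when active_events had *not* been incremented, and if its
atomic_dec() ever fired it would push the counter below its real value. A
minimal userspace sketch of that semantic, using C11 atomics as a stand-in for
the kernel's atomic_t (the helper name and setup are illustrative only):

    #include <stdatomic.h>
    #include <stdio.h>

    /* Stand-in for the kernel's atomic_inc_not_zero(): increment *v only
     * if it is currently non-zero; return 1 if it did, 0 otherwise. */
    static int inc_not_zero(atomic_int *v)
    {
        int old = atomic_load(v);

        while (old != 0)
            if (atomic_compare_exchange_weak(v, &old, old + 1))
                return 1;
        return 0;   /* counter was zero: it was NOT incremented */
    }

    int main(void)
    {
        atomic_int active_events = 0;   /* no events initialised yet */

        if (!inc_not_zero(&active_events)) {
            /* Simulate the removed branch firing: it assumed the
             * increment above had succeeded and tried to undo it. */
            atomic_fetch_sub(&active_events, 1);
        }

        printf("active_events = %d\n", atomic_load(&active_events)); /* -1 */
        return 0;
    }

The MIPS_MAX_HWEVENTS comparison is also redundant for the reason the quoted
message gives: whether an event actually fits on the PMU is decided when the
event is scheduled, not at initialisation time.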

Signed-off-by: Deng-Cheng Zhu <dczhu@xxxxxxxx>
Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
Cc: Paul Mackerras <paulus@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: Arnaldo Carvalho de Melo <acme@xxxxxxxxxxxxxxxxxx>
Cc: David Daney <david.daney@xxxxxxxxxx>
---
 arch/mips/kernel/perf_event_mipsxx.c |    5 -----
 1 files changed, 0 insertions(+), 5 deletions(-)
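
Not part of the patch itself: with the check gone, the only slow path left in
mipspmu_event_init() is the mutex-protected one visible in the hunk below
(take pmu_reserve_mutex, request the PMU interrupt for the first event, then
take a reference). A self-contained userspace model of that pattern, with
pthreads and C11 atomics standing in for the kernel primitives; the names
mirror the kernel code, but the error handling after mipspmu_get_irq() is
reconstructed from context and is not shown in this diff:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int active_events;
    static pthread_mutex_t pmu_reserve_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Increment *v only if it is non-zero; mimics atomic_inc_not_zero(). */
    static int inc_not_zero(atomic_int *v)
    {
        int old = atomic_load(v);

        while (old != 0)
            if (atomic_compare_exchange_weak(v, &old, old + 1))
                return 1;
        return 0;
    }

    /* Stand-in for mipspmu_get_irq(): only the first event reserves it. */
    static int get_irq_stub(void)
    {
        puts("first event: reserving the PMU interrupt");
        return 0;
    }

    static int event_init_model(void)
    {
        int err = 0;

        if (!inc_not_zero(&active_events)) {
            /* Possibly the first event: serialise against other
             * initialisers, reserve the IRQ once, then take a reference. */
            pthread_mutex_lock(&pmu_reserve_mutex);
            if (atomic_load(&active_events) == 0)
                err = get_irq_stub();
            if (!err)
                atomic_fetch_add(&active_events, 1);
            pthread_mutex_unlock(&pmu_reserve_mutex);
        }

        return err;
    }

    int main(void)
    {
        event_init_model();   /* slow path: reserves the IRQ */
        event_init_model();   /* fast path: just bumps the count */
        printf("active_events = %d\n", atomic_load(&active_events));
        return 0;
    }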

diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c
index ab4c761..b5d6b3f 100644
--- a/arch/mips/kernel/perf_event_mipsxx.c
+++ b/arch/mips/kernel/perf_event_mipsxx.c
@@ -621,11 +621,6 @@ static int mipspmu_event_init(struct perf_event *event)
 		return -ENODEV;
 
 	if (!atomic_inc_not_zero(&active_events)) {
-		if (atomic_read(&active_events) > MIPS_MAX_HWEVENTS) {
-			atomic_dec(&active_events);
-			return -ENOSPC;
-		}
-
 		mutex_lock(&pmu_reserve_mutex);
 		if (atomic_read(&active_events) == 0)
 			err = mipspmu_get_irq();
-- 
1.7.1