From: Deng-Cheng Zhu <dengcheng.zhu@xxxxxxxxxx>

According to the Software User's Manual, the event of last-level-cache
read/write misses is mapped to even counters. Odd counters of that event
number count miss cycles.

Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@xxxxxxxxxx>
Signed-off-by: Markos Chandras <markos.chandras@xxxxxxxxxx>
---
This patch is for the upstream-sfr/mips-for-linux-next tree
---
 arch/mips/kernel/perf_event_mipsxx.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c
index 45f1ffc..24cdf64 100644
--- a/arch/mips/kernel/perf_event_mipsxx.c
+++ b/arch/mips/kernel/perf_event_mipsxx.c
@@ -971,11 +971,11 @@ static const struct mips_perf_event mipsxx74Kcore_cache_map
 [C(LL)] = {
 	[C(OP_READ)] = {
 		[C(RESULT_ACCESS)]	= { 0x1c, CNTR_ODD, P },
-		[C(RESULT_MISS)]	= { 0x1d, CNTR_EVEN | CNTR_ODD, P },
+		[C(RESULT_MISS)]	= { 0x1d, CNTR_EVEN, P },
 	},
 	[C(OP_WRITE)] = {
 		[C(RESULT_ACCESS)]	= { 0x1c, CNTR_ODD, P },
-		[C(RESULT_MISS)]	= { 0x1d, CNTR_EVEN | CNTR_ODD, P },
+		[C(RESULT_MISS)]	= { 0x1d, CNTR_EVEN, P },
 	},
 },
 [C(ITLB)] = {
--
1.8.3.2
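
Note (illustrative, not part of the patch): the CNTR_EVEN/CNTR_ODD fields
constrain which hardware counter an event may be scheduled on. The sketch
below is a minimal stand-alone C model of that kind of mask-based counter
pick; the mask values, NUM_COUNTERS and the alloc_counter() helper are
assumptions for illustration only and are not the kernel's actual
allocator.

#include <stdio.h>

#define NUM_COUNTERS	4
#define CNTR_EVEN	0x5	/* counters 0 and 2 (simplified mask) */
#define CNTR_ODD	0xa	/* counters 1 and 3 (simplified mask) */

/* Pick the first free counter allowed by cntr_mask, or -1 if none fits. */
static int alloc_counter(unsigned int cntr_mask, unsigned int *used_mask)
{
	int i;

	for (i = 0; i < NUM_COUNTERS; i++) {
		if ((cntr_mask & (1u << i)) && !(*used_mask & (1u << i))) {
			*used_mask |= 1u << i;
			return i;
		}
	}
	return -1;
}

int main(void)
{
	unsigned int used = 0;

	/* Event 0x1d (LL read/write misses) is now limited to even counters. */
	printf("LL miss        -> counter %d\n", alloc_counter(CNTR_EVEN, &used));
	/* Odd counters stay available for the miss-cycles variant of 0x1d. */
	printf("LL miss cycles -> counter %d\n", alloc_counter(CNTR_ODD, &used));
	return 0;
}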