The mem_access_loop() asm block currently features ISB barriers only.
However, the loop counts the number of memory accesses, and an ISB does
not guarantee that the PE cannot reorder memory accesses around it.
Let's add a DSB ISH before the write to PMCR_EL0 that enables the PMU,
to make sure any previous memory accesses aren't counted in the loop;
another one after the PMU gets enabled, to make sure the loop's memory
accesses cannot be reordered before the PMU gets enabled; and a last
one after the last iteration, before disabling the PMU.

Signed-off-by: Eric Auger <eric.auger@xxxxxxxxxx>
Suggested-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>

---

v1 -> v2:
- added yet another DSB after the PMU is enabled, as suggested by
  Alexandru

This was discussed in
https://lore.kernel.org/all/YzxmHpV2rpfaUdWi@monolith.localdoman/
---
 arm/pmu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arm/pmu.c b/arm/pmu.c
index 51c0fe80..74dd4c10 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -301,13 +301,16 @@ static void mem_access_loop(void *addr, long loop, uint32_t pmcr)
 {
 	uint64_t pmcr64 = pmcr;
 	asm volatile(
+	"       dsb     ish\n"
 	"       msr     pmcr_el0, %[pmcr]\n"
 	"       isb\n"
+	"       dsb     ish\n"
 	"       mov     x10, %[loop]\n"
 	"1:     sub     x10, x10, #1\n"
 	"       ldr     x9, [%[addr]]\n"
 	"       cmp     x10, #0x0\n"
 	"       b.gt    1b\n"
+	"       dsb     ish\n"
 	"       msr     pmcr_el0, xzr\n"
 	"       isb\n"
 	:
-- 
2.38.1
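
For reference, a sketch of what mem_access_loop() should look like once
the patch is applied. The hunk above is truncated before the operand and
clobber lists, so the ones shown below are assumptions reconstructed for
illustration only, not part of the patch:

static void mem_access_loop(void *addr, long loop, uint32_t pmcr)
{
	uint64_t pmcr64 = pmcr;
	asm volatile(
	/* complete all earlier memory accesses before the PMU is enabled */
	"       dsb     ish\n"
	"       msr     pmcr_el0, %[pmcr]\n"
	"       isb\n"
	/* keep the loop accesses from being reordered before the enable */
	"       dsb     ish\n"
	"       mov     x10, %[loop]\n"
	"1:     sub     x10, x10, #1\n"
	"       ldr     x9, [%[addr]]\n"
	"       cmp     x10, #0x0\n"
	"       b.gt    1b\n"
	/* complete the loop accesses before the PMU is disabled */
	"       dsb     ish\n"
	"       msr     pmcr_el0, xzr\n"
	"       isb\n"
	:
	: [addr] "r" (addr), [pmcr] "r" (pmcr64)  /* assumed operands */
	: "x9", "x10", "cc");                     /* assumed clobbers */
}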