From: Like Xu <likexu@xxxxxxxxxxx>

The pmu test check_counter_overflow() always fails with 32-bit binaries.
The cnt.count obtained from the latter run of measure() (based on fixed
counter 0) is not equal to the expected value (based on gp counter 0),
and is off by a positive error of 2.

The two extra instructions come from the inline wrmsr() and rdmsr()
inside the global_disable() binary code block.  Specifically, for each
MSR access, the i386 code needs two assembly mov instructions before
rdmsr/wrmsr (to load the 64-bit value, which for fixed counter 0 has
bit 32 set), whereas only one mov is needed on x86_64, and for gp
counter 0 on i386.  The sequence of instructions used to count events
with the gp and fixed counters is therefore different.

Thus the fix is quite high level: use the same counter (with the same
instruction sequence) to set the initial value for that same counter.
Fix the expected initial cnt.count for fixed counter 0 overflow based
on fixed counter 0 itself, instead of always using gp counter 0.

The difference of 1 in this count enables the interrupt to be generated
immediately after the selected event count has been reached, instead of
waiting for the overflow to be propagated through the counter.

Add a helper to measure/compute the overflow preset value.  It provides
a convenient location to document the weird behavior that's necessary
to ensure immediate event delivery.

Signed-off-by: Like Xu <likexu@xxxxxxxxxxx>
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
 x86/pmu.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index 0546eb13..ddbc0cf9 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -288,17 +288,30 @@ static void check_counters_many(void)
 	report(i == n, "all counters");
 }
 
+static uint64_t measure_for_overflow(pmu_counter_t *cnt)
+{
+	__measure(cnt, 0);
+	/*
+	 * To generate overflow, i.e. roll over to '0', the initial count just
+	 * needs to be preset to the negative expected count.  However, as per
+	 * Intel's SDM, the preset count needs to be incremented by 1 to ensure
+	 * the overflow interrupt is generated immediately instead of possibly
+	 * waiting for the overflow to propagate through the counter.
+	 */
+	assert(cnt->count > 1);
+	return 1 - cnt->count;
+}
+
 static void check_counter_overflow(void)
 {
 	int nr_gp_counters = pmu_nr_gp_counters();
-	uint64_t count;
+	uint64_t overflow_preset;
 	int i;
 	pmu_counter_t cnt = {
 		.ctr = gp_counter_base,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
 	};
-	__measure(&cnt, 0);
-	count = cnt.count;
+	overflow_preset = measure_for_overflow(&cnt);
 
 	/* clear status before test */
 	wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL, rdmsr(MSR_CORE_PERF_GLOBAL_STATUS));
@@ -309,12 +322,13 @@ static void check_counter_overflow(void)
 		uint64_t status;
 		int idx;
 
-		cnt.count = 1 - count;
+		cnt.count = overflow_preset;
 		if (gp_counter_base == MSR_IA32_PMC0)
 			cnt.count &= (1ull << pmu_gp_counter_width()) - 1;
 
 		if (i == nr_gp_counters) {
 			cnt.ctr = fixed_events[0].unit_sel;
+			cnt.count = measure_for_overflow(&cnt);
 			cnt.count &= (1ull << pmu_fixed_counter_width()) - 1;
 		}
 
-- 
2.38.1.431.g37b22c650d-goog
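
For reference, the extra instructions described in the changelog stem
from how a 64-bit MSR value must be split across the EDX:EAX register
pair.  A minimal sketch of a typical wrmsr() wrapper follows; this is
the common shape of such helpers, not necessarily the exact code in
kvm-unit-tests:

static inline void wrmsr(uint32_t index, uint64_t val)
{
	uint32_t a = val, d = val >> 32;

	/*
	 * WRMSR takes the MSR index in ECX and the value in EDX:EAX.  On
	 * i386, a uint64_t occupies two 32-bit registers, so a value with
	 * bit 32 set (e.g. the fixed counter 0 preset) needs a mov for
	 * each half.  The exact codegen is compiler dependent, but the
	 * instruction count differs from the x86_64 and 32-bit-value
	 * cases, and those extra movs are themselves counted by the
	 * "instructions" event.
	 */
	asm volatile ("wrmsr" : : "a"(a), "d"(d), "c"(index) : "memory");
}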
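
The '1 - count' preset used by measure_for_overflow() can also be
sanity-checked in isolation.  A self-contained sketch of the modular
arithmetic; the count and width values below are made up for
illustration and do not come from the patch:

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t count = 100;		/* hypothetical measured count */
	unsigned int width = 48;	/* hypothetical counter width */
	uint64_t mask = (1ull << width) - 1;
	uint64_t preset = (1 - count) & mask;	/* two's-complement negative */

	/* One event before overflow, the counter holds all ones... */
	assert(((preset + count - 2) & mask) == mask);
	/*
	 * ...and it rolls over to zero on event count - 1, one event
	 * earlier than a plain '-count' preset would, which is what lets
	 * the overflow interrupt fire immediately per the SDM note quoted
	 * in the patch.
	 */
	assert(((preset + count - 1) & mask) == 0);

	printf("preset = 0x%016" PRIx64 "\n", preset);
	return 0;
}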