On 3/7/2024 7:01 AM, Sean Christopherson wrote:
> When using adaptive PEBS with LBR entries, verify that the LBR entries
> are all '0'.  If KVM fails to context switch LBRs, e.g. when the guest
> isn't using LBRs, as is the case in the pmu_pebs test, then adaptive
> PEBS can be used to read the *host* LBRs as the CPU doesn't enforce the
> VMX MSR bitmaps when generating PEBS records, i.e. ignores KVM's
> interception of reads to LBR MSRs.
> 
> This testcase is best exercised by simultaneously utilizing LBRs in the
> host, e.g. by running "perf record -b -e instructions", so that there
> is non-zero data in the LBR MSRs.
> 
> Cc: Like Xu <like.xu.linux@xxxxxxxxx>
> Cc: Mingwei Zhang <mizhang@xxxxxxxxxx>
> Cc: Zhenyu Wang <zhenyuw@xxxxxxxxxxxxxxx>
> Cc: Zhang Xiong <xiong.y.zhang@xxxxxxxxx>
> Cc: Lv Zhiyuan <zhiyuan.lv@xxxxxxxxx>
> Cc: Dapeng Mi <dapeng1.mi@xxxxxxxxx>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
>  x86/pmu_pebs.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c
> index 0e8d60c3..df8e736f 100644
> --- a/x86/pmu_pebs.c
> +++ b/x86/pmu_pebs.c
> @@ -299,6 +299,19 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg, bool use_adaptive
>  		expected = pebs_idx_match && pebs_size_match && data_cfg_match;
>  		report(expected, "PEBS record (written seq %d) is verified (including size, counters and cfg).", count);
> +		if (use_adaptive && (pebs_data_cfg & PEBS_DATACFG_LBRS)) {
> +			unsigned int lbrs_offset = get_pebs_record_size(pebs_data_cfg & ~PEBS_DATACFG_LBRS, true);
> +			struct lbr_entry *pebs_lbrs = cur_record + lbrs_offset;
> +			int i;
> +
> +			for (i = 0; i < MAX_NUM_LBR_ENTRY; i++) {
> +				if (!pebs_lbrs[i].from && !pebs_lbrs[i].to)
> +					continue;
> +
> +				report_fail("PEBS LBR record %u isn't empty, got from = '%lx', to = '%lx', info = '%lx'",
> +					    i, pebs_lbrs[i].from, pebs_lbrs[i].to, pebs_lbrs[i].info);
> +			}
> +		}
>  		cur_record = cur_record + pebs_record_size;
>  		count++;
>  	} while (expected && (void *)cur_record < (void *)ds->pebs_index);
Reviewed-by: Dapeng Mi <dapeng1.mi@xxxxxxxxxxxxxxx>