When running KVM's fast path it is possible to get into a situation
where the PMU event filter is dereferenced without grabbing KVM's SRCU
read lock. The following callstack demonstrates how that is possible.

Call Trace:
 dump_stack+0x85/0xdf
 lockdep_rcu_suspicious+0x109/0x120
 pmc_event_is_allowed+0x165/0x170
 kvm_pmu_trigger_event+0xa5/0x190
 handle_fastpath_set_msr_irqoff+0xca/0x1e0
 svm_vcpu_run+0x5c3/0x7b0 [kvm_amd]
 vcpu_enter_guest+0x2108/0x2580

Fix that by explicitly grabbing the read lock before dereferencing the
PMU event filter.

Fixes: dfdeda67ea2d ("KVM: x86/pmu: Prevent the PMU from counting disallowed events")
Signed-off-by: Aaron Lewis <aaronlewis@xxxxxxxxxx>
---
 arch/x86/kvm/pmu.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index bf653df86112..2b2247f74ab7 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -381,18 +381,29 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 {
         struct kvm_x86_pmu_event_filter *filter;
         struct kvm *kvm = pmc->vcpu->kvm;
+        bool allowed;
+        int idx;
 
         if (!static_call(kvm_x86_pmu_hw_event_available)(pmc))
                 return false;
 
+        idx = srcu_read_lock(&kvm->srcu);
+
         filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
-        if (!filter)
-                return true;
+        if (!filter) {
+                allowed = true;
+                goto out;
+        }
 
         if (pmc_is_gp(pmc))
-                return is_gp_event_allowed(filter, pmc->eventsel);
+                allowed = is_gp_event_allowed(filter, pmc->eventsel);
+        else
+                allowed = is_fixed_event_allowed(filter, pmc->idx);
+
+out:
+        srcu_read_unlock(&kvm->srcu, idx);
 
-        return is_fixed_event_allowed(filter, pmc->idx);
+        return allowed;
 }
 
 static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
-- 
2.41.0.178.g377b9f9a00-goog
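
P.S. For readers less familiar with the locking discipline the patch adopts,
here is a minimal standalone userspace sketch of the same shape: a pthread
rwlock stands in for KVM's SRCU and a trivial struct stands in for the PMU
event filter. Every name in it (event_is_allowed, allow_all, filter_lock) is
made up for illustration and is not kernel code; the point is only to show
taking the read lock before the dereference and routing every exit path
through a single unlock, as the diff above does with srcu_read_lock(),
srcu_dereference(), and srcu_read_unlock(). Build with: gcc -pthread sketch.c

/*
 * Userspace analogue of the fix: take the read lock before touching the
 * filter pointer, compute the result, and unlock once at a single exit.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct event_filter {
        bool allow_all;                 /* hypothetical policy field */
};

static pthread_rwlock_t filter_lock = PTHREAD_RWLOCK_INITIALIZER;
static struct event_filter *filter;     /* may be swapped by a writer */

static bool event_is_allowed(void)
{
        bool allowed;

        pthread_rwlock_rdlock(&filter_lock);   /* ~ srcu_read_lock() */

        if (!filter) {
                /* No filter installed: everything is allowed. */
                allowed = true;
                goto out;
        }

        allowed = filter->allow_all;

out:
        pthread_rwlock_unlock(&filter_lock);   /* ~ srcu_read_unlock() */
        return allowed;
}

int main(void)
{
        printf("allowed: %d\n", event_is_allowed());
        return 0;
}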