Re: [PATCH] KVM: x86/pmu: SRCU protect the PMU event filter in the fast path

On Fri, Jun 23, 2023, Aaron Lewis wrote:
> When running KVM's fast path it is possible to get into a situation
> where the PMU event filter is dereferenced without grabbing KVM's SRCU
> read lock.
> 
> The following callstack demonstrates how that is possible.
> 
> Call Trace:
>   dump_stack+0x85/0xdf
>   lockdep_rcu_suspicious+0x109/0x120
>   pmc_event_is_allowed+0x165/0x170
>   kvm_pmu_trigger_event+0xa5/0x190
>   handle_fastpath_set_msr_irqoff+0xca/0x1e0
>   svm_vcpu_run+0x5c3/0x7b0 [kvm_amd]
>   vcpu_enter_guest+0x2108/0x2580
> 
> Fix that by explicitly grabbing the read lock before dereferencing the
> PMU event filter.

Actually, on second thought, I think it would be better to acquire kvm->srcu in
handle_fastpath_set_msr_irqoff().  This is the second time that invoking
kvm_skip_emulated_instruction() resulted in an SRCU violation, and it probably
won't be the last since one of the benefits of using SRCU instead of per-asset
locks to protect things like memslots and filters is that low(ish)-level helpers
don't need to worry about acquiring locks.

The 2x LOCK ADD from smp_mb() is unfortunate, but IMO it's worth eating that cost
to avoid having to play whack-a-mole in the future.  And as a (very small) bonus,
commit 5c30e8101e8d can be reverted.

--
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Date: Fri, 23 Jun 2023 08:19:51 -0700
Subject: [PATCH] KVM: x86: Acquire SRCU read lock when handling fastpath MSR
 writes

Temporarily acquire kvm->srcu for read when potentially emulating WRMSR in
the VM-Exit fastpath handler, as several of the common helpers used during
emulation expect the caller to provide SRCU protection.  E.g. if the guest
is counting instructions retired, KVM will query the PMU event filter when
stepping over the WRMSR.

  dump_stack+0x85/0xdf
  lockdep_rcu_suspicious+0x109/0x120
  pmc_event_is_allowed+0x165/0x170
  kvm_pmu_trigger_event+0xa5/0x190
  handle_fastpath_set_msr_irqoff+0xca/0x1e0
  svm_vcpu_run+0x5c3/0x7b0 [kvm_amd]
  vcpu_enter_guest+0x2108/0x2580

Alternatively, check_pmu_event_filter() could acquire kvm->srcu, but this
isn't the first bug of this nature, e.g. see commit 5c30e8101e8d ("KVM:
SVM: Skip WRMSR fastpath on VM-Exit if next RIP isn't valid").  Providing
protection for the entirety of WRMSR emulation will allow reverting the
aforementioned commit, and will avoid having to play whack-a-mole when new
uses of SRCU-protected structures are inevitably added in common emulation
helpers.

Fixes: dfdeda67ea2d ("KVM: x86/pmu: Prevent the PMU from counting disallowed events")
Reported-by: Aaron Lewis <aaronlewis@xxxxxxxxxx>
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
 arch/x86/kvm/x86.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 439312e04384..5f220c04624e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2172,6 +2172,8 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 	u64 data;
 	fastpath_t ret = EXIT_FASTPATH_NONE;
 
+	kvm_vcpu_srcu_read_lock(vcpu);
+
 	switch (msr) {
 	case APIC_BASE_MSR + (APIC_ICR >> 4):
 		data = kvm_read_edx_eax(vcpu);
@@ -2194,6 +2196,8 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 	if (ret != EXIT_FASTPATH_NONE)
 		trace_kvm_msr_write(msr, data);
 
+	kvm_vcpu_srcu_read_unlock(vcpu);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_irqoff);

base-commit: 88bb466c9dec4f70d682cf38c685324e7b1b3d60
-- 



