Paolo Bonzini <pbonzini@xxxxxxxxxx> writes:

> Commit 7e2175ebd695 ("KVM: x86: Fix recording of guest steal time
> / preempted status", 2021-11-11) open coded the previous call to
> kvm_map_gfn, but in doing so it dropped the comparison between the cached
> guest physical address and the one in the MSR. This causes an incorrect
> cache hit if the guest modifies the steal time address while the memslots
> remain the same. This can happen with kexec, in which case the preempted
> bit is written at the address used by the old kernel instead of
> the new one.
>
> Cc: David Woodhouse <dwmw@xxxxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Fixes: 7e2175ebd695 ("KVM: x86: Fix recording of guest steal time / preempted status")
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>

(No need to S-o-b twice.)

> ---
>  arch/x86/kvm/x86.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 0f3c2e034740..8ee4698cb90a 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4715,6 +4715,7 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
>  	struct kvm_steal_time __user *st;
>  	struct kvm_memslots *slots;
>  	static const u8 preempted = KVM_VCPU_PREEMPTED;
> +	gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
>
>  	/*
>  	 * The vCPU can be marked preempted if and only if the VM-Exit was on
> @@ -4742,6 +4743,7 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
>  	slots = kvm_memslots(vcpu->kvm);
>
>  	if (unlikely(slots->generation != ghc->generation ||
> +		     gpa != ghc->gpa ||
>  		     kvm_is_error_hva(ghc->hva) || !ghc->memslot))

(We could probably have a common helper for both these places; a rough,
untested sketch at the end of this mail.)

>  		return;

Reviewed-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>

-- 
Vitaly
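
P.S. The sketch mentioned above -- completely untested, and the helper
name/signature are just my guess, not something from the patch; only the
field and macro names (vcpu->arch.st, KVM_STEAL_VALID_BITS, the ghc
members) are taken from the code this diff touches:

/*
 * Sketch only: centralize the "is the cached steal time mapping still
 * valid?" check so kvm_steal_time_set_preempted() and record_steal_time()
 * don't duplicate it.
 */
static bool kvm_steal_time_cache_valid(struct kvm_vcpu *vcpu)
{
	struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
	struct kvm_memslots *slots = kvm_memslots(vcpu->kvm);
	gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;

	return slots->generation == ghc->generation &&
	       gpa == ghc->gpa &&
	       !kvm_is_error_hva(ghc->hva) &&
	       ghc->memslot != NULL;
}

kvm_steal_time_set_preempted() could then bail out early on
!kvm_steal_time_cache_valid(vcpu), and record_steal_time() could use the
same check to decide when to re-initialize the cache.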