Li RongQing <lirongqing@xxxxxxxxx> writes:

> check steal time address when enable steal time, do not update
> arch.st.msr_val if the address is invalid, and return in #GP
>
> this can avoid unnecessary write/read invalid memory when guest
> is running
>
> Signed-off-by: Li RongQing <lirongqing@xxxxxxxxx>
> ---
>  arch/x86/kvm/x86.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index eb402966..3ed0949 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3616,6 +3616,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		if (data & KVM_STEAL_RESERVED_MASK)
>  			return 1;
>
> +		if (!kvm_vcpu_gfn_to_memslot(vcpu, data >> PAGE_SHIFT))
> +			return 1;
> +

What about using the stronger kvm_is_visible_gfn() instead? I haven't
put much thought into what would happen if we put e.g. the APIC access
page address into the MSR; let's just cut off any such possibility.

>  		vcpu->arch.st.msr_val = data;
>
>  		if (!(data & KVM_MSR_ENABLED))

-- 
Vitaly