On Fri, Mar 07, 2025, Dan Carpenter wrote:
> Hello Sean Christopherson,
>
> Commit 636e8b733491 ("KVM: VMX: Use GPA legality helpers to replace
> open coded equivalents") from Feb 3, 2021 (linux-next), leads to the
> following Smatch static checker warning:
>
> 	arch/x86/kvm/vmx/nested.c:834 nested_vmx_check_msr_switch()
> 	warn: potential user controlled sizeof overflow 'addr + count * 16' '0-u64max + 16-68719476720'
>
> arch/x86/kvm/vmx/nested.c
>     827 static int nested_vmx_check_msr_switch(struct kvm_vcpu *vcpu,
>     828                                        u32 count, u64 addr)
>     829 {
>     830         if (count == 0)
>     831                 return 0;
>     832
>     833         if (!kvm_vcpu_is_legal_aligned_gpa(vcpu, addr, 16) ||
> --> 834             !kvm_vcpu_is_legal_gpa(vcpu, (addr + count * sizeof(struct vmx_msr_entry) - 1)))
>                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Do we support kvm on 32bit systems?

"Support" might be a bit of a strong word, but yes, KVM is supposed to work
on 32-bit systems.  A tiny standalone illustration of the 32-bit wrap Smatch
is flagging is tacked on at the end of this mail.

Even if we ignore 32-bit, not explicitly checking the count is silly.
There's an explicit limit on @count, and it's architecturally capped at
4096.  I suspect the only reason KVM doesn't check it here is because
exceeding the limit isn't listed in the SDM as a consistency check.  But
the SDM does say that exceeding the limit results in undefined behavior,
"(including a machine check during the VMX transition)".  I.e. KVM can
quite literally do whatever it wants.

And KVM does check the limit later on.  I can't think of any ordering
weirdness that would be broken by doing the check earlier, so assuming
testing comes back clean, I'll post this:

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index d06e50d9c0e7..64ea387a14a1 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -824,12 +824,30 @@ static int nested_vmx_check_apicv_controls(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static u32 nested_vmx_max_atomic_switch_msrs(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	u64 vmx_misc = vmx_control_msr(vmx->nested.msrs.misc_low,
+				       vmx->nested.msrs.misc_high);
+
+	return (vmx_misc_max_msr(vmx_misc) + 1) * VMX_MISC_MSR_LIST_MULTIPLIER;
+}
+
 static int nested_vmx_check_msr_switch(struct kvm_vcpu *vcpu,
 				       u32 count, u64 addr)
 {
 	if (count == 0)
 		return 0;
 
+	/*
+	 * Exceeding the limit results in architecturally _undefined_ behavior,
+	 * i.e. KVM is allowed to do literally anything in response to a bad
+	 * limit.  Immediately generate a consistency check so that code that
+	 * consumes the count doesn't need to worry about extreme edge cases.
+	 */
+	if (count > nested_vmx_max_atomic_switch_msrs(vcpu))
+		return -EINVAL;
+
 	if (!kvm_vcpu_is_legal_aligned_gpa(vcpu, addr, 16) ||
 	    !kvm_vcpu_is_legal_gpa(vcpu, (addr + count * sizeof(struct vmx_msr_entry) - 1)))
 		return -EINVAL;
@@ -940,15 +958,6 @@ static int nested_vmx_store_msr_check(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
-static u32 nested_vmx_max_atomic_switch_msrs(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	u64 vmx_misc = vmx_control_msr(vmx->nested.msrs.misc_low,
-				       vmx->nested.msrs.misc_high);
-
-	return (vmx_misc_max_msr(vmx_misc) + 1) * VMX_MISC_MSR_LIST_MULTIPLIER;
-}
-
 /*
  * Load guest's/host's msr at nested entry/exit.
  * return 0 for success, entry index for failure.
@@ -965,7 +974,7 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 	u32 max_msr_list_size = nested_vmx_max_atomic_switch_msrs(vcpu);
 
 	for (i = 0; i < count; i++) {
-		if (unlikely(i >= max_msr_list_size))
+		if (WARN_ON_ONCE(i >= max_msr_list_size))
 			goto fail;
 
 		if (kvm_vcpu_read_guest(vcpu, gpa + i * sizeof(e),
@@ -1053,7 +1062,7 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 	u32 max_msr_list_size = nested_vmx_max_atomic_switch_msrs(vcpu);
 
 	for (i = 0; i < count; i++) {
-		if (unlikely(i >= max_msr_list_size))
+		if (WARN_ON_ONCE(i >= max_msr_list_size))
 			return -EINVAL;
 
 		if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
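
For completeness, here's the promised standalone sketch (userspace, not kernel
code) of the 32-bit wrap Smatch is worried about.  It assumes the 16-byte
sizeof(struct vmx_msr_entry) that the warning's "count * 16" reflects; the
addr/count values are made up purely to show the wrap:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t addr  = 0x100000;	/* legal, 16-byte aligned GPA */
	uint32_t count = 0x10000000;	/* guest-controlled, way above the 4096 cap */

	/*
	 * On a 32-bit kernel, sizeof(struct vmx_msr_entry) is a 32-bit
	 * size_t, so count * 16 is computed in 32 bits and wraps to 0 here.
	 * The "last byte" handed to kvm_vcpu_is_legal_gpa() then collapses
	 * to addr - 1, which looks perfectly legal even though the real MSR
	 * list is enormous.
	 */
	uint32_t bytes32 = count * (uint32_t)16;
	uint64_t last    = addr + bytes32 - 1;

	printf("count * 16 (32-bit) = %#x, last byte = %#llx\n",
	       (unsigned)bytes32, (unsigned long long)last);
	return 0;
}

With the early consistency check in the patch, count is capped at 4096 before
any of that math runs (4096 * 16 = 65536), so the multiplication can't wrap
even with a 32-bit size_t.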