On Thu, Mar 05, 2020 at 10:16:40PM +0800, Xiaoyao Li wrote:
> On 3/4/2020 3:30 AM, Sean Christopherson wrote:
> >On Thu, Feb 06, 2020 at 03:04:12PM +0800, Xiaoyao Li wrote:
> >>--- a/arch/x86/kvm/vmx/vmx.c
> >>+++ b/arch/x86/kvm/vmx/vmx.c
> >>@@ -1781,6 +1781,25 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
> >>         }
> >> }
> >>+/*
> >>+ * Note: for guest, feature split lock detection can only be enumerated through
> >>+ * MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT bit. The FMS enumeration is invalid.
> >>+ */
> >>+static inline bool guest_has_feature_split_lock_detect(struct kvm_vcpu *vcpu)
> >>+{
> >>+        return vcpu->arch.core_capabilities & MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT;
> >>+}
> >>+
> >>+static inline u64 vmx_msr_test_ctrl_valid_bits(struct kvm_vcpu *vcpu)
> >>+{
> >>+        u64 valid_bits = 0;
> >>+
> >>+        if (guest_has_feature_split_lock_detect(vcpu))
> >>+                valid_bits |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
> >>+
> >>+        return valid_bits;
> >>+}
> >>+
> >> /*
> >>  * Reads an msr value (of 'msr_index') into 'pdata'.
> >>  * Returns 0 on success, non-0 otherwise.
> >>@@ -1793,6 +1812,12 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >>         u32 index;
> >>         switch (msr_info->index) {
> >>+        case MSR_TEST_CTRL:
> >>+                if (!msr_info->host_initiated &&
> >>+                    !guest_has_feature_split_lock_detect(vcpu))
> >>+                        return 1;
> >>+                msr_info->data = vmx->msr_test_ctrl;
> >>+                break;
> >> #ifdef CONFIG_X86_64
> >>         case MSR_FS_BASE:
> >>                 msr_info->data = vmcs_readl(GUEST_FS_BASE);
> >>@@ -1934,6 +1959,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >>         u32 index;
> >>         switch (msr_index) {
> >>+        case MSR_TEST_CTRL:
> >>+                if (!msr_info->host_initiated &&
> >>+                    (!guest_has_feature_split_lock_detect(vcpu) ||
> >>+                     data & ~vmx_msr_test_ctrl_valid_bits(vcpu)))
> >>+                        return 1;
> >>+                vmx->msr_test_ctrl = data;
> >>+                break;
> >
> >Host initiated writes need to be validated against
> >kvm_get_core_capabilities(), otherwise userspace can enable SLD when it's
> >supported in hardware and the kernel, but can't be safely exposed to the
> >guest due to SMT being on.
>
> How about making the whole check like this:
>
>         if (!msr_info->host_initiated &&
>             (!guest_has_feature_split_lock_detect(vcpu))
>                 return 1;
>
>         if (data & ~vmx_msr_test_ctrl_valid_bits(vcpu))

Whoops, the check against kvm_get_core_capabilities() should be done in
"case MSR_IA32_CORE_CAPS:", i.e. KVM shouldn't let host userspace advertise
split-lock support unless it's allowed by KVM.  Then this code doesn't need
a separate check for host_initiated=true.

Back to the original code, I don't think we need to make the existence of
MSR_TEST_CTRL dependent on guest_has_feature_split_lock_detect(), i.e. this
check can simply be:

        if (!msr_info->host_initiated &&
            (data & ~vmx_msr_test_ctrl_valid_bits(vcpu)))
                return 1;

and vmx_get_msr() doesn't need to check anything, i.e. RDMSR always succeeds.

This is actually aligned with real silicon behavior, because MSR_TEST_CTRL
exists on older processors; it just wasn't documented until we decided to
throw in SPLIT_LOCK_AC.  E.g. the LOCK# suppression bit is marked for
deprecation in the SDM, which wouldn't be necessary if it didn't exist :-)

  Intel ISA/Feature                            Year of Removal
  TEST_CTRL MSR, bit 31 (MSR address 33H)      2019 onwards
    31  Disable LOCK# assertion for split locked access

On my Haswell box:

  $ rdmsr 0x33
  0
  $ wrmsr 0x33 0x20000000
  wrmsr: CPU 0 cannot set MSR 0x00000033 to 0x0000000020000000
  $ wrmsr 0x33 0x80000000
  $ rdmsr 0x33
  80000000
  $ wrmsr 0x33 0x00000000
  $ rdmsr 0x33
  0

That way the guest_has_feature_split_lock_detect() helper isn't needed, since
its only user is vmx_msr_test_ctrl_valid_bits(), i.e. it can be open coded
there.
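
E.g. something along these lines (completely untested, just to illustrate the
open-coded variant; all names are the ones already used in this series):

        static inline u64 vmx_msr_test_ctrl_valid_bits(struct kvm_vcpu *vcpu)
        {
                u64 valid_bits = 0;

                /* SLD is enumerated to the guest only via CORE_CAPABILITIES. */
                if (vcpu->arch.core_capabilities &
                    MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT)
                        valid_bits |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;

                return valid_bits;
        }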
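
With that, the MSR_TEST_CTRL read path collapses to just reflecting the
shadow value (again untested, just a sketch):

        case MSR_TEST_CTRL:
                msr_info->data = vmx->msr_test_ctrl;
                break;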
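
And for the kvm_get_core_capabilities() point above, I'd expect the
MSR_IA32_CORE_CAPS write handler to end up looking something like the below
(untested sketch, assuming core_capabilities stays in vcpu->arch as in this
series):

        case MSR_IA32_CORE_CAPS:
                /*
                 * Writable only by host userspace, and only with bits that
                 * KVM allows to be advertised to the guest.
                 */
                if (!msr_info->host_initiated ||
                    (data & ~kvm_get_core_capabilities()))
                        return 1;
                vcpu->arch.core_capabilities = data;
                break;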