Xiaoyao Li <xiaoyao.li@xxxxxxxxx> writes:

> +/*
> + * Note: for guest, feature split lock detection can only be enumerated through
> + * MSR_IA32_CORE_CAPABILITIES bit. The FMS enumeration is unsupported.

That comment is confusing at best.

> + */
> +static inline bool guest_cpu_has_feature_sld(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.core_capabilities &
> +	       MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT;
> +}
> +
> +static inline bool guest_cpu_sld_on(struct vcpu_vmx *vmx)
> +{
> +	return vmx->msr_test_ctrl & MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
> +}
> +
> +static inline void vmx_update_sld(struct kvm_vcpu *vcpu, bool on)
> +{
> +	/*
> +	 * Toggle SLD if the guest wants it enabled but its been disabled for
> +	 * the userspace VMM, and vice versa. Note, TIF_SLD is true if SLD has
> +	 * been turned off.

Yes, it's a terrible name. Instead of writing that useless blurb you could
have written a patch which changes TIF_SLD to TIF_SLD_OFF to make it
clear.

> +	 */
> +	if (sld_state == sld_warn && guest_cpu_has_feature_sld(vcpu) &&
> +	    on == test_thread_flag(TIF_SLD)) {
> +		sld_update_msr(on);
> +		update_thread_flag(TIF_SLD, !on);

Of course you completely fail to explain why TIF_SLD needs to be fiddled
with.

> @@ -1188,6 +1217,10 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
>  #endif
>  
>  	vmx_set_host_fs_gs(host_state, fs_sel, gs_sel, fs_base, gs_base);
> +
> +	vmx->host_sld_on = !test_thread_flag(TIF_SLD);

This inverted storage is non-intuitive. What's wrong with simply
reflecting the TIF_SLD state?

> +	vmx_update_sld(vcpu, guest_cpu_sld_on(vmx));
> +
>  	vmx->guest_state_loaded = true;
>  }
>  
> @@ -1226,6 +1259,9 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
>  		wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
>  #endif
>  	load_fixmap_gdt(raw_smp_processor_id());
> +
> +	vmx_update_sld(&vmx->vcpu, vmx->host_sld_on);
> +

vmx_prepare_switch_to_guest() is called via:

     kvm_arch_vcpu_ioctl_run()
       vcpu_run()
         vcpu_enter_guest()
           preempt_disable();
           kvm_x86_ops.prepare_guest_switch(vcpu);

but vmx_prepare_switch_to_host() is invoked at the very end of:

     kvm_arch_vcpu_ioctl_run()
        .....
        vcpu_run()
        .....
        vcpu_put()
          vmx_vcpu_put()
            vmx_prepare_switch_to_host();

That asymmetry does not make any sense without an explanation. What's even
worse is that vmx_prepare_switch_to_host() is invoked with preemption
enabled, so MSR state and TIF_SLD state can get out of sync on
preemption/migration.

> @@ -1946,9 +1992,15 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  
>  	switch (msr_index) {
>  	case MSR_TEST_CTRL:
> -		if (data)
> +		if (data & ~vmx_msr_test_ctrl_valid_bits(vcpu))
>  			return 1;
>  
> +		vmx->msr_test_ctrl = data;
> +
> +		preempt_disable();

This preempt_disable/enable() lacks explanation as well.

> +		if (vmx->guest_state_loaded)
> +			vmx_update_sld(vcpu, guest_cpu_sld_on(vmx));
> +		preempt_enable();

How is updating msr_test_ctrl valid if this is invoked from the IOCTL,
i.e. host_initiated == true?

That said, I also hate the fact that you export both the low level MSR
function _and_ the state variable. Having all these details including the
TIF mangling in the VMX code is just wrong.

Thanks,

        tglx
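
For reference on why the flag has to be touched at all: the core
context-switch code only rewrites MSR_TEST_CTRL when the outgoing and
incoming task disagree on TIF_SLD, so flipping the MSR behind its back
without also updating the flag leaves the two out of sync. The stand-alone
model below mimics that interplay; it is a sketch of the 5.6-era behaviour
(__switch_to_xtra()/switch_to_sld()), not the actual kernel code, and the
exact switch condition should be treated as an assumption.

/*
 * User-space model of the TIF_SLD <-> MSR_TEST_CTRL interplay.  Not kernel
 * code; it only mimics the logic so the sync requirement is visible.
 */
#include <stdbool.h>
#include <stdio.h>

static bool msr_sld_on = true;          /* models MSR_TEST_CTRL.SLD        */

struct task { bool tif_sld; };          /* TIF_SLD: true == SLD turned off */

/* Core code: the MSR is only rewritten when the flag differs between the
 * outgoing and the incoming task. */
static void context_switch(struct task *prev, struct task *next)
{
	if (prev->tif_sld != next->tif_sld)
		msr_sld_on = !next->tif_sld;
}

/* What vmx_update_sld() effectively does: change the MSR for the vCPU task
 * and keep the per-task flag consistent so the next context switch does
 * the right thing. */
static void vcpu_set_sld(struct task *vcpu_task, bool on)
{
	msr_sld_on = on;
	vcpu_task->tif_sld = !on;
}

int main(void)
{
	struct task vmm = { .tif_sld = false }, other = { .tif_sld = false };

	/* Guest turns SLD off.  Because the flag is updated as well, the
	 * context-switch code re-enables SLD for 'other' and disables it
	 * again when the vCPU task comes back.  Without the flag update the
	 * switch code would see no difference and never rewrite the MSR. */
	vcpu_set_sld(&vmm, false);
	context_switch(&vmm, &other);
	printf("SLD while 'other' runs: %s\n", msr_sld_on ? "on" : "off");
	context_switch(&other, &vmm);
	printf("SLD while vCPU runs:    %s\n", msr_sld_on ? "on" : "off");
	return 0;
}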
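
On the host_initiated question: struct msr_data carries a host_initiated
flag distinguishing a guest WRMSR exit from a write done by the userspace
VMM through the ioctl (e.g. for save/restore). The common KVM pattern is to
gate guest writes on the feature being exposed while still accepting
host-initiated writes, and to keep the ioctl path from poking the live MSR.
A rough sketch of that pattern, reusing the helpers from the patch above
(an illustration of the convention, not a statement of what the final code
must do):

	case MSR_TEST_CTRL:
		if (data & ~vmx_msr_test_ctrl_valid_bits(vcpu))
			return 1;
		/* Guest writes require the feature; host-initiated writes
		 * are accepted for state save/restore. */
		if (!msr_info->host_initiated &&
		    !guest_cpu_has_feature_sld(vcpu))
			return 1;
		vmx->msr_test_ctrl = data;
		/* Only touch hardware state from the vcpu_run() path, never
		 * from the ioctl context. */
		break;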
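
As for keeping the TIF mangling out of VMX entirely, one option would be a
single helper owned by the split-lock core code. This is a purely
hypothetical sketch (no such helper exists at this point); it merely
bundles the two operations the patch currently open-codes, so that KVM
would call one interface instead of exporting both sld_update_msr() and
the sld_state variable:

	/* arch/x86/kernel/cpu/intel.c -- hypothetical */
	void sld_set_task_state(bool sld_on)
	{
		sld_update_msr(sld_on);
		update_thread_flag(TIF_SLD, !sld_on);
	}
	EXPORT_SYMBOL_GPL(sld_set_task_state);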