On Thu, Oct 3, 2019 at 9:02 AM Moger, Babu <Babu.Moger@xxxxxxx> wrote:
>
>
>
> On 9/9/19 3:54 AM, Vitaly Kuznetsov wrote:
> > Jim Mattson <jmattson@xxxxxxxxxx> writes:
> >
> >> On Wed, Sep 4, 2019 at 9:51 AM Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> wrote:
> >>
> >>> Currently, VMX code only supports writing '0' to MSR_IA32_XSS:
> >>>
> >>>         case MSR_IA32_XSS:
> >>>                 if (!vmx_xsaves_supported() ||
> >>>                     (!msr_info->host_initiated &&
> >>>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> >>>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
> >>>                         return 1;
> >>>                 /*
> >>>                  * The only supported bit as of Skylake is bit 8, but
> >>>                  * it is not supported on KVM.
> >>>                  */
> >>>                 if (data != 0)
> >>>                         return 1;
> >>>
> >>> We will probably need the same limitation for SVM; however, I'd vote for
> >>> creating separate kvm_x86_ops->set_xss() implementations.
> >>
> >> I hope separate implementations are unnecessary. The allowed IA32_XSS
> >> bits should be derivable from guest_cpuid_has() in a
> >> vendor-independent way. Otherwise, the CPU vendors have messed up. :-)
> >>
> >> At present, we use the MSR-load area to swap guest/host values of
> >> IA32_XSS on Intel (when the host and guest values differ), but it
> >> seems to me that IA32_XSS and %xcr0 should be swapped at the same
> >> time, in kvm_load_guest_xcr0/kvm_put_guest_xcr0. This potentially adds
> >> an additional L1 WRMSR VM-exit to every emulated VM-entry or VM-exit
> >> for nVMX, but since the host currently sets IA32_XSS to 0 and we only
> >> allow the guest to set IA32_XSS to 0, we can probably worry about this
> >> later.
> >
> > Yeah, I was suggesting splitting the implementations as future-proofing,
> > but a comment like "This ought to be 0 for now" would also do.
>
> Hi, is anyone actively working on this?
>
> I was trying to expose xsaves to an AMD qemu guest and found out that we
> need to get the above code working before I can expose xsaves to the
> guest. I can re-post these patches if that is ok.
>
> Sorry, I don't have the complete background. From what I understood, we
> need to add the same check as Intel for MSR_IA32_XSS in get_msr and
> set_msr:
>
> static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> {
> ..
>         case MSR_IA32_XSS:
>                 if (!vmx_xsaves_supported() ||
>                     (!msr_info->host_initiated &&
>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
>                         return 1;
>                 msr_info->data = vcpu->arch.ia32_xss;
>                 break;
> ..
> }
>
> static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> {
> ..
>         case MSR_IA32_XSS:
>                 if (!vmx_xsaves_supported() ||
>                     (!msr_info->host_initiated &&
>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
>                         return 1;
>                 /*
>                  * The only supported bit as of Skylake is bit 8, but
>                  * it is not supported on KVM.
>                  */
>                 if (data != 0)
>                         return 1;
>                 vcpu->arch.ia32_xss = data;
>                 if (vcpu->arch.ia32_xss != host_xss)
>                         add_atomic_switch_msr(vmx, MSR_IA32_XSS,
>                                 vcpu->arch.ia32_xss, host_xss, false);
>                 else
>                         clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
>                 break;
> ...
> }
>
> We probably don't need the last "if/else", since we only allow writing 0
> for now. Does this look accurate?

Aaron is working on it, and I think he's close to being ready to send out
the next revision.
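
For reference, a minimal sketch of what the corresponding SVM-side check
could look like, simply mirroring the VMX code quoted above. Whether this
logic ends up in svm_set_msr() or in vendor-neutral common code is exactly
what the thread is debating, so treat the placement (and the use of
svm_xsaves_supported()) as assumptions rather than as the actual patch:

        case MSR_IA32_XSS:
                /* Same CPUID gating as the VMX code quoted above. */
                if (!svm_xsaves_supported() ||
                    (!msr_info->host_initiated &&
                     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
                       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
                        return 1;
                /* As on VMX, only IA32_XSS == 0 is accepted for now. */
                if (data != 0)
                        return 1;
                vcpu->arch.ia32_xss = data;
                break;

Since the host keeps IA32_XSS at 0 and the guest is only allowed to write 0,
no guest/host MSR switching is needed yet; that can be revisited once
non-zero IA32_XSS bits are actually supported.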