On Mon, Oct 14, 2019 at 12:05 PM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> On Sat, Oct 12, 2019 at 10:36:15AM -0700, Jim Mattson wrote:
> > On Fri, Oct 11, 2019 at 5:18 PM Sean Christopherson
> > <sean.j.christopherson@xxxxxxxxx> wrote:
> > >
> > > On Fri, Oct 11, 2019 at 12:40:29PM -0700, Aaron Lewis wrote:
> > > > Set IA32_XSS for the guest and host during VM Enter and VM Exit
> > > > transitions rather than by using the MSR-load areas.
> > > >
> > > > By moving away from using the MSR-load area we can have a unified
> > > > solution for both AMD and Intel.
> > > >
> > > > Reviewed-by: Jim Mattson <jmattson@xxxxxxxxxx>
> > > > Signed-off-by: Aaron Lewis <aaronlewis@xxxxxxxxxx>
> > > > ---
> > > >  arch/x86/include/asm/kvm_host.h |  1 +
> > > >  arch/x86/kvm/svm.c              |  7 +++++--
> > > >  arch/x86/kvm/vmx/vmx.c          | 22 ++++++++++------------
> > > >  arch/x86/kvm/x86.c              | 23 +++++++++++++++++++----
> > > >  arch/x86/kvm/x86.h              |  4 ++--
> > > >  5 files changed, 37 insertions(+), 20 deletions(-)
> > > >
> > > > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > > > index 50eb430b0ad8..634c2598e389 100644
> > > > --- a/arch/x86/include/asm/kvm_host.h
> > > > +++ b/arch/x86/include/asm/kvm_host.h
> > > > @@ -562,6 +562,7 @@ struct kvm_vcpu_arch {
> > > >  	u64 smbase;
> > > >  	u64 smi_count;
> > > >  	bool tpr_access_reporting;
> > > > +	bool xsaves_enabled;
> > > >  	u64 ia32_xss;
> > > >  	u64 microcode_version;
> > > >  	u64 arch_capabilities;
> > > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> > > > index f8ecb6df5106..da69e95beb4d 100644
> > > > --- a/arch/x86/kvm/svm.c
> > > > +++ b/arch/x86/kvm/svm.c
> > > > @@ -5628,7 +5628,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
> > > >  	svm->vmcb->save.cr2 = vcpu->arch.cr2;
> > > >
> > > >  	clgi();
> > > > -	kvm_load_guest_xcr0(vcpu);
> > > > +	kvm_load_guest_xsave_controls(vcpu);
> > > >
> > > >  	if (lapic_in_kernel(vcpu) &&
> > > >  		vcpu->arch.apic->lapic_timer.timer_advance_ns)
> > > > @@ -5778,7 +5778,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
> > > >  	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
> > > >  		kvm_before_interrupt(&svm->vcpu);
> > > >
> > > > -	kvm_put_guest_xcr0(vcpu);
> > > > +	kvm_load_host_xsave_controls(vcpu);
> > > >  	stgi();
> > > >
> > > >  	/* Any pending NMI will happen here */
> > > > @@ -5887,6 +5887,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
> > > >  {
> > > >  	struct vcpu_svm *svm = to_svm(vcpu);
> > > >
> > > > +	vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> > > > +				    boot_cpu_has(X86_FEATURE_XSAVES);
> > >
> > > This looks very much like a functional change to SVM, which feels wrong
> > > for a patch with a subject of "KVM: VMX: Use wrmsr for switching between
> > > guest and host IA32_XSS".  Shouldn't this be unconditionally set false in
> > > this patch, and then enabled in "kvm: svm: Add support for XSAVES on AMD"?
> >
> > Nothing is being enabled here. Vcpu->arch.xsaves_enabled simply tells
> > us whether or not the guest can execute the XSAVES instruction. Any
> > guest with the ability to set CR4.OSXSAVE on an AMD host that supports
> > XSAVES can use the instruction.
>
> Not enabling per se, but it's a functional change as it means MSR_IA32_XSS
> will be written in kvm_load_{guest,host}_xsave_controls() if host_xss!=0.

Fortunately, host_xss is guaranteed to be zero for the nonce. :-)

Perhaps the commit message just needs to be updated? The only
alternatives I see are:

1. Deliberately introducing buggy code to be removed later in the
series, or

2. Introducing SVM-specific code first, to be removed later in the series.
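[Editor's note: for readers following along, here is a rough sketch of the
wrmsr-based switching being debated, i.e. what kvm_load_guest_xsave_controls()
and kvm_load_host_xsave_controls() in arch/x86/kvm/x86.c plausibly look like
after this patch. The function bodies are not quoted in this thread, so the
guard conditions below are assumptions inferred from the discussion: the XCR0
handling is carried over from the old kvm_load_guest_xcr0()/kvm_put_guest_xcr0(),
and the MSR_IA32_XSS write is gated on vcpu->arch.xsaves_enabled and on the
guest and host values actually differing.]

void kvm_load_guest_xsave_controls(struct kvm_vcpu *vcpu)
{
	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
		/* Switch XCR0 to the guest's value, as before. */
		if (vcpu->arch.xcr0 != host_xcr0)
			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);

		/*
		 * The MSR write Sean is pointing at: it becomes reachable
		 * on SVM because svm_cpuid_update() now sets
		 * xsaves_enabled, but it is a no-op in practice as long as
		 * host_xss and the guest's IA32_XSS are both zero.
		 */
		if (vcpu->arch.xsaves_enabled &&
		    vcpu->arch.ia32_xss != host_xss)
			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
	}
}

void kvm_load_host_xsave_controls(struct kvm_vcpu *vcpu)
{
	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
		/* Restore the host's XCR0 ... */
		if (vcpu->arch.xcr0 != host_xcr0)
			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);

		/* ... and the host's IA32_XSS (currently always zero). */
		if (vcpu->arch.xsaves_enabled &&
		    vcpu->arch.ia32_xss != host_xss)
			wrmsrl(MSR_IA32_XSS, host_xss);
	}
}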