On Fri, Oct 18, 2019 at 09:58:02AM +0800, Yang Weijiang wrote:
> On Thu, Oct 17, 2019 at 12:56:42PM -0700, Sean Christopherson wrote:
> > On Wed, Oct 02, 2019 at 12:05:23PM -0700, Jim Mattson wrote:
> > > > +		u64 kvm_xss = kvm_supported_xss();
> > > > +
> > > > +		best->ebx =
> > > > +			xstate_required_size(vcpu->arch.xcr0 | kvm_xss, true);
> > >
> > > Shouldn't this size be based on the *current* IA32_XSS value, rather
> > > than the supported IA32_XSS bits? (i.e.
> > > s/kvm_xss/vcpu->arch.ia32_xss/)
> >
> > Ya.
>
> I'm not sure if I understand correctly: kvm_xss is what KVM supports,
> but arch.ia32_xss reflects what the guest is currently using. Shouldn't
> CPUID report what KVM supports instead of the current status?
> Will CPUID match the current IA32_XSS status if the guest changes it at
> runtime?

Not in this case.  Select CPUID output is dependent on current state, as
opposed to being a constant defined by hardware.  Per the SDM, EBX is:

  The size in bytes of the XSAVE area containing all states enabled by
  XCR0 | IA32_XSS.

Since KVM is emulating CPUID for the guest, XCR0 and IA32_XSS in this
context refer to the guest's current/actual XCR0/IA32_XSS values.

The purpose of this behavior is so that software can call CPUID to query
the actual amount of memory that is needed for XSAVE(S), as opposed to
the absolute max size that _might_ be needed.

MONITOR/MWAIT is the other case that comes to mind where CPUID
dynamically reflects configured state, e.g. MWAIT is reported as
unsupported if it's disabled via the IA32_MISC_ENABLE MSR.