RE: [PATCH 1/1] KVM: X86: add the support of XSAVE/XRSTOR to guest

Avi Kivity wrote:
> On 05/06/2010 07:23 AM, Cui, Dexuan wrote:
>> 
>>>> +		goto err;
>>>> +	vcpu->arch.guest_xstate_mask = new_bv;
>>>> +	xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.guest_xstate_mask);
>>>> 
>>>> 
>>> Can't we run with the host xcr0?  isn't it guaranteed to be a
>>> superset of the guest xcr0? 
>>> 
>> I agree host xcr0 is a superset of guest xcr0.
>> In the case where guest xcr0 has fewer bits set than host xcr0, I suppose
>> running with guest xcr0 would be a bit faster.
> 
> However, switching xcr0 may be slow.  That's our experience with msrs.
> Can you measure its latency?
We can measure that.
However, I think changing xcr0 to the guest's value in handle_xsetbv() is
necessary -- otherwise an xgetbv() executed inside the guest would return the
host xcr0 rather than the guest xcr0, which is obviously incorrect. Once
handle_xsetbv() loads the guest's value into xcr0, the xsetbv() in
kvm_fx_restore_host() is also necessary, and so is the xsetbv() in
kvm_fx_restore_guest(). So it looks like the guest can't simply run with the
host xcr0.
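To make that concrete, here is a minimal sketch of how the switch in the fx
restore paths could look (kvm_fx_restore_guest()/kvm_fx_restore_host(), the
guest_xstate_mask field and the xsetbv() helper are the ones from this patch;
host_xcr0 is an assumed variable caching the host's XCR0 once at setup).
Skipping the write when the two masks match would address the latency concern
in the common case:

static u64 host_xcr0;	/* assumed: cached once during hardware setup */

static void kvm_fx_restore_guest(struct kvm_vcpu *vcpu)
{
	/*
	 * Load the guest's mask before entering the guest, so that an
	 * xgetbv executed in the guest returns the guest's value.
	 * Skip the write when both masks match to avoid the xsetbv cost.
	 */
	if (vcpu->arch.guest_xstate_mask != host_xcr0)
		xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.guest_xstate_mask);
}

static void kvm_fx_restore_host(struct kvm_vcpu *vcpu)
{
	/* Restore the full host mask for host-side users of the FPU. */
	if (vcpu->arch.guest_xstate_mask != host_xcr0)
		xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
}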

 
> Can also be avoided if guest and host xcr0 match.
> 
>> If you think simplifying the code by removing the field
>> guest_xstate_mask is more important, we can do it.
>> 
> 
> Well we need guest_xstate_mask, it's the guest's xcr0, no?
I misread it. Yes, we need guest_xstate_mask.

> 
> btw, it needs save/restore for live migration, as well as save/restore
> for the new fpu state.
Yes. This part is missing. Sheng and I are also working on it -- it may be a little
troublesome as the number of XSTATE components can grow over time. We'll
have to handle the compatibility issue.
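As a rough illustration of what the save/restore interface might carry (the
structure and field names below are made up just for discussion, not a
proposed ABI), exporting the enabled-feature mask together with the raw area
would let the destination refuse a state image that uses XSTATE components it
doesn't support:

/* Hypothetical layout, for discussion only. */
struct kvm_xsave_state {
	__u64 xstate_bv;	/* XSTATE components present in the image */
	__u32 size;		/* size in bytes of region[] */
	__u32 pad;
	__u8  region[4096];	/* raw XSAVE area in the standard format */
};

On restore, the destination would compare xstate_bv against its own supported
mask before loading anything.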

> 
>>>> +	skip_emulated_instruction(vcpu);
>>>> +	return 1;
>>>> +err:
>>>> +	kvm_inject_gp(vcpu, 0);
>>>> 
>>>> 
>>> Need to #UD in some circumstances.
>>> 
>> I don't think so. When the CPU doesn't support XSAVE, or the guest
>> hasn't set guest CR4.OSXSAVE, the guest's attempt to execute xsetbv
>> gets a #UD immediately and no VM exit occurs.
>> 
> 
> Ok.
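Just to spell out the #GP cases that handle_xsetbv() still has to cover
itself, here's a minimal sketch (host_xcr0 is again an assumed cached copy of
the host mask; the XSTATE_* masks are the ones from asm/xsave.h):

static int valid_guest_xcr0(u64 new_bv)
{
	/* x87 state (bit 0) can never be cleared. */
	if (!(new_bv & XSTATE_FP))
		return 0;
	/* YMM state requires SSE state. */
	if ((new_bv & XSTATE_YMM) && !(new_bv & XSTATE_SSE))
		return 0;
	/* No bits beyond what the host itself has enabled. */
	if (new_bv & ~host_xcr0)
		return 0;
	return 1;
}

handle_xsetbv() would take the err path (inject #GP) when a check like this
fails; the no-XSAVE / CR4.OSXSAVE=0 cases never reach the handler at all, as
noted above.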
> 
>>>> @@ -62,6 +64,7 @@
>>>>    	(~(unsigned long)(X86_CR4_VME | X86_CR4_PVI | X86_CR4_TSD |
>>>>    			  X86_CR4_DE\ | X86_CR4_PSE | X86_CR4_PAE | X86_CR4_MCE	\
>>>>    			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR	\
>>>> +			  | (cpu_has_xsave ? X86_CR4_OSXSAVE : 0)	\
>>>>    			  | X86_CR4_OSXMMEXCPT | X86_CR4_VMXE))
>>>> 
>>>> 
>>> It also depends on the guest cpuid value.  Please add it outside the
>>> 
>> If cpu_has_xsave is true, we always present xsave to the guest via cpuid,
>> so I think the code is correct here.
>> 
> 
> We don't pass all host cpuid values to the guest.  We expose them to
> userspace via KVM_GET_SUPPORTED_CPUID, and then userspace decides what
> cpuid to present to the guest via KVM_SET_CPUID2.  So the guest may
> run with fewer features than the host.
Sorry, I didn't notice that. I'll look into it.
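Thinking about it, the check probably needs to consult the cpuid that
userspace actually set, e.g. something roughly like this
(guest_cpuid_has_xsave() and cr4_osxsave_allowed() are names I'm making up
here just to illustrate; kvm_find_cpuid_entry() is the existing lookup
helper):

/* Assumed helper: look up leaf 1 of the cpuid userspace set via
 * KVM_SET_CPUID2 and test the XSAVE feature bit. */
static bool guest_cpuid_has_xsave(struct kvm_vcpu *vcpu)
{
	struct kvm_cpuid_entry2 *best = kvm_find_cpuid_entry(vcpu, 1, 0);

	return best && (best->ecx & (1u << 26));	/* CPUID.1:ECX.XSAVE */
}

static bool cr4_osxsave_allowed(struct kvm_vcpu *vcpu, unsigned long cr4)
{
	/*
	 * Treat OSXSAVE as reserved unless the guest's cpuid exposes
	 * XSAVE, instead of keying the reserved-bit mask off
	 * cpu_has_xsave alone.
	 */
	return !(cr4 & X86_CR4_OSXSAVE) || guest_cpuid_has_xsave(vcpu);
}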

> 
>> I saw the 2 patches you sent. They (like the new APIs
>> fpu_alloc/fpu_free) will simplify the implementation of kvm xsave
>> support. Thanks a lot! 
>> 
> 
> Thanks.  To simplify things, please separate host xsave support
> (switching to the fpu api) and guest xsave support (cpuid, xsetbv, new
> ioctls) into separate patches.  In fact, guest xsave support is
> probably best split into patches as well.
Ok. Will try to do these.

Thanks,
-- Dexuan