Re: [PATCH] KVM: x86: XSAVE/XRSTOR live migration support

On Thursday 27 May 2010 18:02:31 Avi Kivity wrote:
> On 05/27/2010 12:48 PM, Sheng Yang wrote:
> > This patch enables save/restore of xsave state.
> > 
> > Signed-off-by: Sheng Yang <sheng@xxxxxxxxxxxxxxx>
> > ---
> > 
> >   arch/x86/include/asm/kvm.h |   29 ++++++++++++++++
> >   arch/x86/kvm/x86.c         |   79 ++++++++++++++++++++++++++++++++++++++++++++
> >   include/linux/kvm.h        |    6 +++
> 
> Documentation/kvm/api.txt +++++++++++++
> 
> > +/* for KVM_CAP_XSAVE */
> > +struct kvm_xsave {
> > +	struct {
> > +		__u16 cwd;
> > +		__u16 swd;
> > +		__u16 twd;
> > +		__u16 fop;
> > +		__u64 rip;
> > +		__u64 rdp;
> > +		__u32 mxcsr;
> > +		__u32 mxcsr_mask;
> > +		__u32 st_space[32];
> > +		__u32 xmm_space[64];
> > +		__u32 padding[12];
> > +		__u32 sw_reserved[12];
> > +	} i387;
> > +	struct {
> > +		__u64 xstate_bv;
> > +		__u64 reserved1[2];
> > +		__u64 reserved2[5];
> > +	} xsave_hdr;
> > +	struct {
> > +		__u32 ymmh_space[64];
> > +	} ymmh;
> > +	__u64 xcr0;
> > +	__u32 padding[256];
> > +};
> 
> Need to reserve way more space here for future xsave growth.  I think at
> least 4K.  LRB was 32x512bit = 1K (though it probably isn't a candidate
> for vmx).  Would be good to get an opinion from your processor architects.
> 
> I don't think we need to detail the contents of the structures since
> they're described by the SDM; so we can have just a large array that is
> 1:1 with the xsave as saved by the fpu.

I think we can reserve one page here. But one big array makes it harder to work
with the QEmu CPUState; wouldn't we need lots of macros in QEmu to parse the
array? Also, with the current structure I think it's easier to convert the
get/set_fpu interface to get/set_xsave.

--
regards
Yang, Sheng

> 
> If we do that then xcr0 needs to be in a separate structure, say
> kvm_xcr, with a flags field and reserved space of its own for future xcr
> growth.
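
Something along these lines, perhaps (a sketch only; the field names and the
16-entry bound are made up here, not part of the patch):

/* Sketch: separate XCR interface with a flags field and reserved space. */
struct kvm_xcr {
	__u32 xcr;		/* XCR index, e.g. 0 for XCR0 */
	__u32 reserved;
	__u64 value;
};

struct kvm_xcrs {
	__u32 nr_xcrs;
	__u32 flags;
	struct kvm_xcr xcrs[16];
	__u64 padding[16];
};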
> 
> > @@ -2363,6 +2366,59 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct
> > kvm_vcpu *vcpu,
> > 
> >   	return 0;
> >   
> >   }
> > 
> > +static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
> > +					struct kvm_xsave *guest_xsave)
> > +{
> > +	struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state->xsave;
> > +
> > +	if (!cpu_has_xsave)
> > +		return;
> 
> Hm, it would be nice to make it backward compatible and return the
> legacy fpu instead.  I think the layouts are compatible?
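
They should be: the legacy FXSAVE image is the first 512 bytes of the XSAVE
area and matches the i387 part of kvm_xsave field for field, so a fallback
could look roughly like this (a sketch only, not part of the posted patch):

	if (!cpu_has_xsave) {
		/* Sketch: copy the layout-compatible legacy FXSAVE image. */
		struct i387_fxsave_struct *fxsave =
			&vcpu->arch.guest_fpu.state->fxsave;

		memcpy(&guest_xsave->i387, fxsave, sizeof(guest_xsave->i387));
		/* Assumed here: advertise only FP/SSE state in this case. */
		guest_xsave->xsave_hdr.xstate_bv = XSTATE_FPSSE;
		return;
	}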
> 
> > +
> > +	guest_xsave->i387.cwd = xsave->i387.cwd;
> > +	guest_xsave->i387.swd = xsave->i387.swd;
> > +	guest_xsave->i387.twd = xsave->i387.twd;
> > +	guest_xsave->i387.fop = xsave->i387.fop;
> > +	guest_xsave->i387.rip = xsave->i387.rip;
> > +	guest_xsave->i387.rdp = xsave->i387.rdp;
> > +	memcpy(guest_xsave->i387.st_space, xsave->i387.st_space, 128);
> > +	memcpy(guest_xsave->i387.xmm_space, xsave->i387.xmm_space,
> > +			sizeof guest_xsave->i387.xmm_space);
> > +
> > +	guest_xsave->xsave_hdr.xstate_bv = xsave->xsave_hdr.xstate_bv;
> > +	memcpy(guest_xsave->ymmh.ymmh_space, xsave->ymmh.ymmh_space,
> > +			sizeof xsave->ymmh.ymmh_space);
> 
> And we can do a big memcpy here.  But we need to limit it to what the
> host actually allocated.
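
With a flat layout that would reduce to something like the following (a
sketch; region refers to the flat array sketched earlier, and xstate_size
is assumed to be the area size the host FPU code computed at boot):

	/* Sketch: copy the whole XSAVE area in one go, bounded by what
	 * the host allocated, and zero the rest of the buffer. */
	memcpy(guest_xsave->region, &vcpu->arch.guest_fpu.state->xsave,
	       xstate_size);
	memset((u8 *)guest_xsave->region + xstate_size, 0,
	       sizeof(guest_xsave->region) - xstate_size);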
> 
> > +
> > +	guest_xsave->xcr0 = vcpu->arch.xcr0;
> > +}
> > +
> > 
> >   long kvm_arch_vcpu_ioctl(struct file *filp,
> >   
> >   			 unsigned int ioctl, unsigned long arg)
> >   
> >   {
> > 
> > @@ -2564,6 +2620,29 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
> > 
> >   		r = kvm_vcpu_ioctl_x86_set_debugregs(vcpu, &dbgregs);
> >   		break;
> >   	
> >   	}
> > 
> > +	case KVM_GET_XSAVE: {
> > +		struct kvm_xsave xsave;
> 
> Too big for stack (especially if we reserve room for growth).
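
With a page-sized structure the handler would presumably allocate it instead,
roughly like this (a sketch only; error handling abbreviated):

	case KVM_GET_XSAVE: {
		/* Sketch: heap-allocate rather than putting ~4K on the stack. */
		struct kvm_xsave *xsave = kzalloc(sizeof(*xsave), GFP_KERNEL);

		r = -ENOMEM;
		if (!xsave)
			break;

		kvm_vcpu_ioctl_x86_get_xsave(vcpu, xsave);

		r = 0;
		if (copy_to_user(argp, xsave, sizeof(*xsave)))
			r = -EFAULT;
		kfree(xsave);
		break;
	}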
> 
> > diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> > index 23ea022..5006761 100644
> > --- a/include/linux/kvm.h
> > +++ b/include/linux/kvm.h
> > @@ -524,6 +524,9 @@ struct kvm_enable_cap {
> > 
> >   #define KVM_CAP_PPC_OSI 52
> >   #define KVM_CAP_PPC_UNSET_IRQ 53
> >   #define KVM_CAP_ENABLE_CAP 54
> > 
> > +#ifdef __KVM_HAVE_XSAVE
> > +#define KVM_CAP_XSAVE 55
> > +#endif
> 
> Might make sense to have a separate KVM_CAP_XCR, just for consistency.
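
Either way, userspace would probe the capability before touching the new
state, along these lines (a userspace sketch, not part of the patch; kvm_fd
and vcpu_fd stand for the usual /dev/kvm and vcpu file descriptors):

	/* Sketch: only issue KVM_GET_XSAVE when the capability is present. */
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_XSAVE) > 0) {
		struct kvm_xsave xsave;

		if (ioctl(vcpu_fd, KVM_GET_XSAVE, &xsave) < 0)
			perror("KVM_GET_XSAVE");
	}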

