On Tue, Oct 12 2021 at 19:36, Paolo Bonzini wrote:
> On 12/10/21 02:00, Thomas Gleixner wrote:
>>
>> -	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
>> -		memset(guest_xsave, 0, sizeof(struct kvm_xsave));
>> -		fill_xsave((u8 *) guest_xsave->region, vcpu);
>> -	} else {
>> -		memcpy(guest_xsave->region,
>> -			&vcpu->arch.guest_fpu->state.fxsave,
>> -			sizeof(struct fxregs_state));
>> -		*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] =
>> -			XFEATURE_MASK_FPSSE;
>> -	}
>
> After the patch, this final assignment is not done in the else case:

Doh.

>> +
>> +	if (cpu_feature_enabled(X86_FEATURE_XSAVE)) {
>> +		__copy_xstate_to_uabi_buf(mb, &kstate->xsave, pkru,
>> +					  XSTATE_COPY_XSAVE);
>> +	} else {
>> +		memcpy(&ustate->fxsave, &kstate->fxsave, sizeof(ustate->fxsave));
>> +	}
>> +}
>
> This leaves the xstate_bv set to 0 instead of XFEATURE_MASK_FPSSE.
> Resuming a VM then fails if you save on a non-XSAVE machine and restore
> it on an XSAVE machine.

Yup.

> The memset(guest_xsave, 0, sizeof(struct kvm_xsave)) also is not
> reproduced, you can make it unconditional for simplicity; this is not a
> fast path.

Duh, I should have mentioned that in the changelog. The buffer is
allocated with kzalloc(), so the memset is redundant, right?

Thanks,

        tglx
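
P.S.: Untested sketch of the copy helper with the missing FPSSE marking
restored. The function name and signature are not visible in the quoted
hunk, so they are assumptions based on the local variables used there
(kstate, ustate, mb, pkru); the one-line addition at the end of the else
branch is the actual point:

	static void copy_fpstate_to_kvm_uabi(struct fpu *fpu, void *buf,
					     unsigned int size, u32 pkru)
	{
		union fpregs_state *kstate = &fpu->state;
		union fpregs_state *ustate = buf;
		struct membuf mb = { .p = buf, .left = size };

		if (cpu_feature_enabled(X86_FEATURE_XSAVE)) {
			__copy_xstate_to_uabi_buf(mb, &kstate->xsave, pkru,
						  XSTATE_COPY_XSAVE);
		} else {
			memcpy(&ustate->fxsave, &kstate->fxsave,
			       sizeof(ustate->fxsave));
			/*
			 * Mark FP/SSE as present in the uABI header so the
			 * state is restorable on an XSAVE capable host.
			 */
			ustate->xsave.header.xfeatures = XFEATURE_MASK_FPSSE;
		}
	}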