Re: [PATCH 2/2] KVM: x86: Add kvm_x86_ops callback to allow VMX to stash away CR4.VMXE

On Wed, Mar 27, 2019 at 01:49:22PM -0700, Jim Mattson wrote:
> On Wed, Mar 27, 2019 at 1:34 PM Sean Christopherson
> <sean.j.christopherson@xxxxxxxxx> wrote:
> >
> > On Wed, Mar 27, 2019 at 01:28:09PM -0700, Jim Mattson wrote:
> > > On Wed, Mar 27, 2019 at 1:20 PM Sean Christopherson
> > > <sean.j.christopherson@xxxxxxxxx> wrote:
> > > >
> > > > On Wed, Mar 27, 2019 at 12:50:27PM -0700, Jim Mattson wrote:
> > > > > Where does this state get saved/restored when suspending/resuming a
> > > > > VCPU that's in SMM?
> > > >
> > > > cr4_vmxe is handled by vmx_pre_smi_save_state() and
> > > > vmx_post_rsm_load_state().
> > > >
> > > > The existing vmxon and guest_mode are handled by post_smi_save_state()
> > > > and pre_rsm_load_state(), which were previously pre_{enter,leave}_smm().
> > > >
> > > > Ideally everything in the struct would be managed together, e.g. in
> > > > vmx_pre_smi_save_state() and vmx_post_rsm_load_state(), but I didn't
> > > > want to touch that house of cards at this time.
> > > >
> > > > And even more ideally, post_smi_save_state() and pre_rsm_load_state()
> > > > would be dropped altogether, e.g. the SVM code would also be reworked to
> > > > use pre_smi_save_state() and post_rsm_load_state(), but my SVM knowledge
> > > > is nowhere near sufficient to judge if that's even remotely feasible.
> > >
> > > No; I meant which of the myriad KVM_GET_* and KVM_SET_* ioctls provide
> > > a mechanism to save and restore these tidbits of vCPU state (e.g. for
> > > live migration)?
> >
> > Ah fudge.  KVM_{GET,SET}_NESTED_STATE, i.e. vmx_{get,set}_nested_state().
> > This patch would also need to add and handle KVM_STATE_NESTED_SMM_CR4_VMXE.
> 
> Sorry. I haven't been following the evolution of those ioctls since I
> first proposed them and Paolo shut me down. I see the handling of
> "nested SMM state" now, and I agree that this would be the place to
> handle the internal stash of CR4.VMXE. However, vmx_set_nested_state
> currently allows only two bits to be set in vmx.smm.flags:
> KVM_STATE_NESTED_SMM_GUEST_MODE and KVM_STATE_NESTED_SMM_VMXON. Does
> userspace have to opt-in to KVM_STATE_NESTED_SMM_CR4_VMXE through a
> new KVM_CAP?

Hmm.  I think we could squeak by without it on a technicality.  Nested
VMX was only enabled by default after the bug was introduced.  Any
migration from an older KVM that "officially" supports nested VMX will
crash the guest after migration regardless of the target KVM, since KVM
will signal a #GP on RSM due to CR4.VMXE=1 in the SMM save state area.
I.e. it wouldn't break anything that wasn't already broken.

Toggling HF_SMM_MASK wouldn't suffer the same fate, i.e. new KVMs would
happily accept migration from broken KVMs.  That being said, the only
way that can happen is if migration is triggered during the first SMI
with CR4.VMXE=1, since the guest will have already crashed due to RSM
taking a #GP in the broken pre-migration KVM.  Given that this is the
first bug report for RSM with CR4.VMXE=1, I'm going to go out on a limb
and say that the migration corner case is extremely unlikely, to put it
mildly.

So I think we're good?  Maybe add an errata in the docs to note the
extremely unlikely corner case?
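For reference, the vmx.smm.flags validation in vmx_set_nested_state()
amounts to rejecting any flag bit the kernel doesn't recognize, so
adding the new flag is a one-line change to the accepted mask.  A
minimal standalone sketch (the GUEST_MODE/VMXON values match
include/uapi/linux/kvm.h; the CR4_VMXE flag and its value are an
assumption, since the patch hasn't defined it yet):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Real values from include/uapi/linux/kvm.h. */
#define KVM_STATE_NESTED_SMM_GUEST_MODE 0x00000001
#define KVM_STATE_NESTED_SMM_VMXON      0x00000002

/* Hypothetical new flag; the name and value are assumptions. */
#define KVM_STATE_NESTED_SMM_CR4_VMXE   0x00000004

/*
 * Sketch of the smm.flags check in vmx_set_nested_state(): reject the
 * state if userspace sets any bit outside the known set.
 */
int check_smm_flags(uint16_t flags)
{
	if (flags & ~(KVM_STATE_NESTED_SMM_GUEST_MODE |
		      KVM_STATE_NESTED_SMM_VMXON |
		      KVM_STATE_NESTED_SMM_CR4_VMXE))
		return -EINVAL;
	return 0;
}
```

With this shape, old userspace (which never sets the new bit) keeps
working unchanged, which is why the KVM_CAP question above is about
policy rather than mechanism.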
