Re: [PATCH v2 3/6] KVM: nVMX: fix SMI injection in guest mode

On Wed, Sep 20, 2017 at 7:02 PM, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> On 20/09/2017 15:05, Ladi Prosek wrote:
>> On Wed, Sep 20, 2017 at 12:14 PM, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>>> On 20/09/2017 10:52, Ladi Prosek wrote:
>>>>> Therefore, storing such hidden state or a subset of it in
>>>>> vmx->nested.smm is merely a convenience that lets you keep
>>>>> vmx->nested.vmxon == false while in SMM.  This is different from nSVM,
>>>>> and I think it's an acceptable difference for the sake of convenience.
>>>> So for nSVM, you'd save to and restore from the state save area? I can
>>>> certainly do that, but it would be hard to guarantee that KVM won't
>>>> blow up if the SMM handler pokes at these fields. It feels safer to
>>>> keep the state internal, or to save to the area but not restore from
>>>> it. I know this comment belongs to the nSVM patch, but it looks
>>>> similar to your suggestion to protect CR4.VMXE on Intel. If we want
>>>> to prevent SMM from wreaking havoc by trying to enable VMX, shouldn't
>>>> we use the same measures on AMD and prevent it from switching the
>>>> VMCB while the nested guest is running? Thanks!
>>>
>>> The difference is that on AMD you're actually allowed to do that.  The
>>> SMM handler can also poke the general purpose registers or even the
>>> descriptor cache, so I think we should stay close to what the manual
>>> says---even if it does two opposite things on Intel vs. AMD.
>>
>> The AMD manual seems to list the SVM fields as Read-Only.
>> "Software should not modify offsets specified as read-only or
>> reserved, otherwise unpredictable results can occur."
>
> In practice, that means "otherwise you'll get exactly what you think
> you'll get, with possibly entertaining effects".  However, this is not
> relevant: the point is that on AMD the SMM handler can do VMRUN, while
> on VMX it cannot even do VMXON.

Understood.
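For reference, the convenience discussed earlier (forcing vmx->nested.vmxon to false while in SMM by stashing the pre-SMI values in a small vmx->nested.smm substruct, then restoring them on RSM) can be sketched as a toy model. This is illustrative only, not the actual KVM code; the field and function names are assumptions:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the hidden nested-VMX state around SMM transitions.
 * Not real KVM code; names are illustrative. */
struct nested_vmx_model {
	bool vmxon;        /* is L1 in VMX operation? */
	bool guest_mode;   /* was L2 running? */
	struct {
		bool vmxon;
		bool guest_mode;
	} smm;             /* stashed on SMI, restored on RSM */
};

/* On SMI: stash the nested state and leave VMX operation, so the SMM
 * handler sees vmxon == false and cannot even do VMXON (per the SDM). */
static void model_enter_smm(struct nested_vmx_model *n)
{
	n->smm.vmxon = n->vmxon;
	n->smm.guest_mode = n->guest_mode;
	n->vmxon = false;
	n->guest_mode = false;
}

/* On RSM: restore the stashed state and clear the stash. */
static void model_leave_smm(struct nested_vmx_model *n)
{
	n->vmxon = n->smm.vmxon;
	n->guest_mode = n->smm.guest_mode;
	n->smm.vmxon = false;
	n->smm.guest_mode = false;
}
```

The point of keeping this internal rather than in the SMM state save area is exactly the one raised above: the SMM handler can never poke at it.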

> So on AMD the handler can change the active VMCB (via VMRUN, which
> writes to svm->nested.vmcb).  On RSM however you must get back to L2
> using the VMCB stored in the SMM state save area.  This is exactly what
> patch 6 does but, because AMD doesn't have hidden registers outside
> guest mode, I think restoring from the SMM state save area is more in
> line with the spirit of AMD's extensions.

Agreed that it's more in line with the spirit. In general, I would
lean toward being defensive: if supporting a state transition (like an
SMM handler changing the current VMCB) is not mandated by the CPU
spec, I wouldn't add it. Not supporting it means less attack surface
and less testing and maintenance.

>> Would this convince you to keep patch 6 as is? Saving and restoring
>> from the save state area eliminates a bit of internal state but it
>> adds complexity to other places (need to remember smbase before em_rsm
>> overwrites it, for example). It's also a bit faster to restore from
>> internal than from guest memory and it would match VMX.
>
> What if you instead reached the kvm_x86_ops post_leave_smm callback from
> rsm_load_state_64 (via a new emulator op)?

Unfortunately that's too early for the post_leave_smm callback to run.
I'll either split post_leave_smm into two parts, one running within
rsm_load_state_64 and the other at the end of emulation, or I'll just
keep a bit of state in x86_emulate_ctxt and keep a single callback.
Thanks!


