Re: Nested VMX security review

I am working on nVMX state save/restore, but I don't have the
resources to do the same for nSVM state.

As mentioned in an earlier thread, migration would be a lot easier if
we eliminated the vmcs01/vmcs02 distinction and L0 just used a single
VMCS for both. I do have a workaround, but it's a little ugly. I'd
love to discuss this with interested parties at KVM Forum next week.
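
For reference, the nested state that has to be captured and restored
across a migration, on top of the ordinary vCPU state, looks roughly
like the following. This is only a sketch to frame the discussion; the
names and layout are illustrative, not an existing KVM interface:

#include <stdint.h>

/*
 * Illustrative only: the extra per-vCPU state a nested-VMX
 * save/restore interface has to carry across a migration.
 */
struct nvmx_saved_state {
        uint64_t vmxon_ptr;          /* guest-physical address passed to VMXON */
        uint64_t current_vmptr;      /* guest-physical address of the current vmcs12 */
        uint8_t  in_guest_mode;      /* was L2 running when the vCPU was stopped? */
        uint8_t  nested_run_pending; /* VMLAUNCH/VMRESUME emulated, L2 not yet entered */
        uint8_t  vmcs12[4096];       /* L0's working copy of the vmcs12 contents */
};

The awkward part is that on the destination this has to be folded back
into whichever of vmcs01/vmcs02 is current, which is exactly why a
single-VMCS model would make migration simpler.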



On Mon, Aug 15, 2016 at 11:20 PM, Jan Kiszka <jan.kiszka@xxxxxxxxxxx> wrote:
> On 2016-08-16 02:23, Lars Bull wrote:
>> Hi, all. I work on the virtualization security team at Google. We'd
>> like to start a discussion on having nested VMX considered ready for
>> production, so that it can be enabled by default, its CVEs embargoed,
>> and its patches sent to stable. It's my understanding that
>> the lack of a security audit is one of the biggest reasons why nested
>> VMX is still experimental. We've done the following work on this
>> front:
>>
>> - Andy Honig conducted a security review of the nested VMX code base.
>> He found time-of-check/time-of-use (TOCTOU) issues, which have been addressed
>> with a VMCS caching change
>> (https://git.kernel.org/cgit/virt/kvm/kvm.git/commit/?id=4f2777bc97974b0df9276ee9a85155a9e27a5282).
>> He also found an issue that could allow the guest to access the host
>> x2APIC, for which a fix is pending
>> (https://www.mail-archive.com/linux-kernel@xxxxxxxxxxxxxxx/msg1204751.html).
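
The shape of the caching fix is worth sketching for anyone who hasn't
read the commit: instead of re-reading vmcs12 fields out of guest
memory between checking them and using them, L0 takes one private
snapshot when emulating VMLAUNCH/VMRESUME and validates/consumes only
that copy. A rough sketch of the idea; the helper and struct names
here are illustrative, not the actual KVM code:

#include <stddef.h>
#include <stdint.h>

#define VMCS12_SIZE 4096

/* Stand-in for the hypervisor's copy-from-guest-physical-memory
 * primitive; assumed here purely for illustration. */
int copy_from_guest_phys(uint64_t gpa, void *dst, size_t len);

struct nested_ctx {
        uint64_t current_vmptr;              /* guest-physical address of the vmcs12 */
        uint8_t  cached_vmcs12[VMCS12_SIZE]; /* L0-private snapshot */
};

/*
 * Snapshot the guest's vmcs12 once, when VMLAUNCH/VMRESUME is emulated.
 * Every later check, and every value merged into vmcs02, comes from the
 * snapshot, so the guest can no longer rewrite a field from another vCPU
 * after it has been validated but before it is used.
 */
static int snapshot_vmcs12(struct nested_ctx *nx)
{
        return copy_from_guest_phys(nx->current_vmptr, nx->cached_vmcs12,
                                    VMCS12_SIZE);
}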
>>
>> - I worked on fuzz testing the code. We have a suite of KVM fuzz tests
>> that we normally test with, covering surfaces such as IO, MMIO, MSRs,
>> and the instruction emulator. I ran this in a nested environment using
>> KVM on KVM and didn't encounter any serious problems on the host. I
>> also modified the L1 kernel to tweak bits in the VMCS control fields
>> for the L2 guest when handling exits, while the L2 guest was running
>> our standard fuzzing suite. This was able to find host memory
>> corruption with shadow VMCS enabled, which has now been fixed
>> (https://git.kernel.org/cgit/virt/kvm/kvm.git/commit/?id=2f1fe81123f59271bddda673b60116bde9660385).
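
To make the second experiment concrete: the instrumentation described
above amounts to flipping a bit in one of the L2 execution-control
fields on each exit that L1 handles. A minimal sketch of that idea,
assuming vmcs_read32()/vmcs_write32()-style accessors in the modified
L1 hypervisor (the accessor names are stand-ins; the field encodings
are the architectural ones from the SDM):

#include <stdint.h>
#include <stdlib.h>

/* Assumed VMCS accessors in the modified L1 hypervisor; stand-ins only. */
uint32_t vmcs_read32(unsigned long field);
void vmcs_write32(unsigned long field, uint32_t value);

/* Architectural VMCS field encodings (Intel SDM). */
#define CPU_BASED_VM_EXEC_CONTROL  0x4002
#define SECONDARY_VM_EXEC_CONTROL  0x401e

/*
 * Called from L1's exit handler while L2 runs the normal fuzzing suite:
 * flip one randomly chosen bit in one of the execution-control fields
 * before resuming L2.  Anything that merely breaks L2 is uninteresting;
 * the findings are the cases that corrupt or crash L0.
 */
static void fuzz_l2_exec_controls(void)
{
        unsigned long field = (rand() & 1) ? CPU_BASED_VM_EXEC_CONTROL
                                           : SECONDARY_VM_EXEC_CONTROL;

        vmcs_write32(field, vmcs_read32(field) ^ (1u << (rand() % 32)));
}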
>>
>> Our testing focused primarily on the security of the host from both
>> guest levels, rather than on the security of L1, and did not check for
>> correctness. We are fairly confident after this work that nested VMX
>> doesn't present a significant increase in risk for the host. We're
>> curious what the next steps should be in getting this considered
>> production-ready.
>
> Great job! Thanks a lot for moving this topic to the next level.
>
> I suppose the other remaining topic is saving/restoring nVMX-related
> state so that migration and a cleaner reset become possible, as well as
> inspection from userspace (gdb & Co.). I guess this will also ease
> further fuzz testing. But I don't know if that has to delay flipping the
> default of kvm_intel.nested.
>
> Jan
>
> --
> Siemens AG, Corporate Technology, CT RDA ITP SES-DE
> Corporate Competence Center Embedded Linux


