Re: [libvirt-users] Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)

On 08/02/2018 14:57, Florian Haas wrote:
>>>     <feature policy='disable' name='vme'/>
>>>     <feature policy='disable' name='ss'/>
>>>     <feature policy='disable' name='f16c'/>
>>>     <feature policy='disable' name='rdrand'/>
>>>     <feature policy='disable' name='hypervisor'/>
>>>     <feature policy='disable' name='arat'/>
>>>     <feature policy='disable' name='tsc_adjust'/>
>>>     <feature policy='disable' name='xsaveopt'/>
>>>     <feature policy='disable' name='abm'/>
>>>     <feature policy='disable' name='aes'/>
>>>     <feature policy='disable' name='invpcid'/>
>>> </cpu>
>> Maybe one of these features is the root cause of the "messed up" state
>> in KVM. So disabling it also makes the L1 state "less broken".
> 
> Would you try a guess as to which of the above features is a likely culprit?

You're just getting lucky. :)

In fact, if you ever migrate or save an L1 guest while a nested (L2) VM is
running inside it, you get an unholy mixture of source L1 and source L2
state running on the destination *as L1* (because the destination doesn't
know it's running a nested guest!).  It just cannot work yet; sorry about
that!
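Since the breakage only bites when nesting is actually in use, a quick sketch
for checking whether the L0/L1 host has nested KVM enabled before attempting a
managed save (assumption: an Intel host; AMD exposes the same flag under
/sys/module/kvm_amd/parameters/nested instead):

```shell
# Sketch: report whether nested KVM is enabled on this host.
# If it is, avoid "virsh managedsave" on an L1 that still has an L2 running.
nested_param=/sys/module/kvm_intel/parameters/nested
if [ -r "$nested_param" ]; then
    echo "nested: $(cat "$nested_param")"   # "Y" or "1" means nesting is on
else
    echo "nested: kvm_intel not loaded"
fi
```

If nesting is on, the safe order of operations is to shut the L2 guest down
first and only then save or migrate L1.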

Paolo


