Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)


 



On Wed, Feb 07, 2018 at 11:26:14PM +0100, David Hildenbrand wrote:
> On 07.02.2018 16:31, Kashyap Chamarthy wrote:

[...]

> Sounds like a similar problem as in
> https://bugzilla.kernel.org/show_bug.cgi?id=198621
> 
> In short: there is no (live) migration support for nested VMX yet. So as
> soon as your guest is using VMX itself ("nVMX"), this is not expected to
> work.

Actually, live migration with nVMX _does_ work, as long as you have
_identical_ CPUs on both source and destination, i.e. use QEMU's
'-cpu host' for the L1 guests.  At least that's been the case in my
experience.  FWIW, I frequently use that setup in my test environments.
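
(By '-cpu host' I mean host CPU passthrough; in libvirt domain XML
terms, a minimal sketch of the relevant L1 guest config would look like
the snippet below -- this is illustrative, not copied from my setup:)

    <!-- L1 guest domain XML: pass the L0 host CPU through unchanged,
         so both L1s expose identical CPUs (and VMX) to the L2 guest -->
    <cpu mode='host-passthrough'/>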

Just to be quadruple sure, I did the test: migrating an L2 guest (with
non-shared storage) worked just fine.  (No 'oops'es, no stack traces,
no "kernel BUG" in `dmesg` or on the serial consoles of the L1s.  And I
can log in to the L2 guest on the destination L1 just fine.)
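
(A quick way to double-check from the source L1, assuming the same
connection URI as in the migrate command below, is simply:

    $ virsh -c qemu+tcp://root@f26-vm2/system list

which should show the migrated guest as 'running' on the destination.)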

Once I had password-less SSH between source and destination and a bit
of libvirt config set up, I ran the migrate command as follows:

    $ virsh migrate --verbose --copy-storage-all \
        --live cvm1 qemu+tcp://root@f26-vm2/system
    Migration: [100 %]
    $ echo $?
    0
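
(For completeness: the qemu+tcp transport needs libvirtd on the
destination listening on TCP.  The following is only a sketch of the
usual way to enable that on Fedora/RHEL-style systems; the file paths
and the auth_tcp="none" setting are assumptions for a trusted test
network, not a recommendation for production:)

    # /etc/libvirt/libvirtd.conf on the destination L1
    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"        # no authentication -- test setups only

    # /etc/sysconfig/libvirtd -- make libvirtd actually listen on TCP
    LIBVIRTD_ARGS="--listen"

    # pick up the new config
    $ systemctl restart libvirtd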

Full details:
https://kashyapc.fedorapeople.org/virt/Migrate-a-nested-guest-08Feb2018.txt

(At the end of the document above, I also posted the libvirt config and
the version details across L0, L1 and L2.  So this is a fully repeatable
test.)


-- 
/kashyap




