On Wed, Apr 11, 2012 at 6:14 PM, Guido Winkelmann <guido-kvml@xxxxxxxxxxxxxxxxx> wrote:
> Hi,
>
> Nested virtualization on Intel does not work for me with qemu-kvm. As soon as
> the third layer OS (the second virtualised one) starts the Linux kernel, the
> entire second layer freezes up. The last thing I can see on the console of the
> third layer system before it freezes is "Decompressing Linux... " (no "done",
> though). When starting without the nofb option, the kernel still manages to
> set the screen resolution before freezing.
>
> Grub/Syslinux still work, but are extremely slow.
>
> Both the first layer OS (i.e. the one running on bare metal) and the second
> layer OS are 64-bit Fedora 16 with kernel 3.3.1-3.fc16.x86_64. On both the
> first and the second layer OS, the kvm_intel module is loaded with the
> nested=Y parameter. (I've also tried nested=N in the second layer; it didn't
> change anything.)
> Qemu-kvm was originally the Fedora-shipped 0.14, but I have since upgraded to
> 1.0. (Using rpmbuild with the specfile and patches from
> http://pkgs.fedoraproject.org/gitweb/?p=qemu.git;a=blob;f=qemu.spec;hb=HEAD)
>
> The second layer machine has this CPU specification in libvirt on the first
> layer OS:
>
> <cpu mode='custom' match='exact'>
>   <model fallback='allow'>Nehalem</model>
>   <feature policy='require' name='vmx'/>
> </cpu>
>
> which results in this qemu command line (from libvirt's logs):
>
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm
>   -S -M pc-0.15 -cpu kvm64,+lahf_lm,+popcnt,+sse4.2,+sse4.1,+ssse3,+vmx
>   -enable-kvm -m 8192 -smp 8,sockets=8,cores=1,threads=1 -name vshost1
>   -uuid 192b8c4b-0ded-07aa-2545-d7fef4cd897f -nodefconfig -nodefaults
>   -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vshost1.monitor,server,nowait
>   -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -no-acpi
>   -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>   -drive file=/data/vshost1.img,if=none,id=drive-virtio-disk0,format=qcow2
>   -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>   -drive file=/data/Fedora-16-x86_64-netinst.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
>   -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>   -netdev tap,fd=21,id=hostnet0,vhost=on,vhostfd=22
>   -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:84:7d:46,bus=pci.0,addr=0x3
>   -netdev tap,fd=23,id=hostnet1,vhost=on,vhostfd=24
>   -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:84:8d:46,bus=pci.0,addr=0x4
>   -vnc 127.0.0.1:0,password -k de -vga cirrus
>   -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
>
> I have also tried some other combinations for the cpu element, like changing
> the model to core2duo and/or including all the features reported by libvirt's
> capabilities command.
> The third layer machine does not have a cpu element in libvirt, and its
> command line looks like this:
>
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm
>   -S -M pc-0.14 -enable-kvm -m 8192 -smp 4,sockets=4,cores=1,threads=1 -name gentoo
>   -uuid 3cdcc902-4520-df25-92ac-31ca5c707a50 -nodefconfig -nodefaults
>   -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/gentoo.monitor,server,nowait
>   -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-acpi
>   -drive file=/data/gentoo.img,if=none,id=drive-virtio-disk0,format=qcow2
>   -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
>   -drive file=/data/install-amd64-minimal-20120223.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
>   -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
>   -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24
>   -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:84:6d:46,bus=pci.0,addr=0x3
>   -usb -vnc 127.0.0.1:0,password -k de -vga cirrus
>   -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>
> The third layer OS is a recent Gentoo minimal install (amd64), but somehow I
> don't think that matters at this point...
>
> The metal is a Dell PowerEdge R710 server with two Xeon E5520 CPUs. I've tried
> updating the machine's BIOS and other firmware to the latest versions. That
> took a lot of time and a lot of searching on Dell websites, but didn't change
> anything.
>
> Does anyone have any idea what might be going wrong here or how I could debug
> this further?

Interesting. I tried this recently (a couple of months ago) with this
configuration:

==================================================
1/ Physical host (host hypervisor / bare metal)
   Config: Intel(R) Xeon(R) CPU (4 cores/socket); 10GB memory; CPU freq 2GHz;
   running latest Fedora 16 (minimal footprint, @core only, plus virt
   packages); x86_64; kernel-3.1.8-2.fc16.x86_64

2/ Regular guest (or guest hypervisor)
   Config: 4GB memory; 4 vCPUs; 20GB raw disk image with cache='none' to have
   decent I/O; minimal (@core only) F16; same virt packages as the physical
   host; x86_64

3/ Nested guest (guest installed inside the regular guest)
   Config: 2GB memory; 1 vCPU; minimal (@core only) F16; x86_64
==================================================

Here are my complete notes on nested virtualization with Intel --
http://kashyapc.wordpress.com/2012/01/14/nested-virtualization-with-kvm-intel/

My result: I was able to ssh into the nested guest (the guest installed inside
the regular guest), but after a reboot the nested guest loses its IP, rendering
it inaccessible. (Info: the regular guest has a bridged IP, and the nested
guest has a NATed IP.) Refer to the comments in the above post for some more
discussion. I haven't yet tried the suggestion there of "updating your system
firmware and disabling VT for Direct I/O Access if you are able in the
firmware", and I wonder how turning that off could alleviate the problem.

My AMD notes, where nested virtualization worked completely, are here --
http://kashyapc.wordpress.com/2012/01/18/nested-virtualization-with-kvm-and-amd/

Thanks, Kashyap

>
> Guido
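
For what it's worth, here is a minimal sketch of the sanity checks that usually
confirm nested VMX is actually wired up at each layer before digging deeper. It
assumes the stock kvm_intel module and a root shell; the modprobe.d file name
below is just an example, and reloading the module only works with all guests
shut down.

  # On the first layer (bare metal): make nested VMX persistent and confirm it
  echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
  modprobe -r kvm_intel && modprobe kvm_intel   # requires that no guests are running
  cat /sys/module/kvm_intel/parameters/nested   # should print Y (or 1, depending on kernel)

  # Inside the second layer guest: verify the vmx flag reached the guest CPU
  grep -c vmx /proc/cpuinfo                     # should be greater than 0
  lsmod | grep kvm                              # kvm_intel should be loaded
  dmesg | grep -i kvm                           # watch for VMX/nested complaints

If the vmx flag never shows up inside the second layer guest, kvm_intel cannot
load there and the third layer cannot use hardware virtualization at all.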