Hi,

I'm a contributor to the Proxmox 2 distribution. We use the latest qemu-kvm git version, and users report guest hangs at udev start, during virtio device initialization.

http://forum.proxmox.com/threads/9057-virtio-net-crashing-after-upgrade-to-proxmox-2-0
(screenshots are available in the forum thread)

The hang occurs if we have:
- 5 or more virtio disks, or
- 4 virtio disks and 1 or more virtio nics.

working guests
--------------
- guests with a 2.6.32 kernel, like Debian Squeeze, boot fine
- Debian Wheezy with the Squeeze 2.6.32 kernel boots fine.

non working guests
------------------
- Gentoo with 3.0.17, 3.1.6, 3.2.1 or 3.2.12 kernels hangs at udev start.
- CentOS 6.2 with its 2.6.32 + backported patches kernel also hangs.
- Debian Wheezy with a 3.2 kernel hangs as well.

The same guests/kernels boot fine with qemu-kvm 0.15, so I can't tell whether it's a kernel problem or a qemu-kvm problem.

command line sample:

/usr/bin/kvm -id 100 -chardev socket,id=monitor,path=/var/run/qemu-server/100.mon,server,nowait -mon chardev=monitor,mode=readline -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -usbdevice tablet -name centos-6.2 -smp sockets=1,cores=4 -nodefaults -boot menu=on -vga cirrus -localtime -k en-us -drive file=/dev/disk5/vm-100-disk-1,if=none,id=drive-virtio3,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio3,id=virtio3,bus=pci.0,addr=0xd -drive file=/dev/disk3/vm-100-disk-1,if=none,id=drive-virtio1,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb -drive if=none,id=drive-ide2,media=cdrom,aio=native -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=/dev/disk2/vm-100-disk-1,if=none,id=drive-virtio0,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=102 -drive file=/dev/disk4/vm-100-disk-1,if=none,id=drive-virtio2,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio2,id=virtio2,bus=pci.0,addr=0xc -m 8192
-netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,vhost=on -device virtio-net-pci,mac=6A:A3:E9:EA:51:17,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

I tried with and without vhost, and with different PCI addresses, with the same results.

tests made
----------
- 3 virtio disks + 1 virtio-net = OK
- 3 virtio disks + 2 virtio-net = OK
- 3 virtio disks + 1 scsi (lsi) disk + 1 virtio-net = OK
- 3 virtio disks + 1 scsi (lsi) disk + 2 virtio-net = OK
- 4 virtio disks + 1 virtio-net = NOK (hang at net init on the virtio-net)
- 4 virtio disks + 1 e1000 = OK
- 4 virtio disks + 1 e1000 + 1 virtio-net = NOK (hang at net init on the virtio-net)
- 5 virtio disks + 1 e1000 = NOK (udevadm settle timeout on disk no. 5, which becomes unusable)
- 5 virtio disks + 2 virtio-net = NOK (udevadm settle timeout on disk no. 5 + hang on the virtio-net)
- 5 virtio disks + 3 virtio-net = NOK (udevadm settle timeout on disk no. 5 + hang on the first virtio-net)

Can someone reproduce the problem?

Best Regards,

Alexandre Derumier
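P.S. For anyone who wants to reproduce this without a Proxmox setup, the failing configuration can be reduced to a sketch like the one below. The image paths, sizes and memory value are placeholders I picked for illustration, not the original setup; the loop just makes the virtio disk count easy to vary.

```shell
#!/bin/sh
# Reproducer sketch: build a kvm command line with N virtio disks.
# The /tmp/diskN.raw paths are placeholders; create them first, e.g.:
#   qemu-img create -f raw /tmp/disk1.raw 1G   (and so on up to disk5)
N=5
ARGS=""
i=0
while [ "$i" -lt "$N" ]; do
    n=$((i + 1))
    ARGS="$ARGS -drive file=/tmp/disk$n.raw,if=none,id=drive-virtio$i,aio=native,cache=none"
    ARGS="$ARGS -device virtio-blk-pci,drive=drive-virtio$i,id=virtio$i"
    i=$n
done
# With N=5 the guest hangs at udev start in my tests; with N=3 it boots fine.
echo /usr/bin/kvm -m 1024 -nodefaults -vga cirrus "$ARGS"
```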