On Mon, Jul 30, 2012 at 08:08:31PM +0200, Bernd Schubert wrote:
> On 07/30/2012 07:33 PM, Bernd Schubert wrote:
> > Hello Stefan,
> >
> > Stefan Hajnoczi <stefanha <at> gmail.com> writes:
> >>
> >> On Wed, Jan 11, 2012 at 4:18 PM, Bernd Schubert
> >> <bernd.schubert <at> itwm.fraunhofer.de> wrote:
> >>> On 01/11/2012 05:04 PM, Stefan Hajnoczi wrote:
> >>>> Try pinging the host's IP address from inside the guest. Run tcpdump
> >>>> on the guest's tap interface from the host and observe whether or not
> >>>> you see any packets being sent from the guest.
> >>>
> >
> > Sorry for my terribly late reply. As usual I got distracted by too many
> > other things and then returned the hardware I was running the VMs on.
> > My new desktop system is better suited to running kvm, and I can easily
> > reproduce the problem now with 3.5 on both the host and guest side. So
> > it is not fixed in recent versions yet.
> >
> >>>
> >>> Seems ARP requests are still going out, but then don't come in:
> >>>
> >>> 17:16:21.202547 ARP, Reply 192.168.123.1 is-at 00:25:90:38:09:cd (oui
> >>> Unknown), length 28
> >>> 17:16:21.538724 ARP, Request who-has squeeze1 tell squeeze3, length 28
> >>> 17:16:21.539026 ARP, Reply squeeze1 is-at 52:54:00:12:34:11 (oui
> >>> Unknown), length 28
> >>> 17:16:22.200912 ARP, Request who-has 192.168.123.1 tell squeeze3, length 28
> >>
> >> Okay, so it seems networking from the tap device and beyond is fine.
> >>
> >>>> rmmod virtio_net inside the guest and then modprobe virtio_net again.
> >>>> See if network connectivity is restored (remember to rerun DHCP or
> >>>> whatever, if necessary).
> >>>
> >>> Yep, that makes it work again. But it is probably not the real solution ;)
> >>
> >> It's just another piece of information which helps debug this :). At
> >> least nothing has wedged itself into an unrecoverable state.
> >>
> >> When you said the problem happens without vhost, did you explicitly
> >> run vhost=off? Or did you just omit "vhost=on"?
> >
> > It was definitely off, and I can confirm that it also locks up with
> > both vhost=on and vhost=off on 3.5.
> >
> >> This sounds like a guest kernel/driver issue. I recommend testing
> >> git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git in
> >> the guest to see if this has already been fixed.
> >>
> >> If you have the -dbg RPMs installed it may be possible to insert a
> >> probe into the virtio_net kernel module and observe receive
> >> interrupts. This does require the right kernel CONFIG_ but you might
> >> already have it enabled:
> >>
> >> $ sudo perf probe --add skb_recv_done
> >> $ sudo perf record -e probe:skb_recv_done -a
> >> ...send some packets to the guest...
> >> ^C
> >> $ sudo perf script
> >>
> >> If you see no skb_recv_done events then the guest driver is not
> >> receiving a notification when packets are received.
> >>
> >> You can find more about how to use perf-probe(1) at
> >> http://blog.vmsplice.net/2011/03/how-to-use-perf-probe.html.
> >
> > Ah nice, I would have used systemtap, but I always wanted to check how
> > to do it with perf :)
> >
> > So once the virtio NIC has locked up, I don't get any events from it
> > anymore - until I remove/re-insert the virtio module (including
> > ifdown/ifup). I will try to find some time later this week to look
> > into it again. Any further ideas how to proceed (I haven't even
> > checked yet how virtio works at all...)?
>
> I took a quick glance at where skb_recv_done is registered at all and
> traced it back to vp_find_vqs(). Looking into that function I noticed
> MSI, so I tried to boot with pci=nomsi. And indeed I guessed right:
> with pci=nomsi I don't get any lockups anymore.
> Am I the only one who usually boots kvm-qemu with MSI enabled?
>
> Cheers,
> Bernd

No :) I am guessing it has to do with OOM handling in the guest - it is
tested very little, but maybe your guest is such that the atomic pool
gets exhausted for some reason.

Could you please check whether refill_work runs, by tracing it?
This is our OOM handler.

-- 
MST
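For reference, refill_work can be traced with the same perf-probe recipe Stefan gave above for skb_recv_done. This is only a sketch: it assumes the guest kernel has kprobe/perf events support enabled and that symbols for virtio_net are available (e.g. via the -dbg packages); run it inside the guest as root.

```shell
# Place a kprobe on virtio_net's refill_work (the delayed work that
# replenishes RX buffers after an allocation failure, i.e. the OOM path).
sudo perf probe --add refill_work

# Record the probe system-wide; let it run until the NIC locks up,
# then send some packets to the guest and stop with ^C.
sudo perf record -e probe:refill_work -a

# If refill_work ran, its hits show up in the recorded trace.
sudo perf script

# Remove the probe again when done.
sudo perf probe --del refill_work
```

If perf script shows refill_work firing while skb_recv_done stays silent, that would point at the guest sitting in the RX refill/OOM path rather than missing MSI interrupts outright.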