On 21.04.2010, at 00:29, Leszek Urbanski wrote:

> Hi,
>
> this is a follow-up to bug 2989366:
>
> https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2989366&group_id=180599
>
> after extensive debugging with the guys on #kvm it turns out that the leak is
> in the qemu-kvm userland process, in virtio-blk.
>
> A summary of my setup is described in the bug report above.
>
> The affected guests have a common load profile: frequent sequential I/O on
> large (~2 GB) files.
>
> I tried switching off or changing almost all options on my qemu command
> line, and the only option that makes a difference is -drive if=virtio.
>
> When an affected guest is run with virtio drives, the qemu-kvm process starts
> leaking immediately after startup and grows (for the most heavily leaking
> guests) by ~1 GB RSS every ten hours, and keeps growing until OOM.
>
> With -drive if=ide or scsi, it doesn't leak at all.
>
> A diff of /proc/<pid>/maps of an affected qemu-kvm at startup and after
> 1.5 hrs:
>
> -039b9000-5ccd0000 rw-p 00000000 00:00 0
> +039b9000-65803000 rw-p 00000000 00:00 0
>
> (a heap leak?)
>
> I'm willing to debug further. The problem is 100% reproducible.

That certainly sounds like something malloc()'ed. It'd be great to send this through valgrind (a rough sketch of such an invocation is below). Thanks to KVM the guest code still runs natively, so the slowdown isn't _that_ big because of it. I'm also not a valgrind expert, but IIRC it has its own memory allocation module.

Alex
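
A rough sketch of such a valgrind run, for reference. The qemu-kvm path, guest image name, memory size and log file name below are placeholders, not the reporter's actual setup; the virtio drive option only mirrors the symptom described above:

  # Hypothetical invocation: wrap the qemu-kvm process in valgrind's memcheck
  # tool so the leaked allocations are reported when the guest shuts down.
  # Guest code still runs natively under KVM; only qemu's own userland code
  # is slowed down by memcheck.
  valgrind --tool=memcheck --leak-check=full --track-origins=yes \
      --log-file=qemu-valgrind.log \
      qemu-kvm -m 1024 -drive file=guest.img,if=virtio

After reproducing the leak and shutting the guest down, the leak summary should end up in qemu-valgrind.log.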