Re: virNetClientPtr leak in remote driver

On Mon, Aug 01, 2011 at 06:35:21PM +0200, Matthias Bolte wrote:
> 2011/8/1 Eric Blake <eblake@xxxxxxxxxx>:
> > On 07/28/2011 12:07 PM, Matthias Bolte wrote:
> >> 2011/7/27 Matthias Bolte <matthias.bolte@xxxxxxxxxxxxxx>:
> >>> doRemoteClose doesn't free the virNetClientPtr and this creates a
> >>> 260kb leak per remote connection. This happens because
> >>> virNetClientFree doesn't remove the last ref, because virNetClientNew
> >>> creates the virNetClientPtr with a refcount of 2.
> >>>
> >
> >> The memory leak I saw was due to virsh calling
> >> virEventRegisterDefaultImpl, but not calling virEventRunDefaultImpl.
> >> Because the event loop is initialized, the call to
> >> virNetSocketAddIOCallback in virNetClientNew succeeds. But since
> >> virsh doesn't drive the event loop, callbacks that were marked for
> >> deletion are never removed. In the end this leaks the virNetClientPtr
> >> used in the remote driver. I used virsh -c qemu:///system to test.
> >>
> >> I was able to fix this by calling virEventRunDefaultImpl after
> >> virConnectClose in virsh. But I don't think this is the correct
> >> general solution.
> >
> > Where do we stand with 0.9.4?  Is this something where we need the
> > general fix before the release, or is just the virsh hack to call
> > virEventRunDefaultImpl good enough for the release, or is this problem
> > not severe enough and we can release as-is?
> 
> virsh is a bit special here as it registers/initializes the default
> event loop but doesn't drive it properly; it only drives it for the
> console command. That's the problem in virsh. This can be avoided by
> calling virEventRunDefaultImpl after virConnectClose iff it's a remote
> connection; with other connection types virEventRunDefaultImpl might
> block. Therefore, we shouldn't be calling it in general after a
> virConnectClose in virsh.
> 
> If we assume that an application that registers an event loop will
> drive it properly, then this problem is not critical, as it doesn't
> exist. Only virsh is a special case that leaks 260kb per closed remote
> connection. If we assume that a typical virsh user uses a single
> connection per virsh invocation, then we can view this as a static
> (one-time) leak.

Yeah, this is a case I never thought of. The "easy" fix is to just
make virsh spawn a new thread to run the event loop in the background.
The "hard" fix is to make virsh I/O entirely event driven, so that
we don't just sit in a blocking read of stdin waiting for input,
but instead use the event loop to process stdin.
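
For the easy fix, something along these lines might do (just a rough,
untested sketch; the vsh* helper names are made up, and it assumes
virEventRegisterDefaultImpl has already been called at startup):

  #include <pthread.h>
  #include <libvirt/libvirt.h>

  /* Drive the default event loop forever in a background thread, so
   * that I/O callbacks marked for deletion actually get purged and the
   * remote driver's virNetClientPtr can drop its extra reference. */
  static void *
  vshEventLoopWorker(void *opaque)
  {
      while (1) {
          if (virEventRunDefaultImpl() < 0)
              break;
      }
      return NULL;
  }

  /* Call once right after virEventRegisterDefaultImpl() in main(). */
  static int
  vshStartEventLoop(void)
  {
      pthread_t thread;

      if (pthread_create(&thread, NULL, vshEventLoopWorker, NULL) != 0)
          return -1;
      return 0;
  }

The hard fix would instead register stdin with the event loop (e.g. via
virEventAddHandle) and dispatch commands from its callback instead of
blocking in a read.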

Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|



