Re: Found mem leak in libvirtd, need help to debug

I still think these are libgfapi leaks; all the definitely lost bytes
come from the library.

==6532== 3,064 (96 direct, 2,968 indirect) bytes in 1 blocks are definitely lost in loss record 1,106 of 1,142
==6532==    at 0x4C2C0D0: calloc (vg_replace_malloc.c:711)
==6532==    by 0x10701279: __gf_calloc (mem-pool.c:117)
==6532==    by 0x106CC541: xlator_dynload (xlator.c:259)
==6532==    by 0xFC4E947: create_master (glfs.c:202)
==6532==    by 0xFC4E947: glfs_init_common (glfs.c:863)
==6532==    by 0xFC4EB50: glfs_init@@GFAPI_3.4.0 (glfs.c:916)
==6532==    by 0xF7E4A33: virStorageFileBackendGlusterInit (storage_backend_gluster.c:625)
==6532==    by 0xF7D56DE: virStorageFileInitAs (storage_driver.c:2788)
==6532==    by 0xF7D5E39: virStorageFileGetMetadataRecurse (storage_driver.c:3048)
==6532==    by 0xF7D6295: virStorageFileGetMetadata (storage_driver.c:3171)
==6532==    by 0x1126A2B0: qemuDomainDetermineDiskChain (qemu_domain.c:3179)
==6532==    by 0x11269AE6: qemuDomainCheckDiskPresence (qemu_domain.c:2998)
==6532==    by 0x11292055: qemuProcessLaunch (qemu_process.c:4708)

Care to report it to them?

Of course - I will.

But are you sure there is no need to call glfs_fini() after the qemu
process is launched? Are all of those resources still needed in libvirt?

I understand that libvirt needs to check the presence and other
properties of the storage, but why keep it after qemu is launched?

We call glfs_fini(). And that's the problem: it does not free everything
that glfs_init() allocated. Hence the leaks. In fact, every time we call
glfs_init() we print a debug message from
virStorageFileBackendGlusterInit(), which wraps it, and then another
debug message from virStorageFileBackendGlusterDeinit() when we call
glfs_fini(). So if you set up debug logs, you can check whether our init
and fini calls match.
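
For illustration, here is a minimal standalone sketch of that same
init/fini pairing, calling libgfapi directly instead of going through
libvirt (the volume name "pool", the host "SERVER_IP" and the default
glusterd port 24007 are placeholders borrowed from the qemu-img example
below, not values confirmed in this thread). Running it under valgrind
should show the same definitely-lost records even though every
glfs_init() is matched by a glfs_fini():

/* hypothetical reproducer, not libvirt code; build with:
 * gcc repro.c -o repro -lgfapi */
#include <glusterfs/api/glfs.h>
#include <stdio.h>

int main(void)
{
    glfs_t *fs = glfs_new("pool");          /* placeholder volume name */
    if (!fs)
        return 1;

    /* placeholder host; 24007 is the default glusterd port */
    if (glfs_set_volfile_server(fs, "tcp", "SERVER_IP", 24007) != 0 ||
        glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    /* matched pair, just like libvirt's Init/Deinit wrappers; anything
     * valgrind still reports as definitely lost was allocated by
     * glfs_init() and not released by glfs_fini() */
    glfs_fini(fs);
    return 0;
}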

Thanks Michal, you are right.

The leak still exists in the newest gluster, 3.7.8.

There is an even simpler case that shows this memleak: run valgrind on:

qemu-img info gluster://SERVER_IP:0/pool/FILE.img

==6100== LEAK SUMMARY:
==6100==    definitely lost: 19,846 bytes in 98 blocks
==6100==    indirectly lost: 2,479,205 bytes in 182 blocks
==6100==      possibly lost: 240,600 bytes in 7 blocks
==6100==    still reachable: 3,271,130 bytes in 2,931 blocks
==6100==         suppressed: 0 bytes in 0 blocks

So it's definitely gluster's fault.
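
(For anyone wanting to reproduce this: the summary above is what
valgrind prints with its default --leak-check=summary; to get per-record
stack traces like the libvirtd one earlier in the thread, something like

valgrind --leak-check=full qemu-img info gluster://SERVER_IP:0/pool/FILE.img

should work. Both are standard valgrind options, nothing
gluster-specific.)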

I've just reported it on gluster-devel@

Best regards
Piotr Rybicki



