Hi All
I have to report that there is a memory leak in the latest version of gluster:
gluster: 3.7.8
libvirt 1.3.1
The memory leak occurs when starting a domain (virsh start DOMAIN) that
accesses its drive via libgfapi (although the leak is much smaller than with gluster 3.5.X).
I believe libvirt itself uses libgfapi only to check the existence of a
disk; it calls glfs_init and glfs_fini when doing this check.
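For reference, that init/fini cycle can be exercised outside libvirt with a
small libgfapi program. This is only a sketch under my assumptions (volume
'pool', host X.X.X.X and image name taken from the drive definition below;
that libvirt's existence check amounts to a stat) and is not libvirt's
actual code:

/* glfs-leak.c -- minimal sketch of the glfs_init/glfs_fini cycle.
 * Volume "pool", host X.X.X.X and the image path come from the drive
 * definition below; the stat as "existence check" is my assumption.
 */
#include <glusterfs/api/glfs.h>
#include <sys/stat.h>

int main(void)
{
    int i;

    /* repeat the cycle so per-iteration growth is visible under valgrind */
    for (i = 0; i < 10; i++) {
        glfs_t *fs = glfs_new("pool");
        if (!fs)
            return 1;

        glfs_set_volfile_server(fs, "tcp", "X.X.X.X", 24007);

        if (glfs_init(fs) == 0) {
            struct stat st;
            glfs_stat(fs, "disk-sys.img", &st); /* the existence check */
        }

        /* anything glfs_fini() fails to release leaks once per cycle */
        glfs_fini(fs);
    }
    return 0;
}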
When the drive is accessed as a file (via a Gluster FUSE mount), there is
no memory leak when starting the domain.
My drive definition (libgfapi):
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writethrough' iothread='1'/>
  <source protocol='gluster' name='pool/disk-sys.img'>
    <host name='X.X.X.X' transport='rdma'/>
  </source>
  <blockio logical_block_size='512' physical_block_size='32768'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
</disk>
(Despite transport='rdma', the connection is still via TCP; defining 'tcp'
here doesn't make any difference.)
I first reported this to the libvirt developers, but they attribute the leak to gluster.
valgrind details (libgfapi):
# valgrind --leak-check=full --show-reachable=yes \
    --child-silent-after-fork=yes libvirtd --listen 2> libvirt-gfapi.log
In another console:
virsh start DOMAIN
...wait...
virsh shutdown DOMAIN
...wait, then stop valgrind/libvirtd
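To isolate the leak from libvirtd itself, the same kind of valgrind run
can be pointed at the sketch above (hypothetical file name glfs-leak.c;
glusterfs-api is the pkg-config name shipped with gluster):

# gcc glfs-leak.c -o glfs-leak $(pkg-config --cflags --libs glusterfs-api)
# valgrind --leak-check=full --show-reachable=yes ./glfs-leak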
valgrind log:
==5767== LEAK SUMMARY:
==5767== definitely lost: 19,666 bytes in 96 blocks
==5767== indirectly lost: 21,194 bytes in 123 blocks
==5767== possibly lost: 2,699,140 bytes in 68 blocks
==5767== still reachable: 986,951 bytes in 15,038 blocks
==5767== suppressed: 0 bytes in 0 blocks
==5767==
==5767== For counts of detected and suppressed errors, rerun with: -v
==5767== ERROR SUMMARY: 96 errors from 96 contexts (suppressed: 0 from 0)
full log:
http://195.191.233.1/libvirt-gfapi.log
http://195.191.233.1/libvirt-gfapi.log.bz2
Best regards
Piotr Rybicki