Re: libgfapi libvirt memory leak version 3.7.8

Hi

On 2016-02-12 at 07:04, Soumya Koduri wrote:
Hi Piotr,

Could you apply below gfAPI patch and check the valgrind output -

    http://review.gluster.org/13125

I tried both patches, on the client and on my 2 bricks, and even recompiled qemu. No change - it still leaks (although a few bytes less).

running valgrind on:

qemu-img info gluster://SERVER_IP:0/pool/FILE.img

==4549== LEAK SUMMARY:
==4549==    definitely lost: 19,441 bytes in 96 blocks
==4549==    indirectly lost: 2,478,511 bytes in 177 blocks
==4549==      possibly lost: 240,600 bytes in 7 blocks
==4549==    still reachable: 3,271,130 bytes in 2,931 blocks
==4549==         suppressed: 0 bytes in 0 blocks

valgrind full log:
http://195.191.233.1/qemu-img.log
http://195.191.233.1/qemu-img.log.bz2

Best regards
Piotr Rybicki


On 02/11/2016 09:40 PM, Piotr Rybicki wrote:
Hi All

I have to report that there is a memory leak in the latest version of gluster:

gluster: 3.7.8
libvirt: 1.3.1

The memory leak occurs when starting a domain (virsh start DOMAIN) which accesses
a drive via libgfapi (although the leak is much smaller than with gluster
3.5.X).

I believe libvirt itself uses libgfapi only to check the existence of a disk.
Libvirt calls glfs_init and glfs_fini when doing this check.
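That init/fini cycle can be reproduced without libvirt using a minimal gfAPI program, which makes a much smaller target for valgrind. This is only a sketch: the volume name "pool" and the server address are taken from the drive definition below, and the default management port 24007 is assumed.

```c
/* Minimal reproducer for the glfs_init/glfs_fini cycle libvirt performs.
 * Build (assuming the glusterfs-api headers are installed):
 *   gcc leak-repro.c -o leak-repro $(pkg-config --cflags --libs glusterfs-api)
 * Run under valgrind:
 *   valgrind --leak-check=full --show-reachable=yes ./leak-repro
 */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    /* "pool" is the volume name from the drive definition. */
    glfs_t *fs = glfs_new("pool");
    if (!fs) {
        fprintf(stderr, "glfs_new failed\n");
        return 1;
    }

    /* X.X.X.X is the gluster server; 24007 is the default port. */
    glfs_set_volfile_server(fs, "tcp", "X.X.X.X", 24007);

    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    /* Tear down immediately - anything valgrind still reports as lost
     * after this call leaked inside the init/fini cycle itself. */
    glfs_fini(fs);
    return 0;
}
```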

When the drive is accessed via a file (gluster FUSE mount), there is no memory
leak when starting the domain.

my drive definition (libgfapi):

     <disk type='network' device='disk'>
       <driver name='qemu' type='raw' cache='writethrough' iothread='1'/>
       <source protocol='gluster' name='pool/disk-sys.img'>
         <host name='X.X.X.X' transport='rdma'/>
         <!-- connection is still via tcp; specifying 'tcp' here makes no difference -->
       </source>
       <blockio logical_block_size='512' physical_block_size='32768'/>
       <target dev='vda' bus='virtio'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
     </disk>

I first reported this to the libvirt developers, but they blame gluster.

valgrind details (libgfapi):

# valgrind --leak-check=full --show-reachable=yes
--child-silent-after-fork=yes libvirtd --listen 2> libvirt-gfapi.log

On the other console:
virsh start DOMAIN
...wait...
virsh shutdown DOMAIN
...wait and stop valgrind/libvirtd

valgrind log:

==5767== LEAK SUMMARY:
==5767==    definitely lost: 19,666 bytes in 96 blocks
==5767==    indirectly lost: 21,194 bytes in 123 blocks
==5767==      possibly lost: 2,699,140 bytes in 68 blocks
==5767==    still reachable: 986,951 bytes in 15,038 blocks
==5767==         suppressed: 0 bytes in 0 blocks
==5767==
==5767== For counts of detected and suppressed errors, rerun with: -v
==5767== ERROR SUMMARY: 96 errors from 96 contexts (suppressed: 0 from 0)

full log:
http://195.191.233.1/libvirt-gfapi.log
http://195.191.233.1/libvirt-gfapi.log.bz2

Best regards
Piotr Rybicki
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


