Re: Gfapi memleaks, all versions

Apologies for the delay in responding; it took me a while to switch over to this thread.

As someone rightly pointed out earlier in this discussion, starting and
stopping a VM via libvirt (virsh) triggers at least two
glfs_new/glfs_init/glfs_fini cycles.
In fact there are three cycles involved: two in the libvirt context
(mostly for stat, reading headers and chown) and one in the qemu context
(the actual read/write IO). Since qemu is forked out and runs in its own
process memory context, its cycle does not add to libvirt's leak, and on
VM stop the qemu process dies anyway.
That is not all: if the VM uses 4 extra attached disks, the total number
of glfs_* cycles becomes (4+1)*2 in libvirt plus (4+1)*1 in qemu space,
i.e. 15.
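
For reference, here is a minimal sketch of what one such glfs_* cycle
looks like at the gfapi level. It is only illustrative: the volume name,
host and image path are placeholders, not values from this thread. Build
with "gcc lifecycle.c -lgfapi" (assuming the gfapi headers are installed
under glusterfs/api/).

/* One full glfs_new/glfs_init/glfs_fini cycle, as each disk-preparation
 * step roughly performs it. "myvol", "gluster-host" and "/disk.img" are
 * placeholder names. */
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    struct stat st;

    glfs_t *fs = glfs_new("myvol");          /* allocate the per-volume object */
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "gluster-host", 24007);
    if (glfs_init(fs) != 0) {                /* brings up the client xlator graph */
        glfs_fini(fs);
        return 1;
    }

    glfs_stat(fs, "/disk.img", &st);         /* e.g. libvirt stat'ing a disk image */

    return glfs_fini(fs);                    /* teardown; the step that leaks */
}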

What's been done so far in QEMU:
I have submitted a patch to qemu that caches the glfs object, so there is
one glfs object per volume; the glfs_* calls are thereby reduced from N
(in the above case 4+1 = 5) to 1 per volume.
This improves performance by reducing the number of calls, cuts memory
consumption (each instance occupies ~300 MB VSZ) and reduces the leak
(~7-10 MB per call).
Note that this patch is already in master [1].
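
For anyone curious about the shape of the change: the idea is a
per-volume, refcounted cache of glfs objects, so additional disks on the
same volume reuse the existing object instead of running a fresh
glfs_new/glfs_init. The sketch below only illustrates that idea; the
names (glfs_cache_get/glfs_cache_put and the entry struct) are mine, not
the actual QEMU code, and locking is omitted for brevity.

/* Illustrative refcounted cache of glfs objects, one per volume.
 * Not the actual QEMU patch; names are invented to show the technique. */
#include <string.h>
#include <stdlib.h>
#include <glusterfs/api/glfs.h>

struct glfs_cache_entry {
    char *volname;
    glfs_t *fs;
    unsigned refcount;
    struct glfs_cache_entry *next;
};

static struct glfs_cache_entry *cache_head;

/* Return a ready-to-use glfs object for 'volname', creating it only once. */
glfs_t *glfs_cache_get(const char *volname, const char *host)
{
    struct glfs_cache_entry *e;

    for (e = cache_head; e; e = e->next) {
        if (strcmp(e->volname, volname) == 0) {
            e->refcount++;                  /* reuse: no new glfs_init() */
            return e->fs;
        }
    }

    glfs_t *fs = glfs_new(volname);
    if (!fs)
        return NULL;
    glfs_set_volfile_server(fs, "tcp", host, 24007);
    if (glfs_init(fs) != 0) {
        glfs_fini(fs);
        return NULL;
    }

    e = calloc(1, sizeof(*e));
    e->volname = strdup(volname);
    e->fs = fs;
    e->refcount = 1;
    e->next = cache_head;
    cache_head = e;
    return fs;
}

/* Drop one reference; only the last user pays for glfs_fini(). */
void glfs_cache_put(const char *volname)
{
    struct glfs_cache_entry **p, *e;

    for (p = &cache_head; (e = *p); p = &e->next) {
        if (strcmp(e->volname, volname) != 0)
            continue;
        if (--e->refcount == 0) {           /* last user: tear down */
            *p = e->next;
            glfs_fini(e->fs);
            free(e->volname);
            free(e);
        }
        return;
    }
}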

What about libvirt then?
Almost the same approach: I am planning to cache the connection (the glfs
object) until all the disks are initialized, and only then call
glfs_fini() once.
Thereby we reduce the N * 2 calls (in the above case (4+1)*2 = 10) to 1.
Work on this change is in progress; expect it by the end of the week,
most likely.
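
Roughly, the intended call pattern on the libvirt side would look like
the sketch below: one connection per volume is set up once, reused for
every disk's pre-start checks, and torn down with a single glfs_fini().
Again, the names and image paths are placeholders, not the actual libvirt
change.

/* Sketch of the intended libvirt-side pattern: one gfapi connection per
 * volume, reused for every disk's pre-start checks, torn down once. */
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    const char *disks[] = { "/vm-root.img", "/data1.img", "/data2.img",
                            "/data3.img", "/data4.img" };   /* 1 boot + 4 extra */
    struct stat st;

    glfs_t *fs = glfs_new("myvol");                 /* one object for the volume */
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "gluster-host", 24007);
    if (glfs_init(fs) != 0) {
        glfs_fini(fs);
        return 1;
    }

    for (size_t i = 0; i < sizeof(disks) / sizeof(disks[0]); i++)
        glfs_stat(fs, disks[i], &st);               /* per-disk checks reuse 'fs' */

    return glfs_fini(fs);                           /* a single teardown instead of 10 */
}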


[1] https://lists.gnu.org/archive/html/qemu-devel/2016-10/msg07087.html


--
Prasanna



On Thu, Oct 27, 2016 at 12:23 PM, Pranith Kumar Karampuri
<pkarampu@xxxxxxxxxx> wrote:
> +Prasanna
>
> Prasanna changed the qemu code to reuse the glfs object, via refcounting,
> when adding multiple disks from the same volume. So the memory usage went
> down from 2 GB to 200 MB in the case he targeted. Wondering if the same
> can be done for this case too.
>
> Prasanna, could you let us know whether we can use refcounting in this case as well?
>
>
> On Wed, Sep 7, 2016 at 10:28 AM, Oleksandr Natalenko
> <oleksandr@xxxxxxxxxxxxxx> wrote:
>>
>> Correct.
>>
>> On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri
>> <pkarampu@xxxxxxxxxx> wrote:
>> >On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
>> >oleksandr@xxxxxxxxxxxxxx> wrote:
>> >
>> >> Hello,
>> >>
>> >> thanks, but that is not what I want. I have no issues debugging gfapi
>> >> apps, but have an issue with the GlusterFS FUSE client not being
>> >> handled properly by the Massif tool.
>> >>
>> >> Valgrind+Massif does not handle all forked children properly, and I
>> >> believe that happens because of some memory corruption in the GlusterFS
>> >> FUSE client.
>> >>
>> >
>> >Is this the same libc issue that we debugged and for which we provided an
>> >option to avoid it?
>> >
>> >
>> >>
>> >> Regards,
>> >>   Oleksandr
>> >>
>> >> On Saturday, 3 September 2016 at 18:21:59 EEST, feihu929@xxxxxxxx wrote:
>> >> >  Hello, Oleksandr
>> >> >     You can compile the simple test code posted here
>> >> > (http://www.gluster.org/pipermail/gluster-users/2016-August/028183.html).
>> >> > Then run:
>> >> > $> G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind --tool=massif ./glfsxmp
>> >> > This produces a file like massif.out.xxxx, which is the memory-leak
>> >> > log file. You can then inspect it with the ms_print tool:
>> >> > $> ms_print massif.out.xxxx
>> >> > which prints the memory allocation details.
>> >> >
>> >> > The simple test code just calls glfs_init and glfs_fini 100 times to
>> >> > expose the memory leak. In my tests, the xlator init and fini paths
>> >> > are the main source of the leak. If you can locate the leaking code
>> >> > with this simple test, you can probably locate the leaking code in the
>> >> > FUSE client as well.
>> >> >
>> >> > please enjoy.
>> >>
>> >>
>> >> _______________________________________________
>> >> Gluster-users mailing list
>> >> Gluster-users@xxxxxxxxxxx
>> >> http://www.gluster.org/mailman/listinfo/gluster-users
>> >>
>>
>
>
>
> --
> Pranith
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



