Re: Memory usage behavior for nested directories

Hi,

The truth is that we also ran a few tests with the 3.7 series, e.g.
creating/deleting/appending files and directories.
The bottom line is that RSS kept increasing while the operations were
ongoing (which is kind of normal, I guess), but when the files/directories
were deleted, RSS did not drop (even after drop_caches).
However, if the tests were repeated, no additional RSS was allocated.
I can understand that this could come from caching, but RSS reached
nearly ~800 MB, while as far as I understand the initial pool sizes are
much smaller. Please see the graphs that Olia attached.
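
For completeness, here is a rough sketch (not our actual test harness, just
the idea; the PID is a placeholder for whichever glusterfs/glusterfsd
process is being watched) of how RSS can be sampled from /proc and how the
kernel caches are dropped between runs:

/* Sample VmRSS of a process and drop the page/dentry/inode caches.
 * Linux-only; the drop_caches part needs root. */
#include <stdio.h>
#include <sys/types.h>

/* Return VmRSS in kB for the given pid, or -1 on error. */
long rss_kb(pid_t pid)
{
    char path[64], line[256];
    long kb = -1;

    snprintf(path, sizeof(path), "/proc/%d/status", (int)pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "VmRSS: %ld kB", &kb) == 1)
            break;
    fclose(f);
    return kb;
}

/* Ask the kernel to drop clean caches between test iterations.
 * This does not touch anonymous heap memory, which is why the
 * pools' RSS stays where it is. */
int drop_caches(void)
{
    FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
    if (!f)
        return -1;
    fputs("3\n", f);
    fclose(f);
    return 0;
}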

From the statedumps taken after the directory deletion:
https://bugzilla.redhat.com/attachment.cgi?id=1162159

I see that cur-stdalloc is zero for the pools (which, as far as I
understand, represents the extra heap allocations), and also hot-counts
such as:
pool-name=glusterfs:dict_t
hot-count=3850

pool-name=glusterfs:data_t
hot-count=3869

but (as far as I understand) the hot counts come from the initially
allocated pool, not from extra mallocs.
I am not sure whether these indicate a memory leak; maybe we are missing
something in the glusterfs design.
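
To illustrate what I mean about hot counts vs. stdalloc, here is a very
simplified sketch of a fixed-size object pool (this is NOT the actual
gluster mem-pool code, just the general idea as I understand it): objects
are handed out from a preallocated arena ("hot" slots) and only fall back
to malloc() once the arena is exhausted ("stdalloc"); freed hot slots go
back onto the pool's free list rather than to the OS.

#include <stdlib.h>

struct pool {
    char   *arena;        /* preallocated backing memory                  */
    void  **free_list;    /* stack of currently unused hot slots          */
    size_t  obj_size;
    size_t  capacity;     /* number of hot slots                          */
    size_t  free_top;     /* how many hot slots sit on the free list      */
    size_t  hot_count;    /* hot slots currently handed out               */
    size_t  cur_stdalloc; /* live objects that overflowed to malloc()     */
};

struct pool *pool_new(size_t obj_size, size_t capacity)
{
    struct pool *p = calloc(1, sizeof(*p));
    p->obj_size  = obj_size;
    p->capacity  = capacity;
    p->arena     = malloc(obj_size * capacity);   /* RSS grows once, here */
    p->free_list = malloc(sizeof(void *) * capacity);
    for (size_t i = 0; i < capacity; i++)
        p->free_list[i] = p->arena + i * obj_size;
    p->free_top = capacity;
    return p;
}

void *pool_get(struct pool *p)
{
    if (p->free_top > 0) {             /* reuse a preallocated hot slot   */
        p->hot_count++;
        return p->free_list[--p->free_top];
    }
    p->cur_stdalloc++;                 /* pool exhausted: extra heap alloc */
    return malloc(p->obj_size);
}

void pool_put(struct pool *p, void *obj)
{
    char *c = obj;
    if (c >= p->arena && c < p->arena + p->obj_size * p->capacity) {
        p->hot_count--;
        p->free_list[p->free_top++] = obj; /* back to the pool, not the OS */
    } else {
        p->cur_stdalloc--;
        free(obj);                       /* only overflow frees shrink RSS */
    }
}

If the real pools behave roughly like this, it would at least explain why
RSS does not drop after the deletions, although not why it grows so far
beyond the initial pool sizes; but again, this is only my reading of it.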


thank you very much
br, vangelis

On Thu, May 26, 2016 at 6:23 PM, Joe Julian <joe@xxxxxxxxxxxxxxxx> wrote:
> Speaking as a user, not a dev, there have been a lot of memory leak fixes in
> 3.7. I have no idea how many of those were backported to 3.6. It might be
> worth testing 3.7.11 to compare.
>
>
> On 05/26/2016 08:02 AM, Kremmyda, Olympia (Nokia - GR/Athens) wrote:
>
> Hi,
>
> We use Gluster 3.6.9 in one replicated volume (named “log”), with two
> bricks.
> Our tests include nested directory creation operations (from 1000 up to
> 250000 directory trees) of depth 396, and no deletion is performed.
>
> We have observed the following memory usage statistics shown in the images:
>         https://bugzilla.redhat.com/attachment.cgi?id=1162032
> https://bugzilla.redhat.com/attachment.cgi?id=1162033
> https://bugzilla.redhat.com/attachment.cgi?id=1162034
> (statedumps are in https://bugzilla.redhat.com/attachment.cgi?id=1162035 )
>
> We would like your opinion on whether this memory usage is normal for
> glusterfs.
> Also, after our tests we delete these directories and the memory is not
> released.
> Can you describe to us the expected memory behaviour in these cases?
>
>
> Thank you,
> Olia
>
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



