Re: Monotonically increasing memory

Yes, I also saw the following leaks when I tested it a week back. You should probably take a statedump and see what datatypes are leaking (a rough sketch of how to do that follows the dump output below). These were the leaks:

root@localhost - /usr/local/var/run/gluster
14:10:26 ⚡ awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
[mount/fuse.fuse - usage-type gf_common_mt_char memusage]
size=341240
num_allocs=23602
max_size=347987
max_num_allocs=23604
total_allocs=653194

[mount/fuse.fuse - usage-type gf_common_mt_mem_pool memusage]
size=4335440
num_allocs=45159
max_size=7509032
max_num_allocs=77391
total_allocs=530058

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_asprintf memusage]
size=182526
num_allocs=30421
max_size=182526
max_num_allocs=30421
total_allocs=30421

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_char memusage]
size=547578
num_allocs=30421
max_size=547578
max_num_allocs=30421
total_allocs=30421

[performance/quick-read.r2-quick-read - usage-type gf_common_mt_mem_pool memusage]
size=3117196
num_allocs=52999
max_size=3117368
max_num_allocs=53000
total_allocs=109484

[cluster/distribute.r2-dht - usage-type gf_common_mt_asprintf memusage]
size=257304
num_allocs=82988
max_size=257304
max_num_allocs=82988
total_allocs=97309

[cluster/distribute.r2-dht - usage-type gf_common_mt_char memusage]
size=2082904
num_allocs=82985
max_size=2082904
max_num_allocs=82985
total_allocs=101346

[cluster/distribute.r2-dht - usage-type gf_common_mt_mem_pool memusage]
size=9958372
num_allocs=165972
max_size=9963396
max_num_allocs=165980
total_allocs=467956

root@localhost - /usr/local/var/run/gluster
14:10:28
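
The mem-leaks.awk script referenced above isn't attached, so here is a rough
sketch only (not the actual script; the pid lookup, threshold and dump path
are assumptions): trigger a statedump of the fuse client by sending it
SIGUSR1 (dumps land under the run directory, /usr/local/var/run/gluster for
a source install; 'gluster volume statedump <VOLNAME>' does the equivalent
for the brick processes), then flag memusage sections whose live allocation
count looks suspiciously high:

    # assumes a single glusterfs client process on the box
    kill -USR1 "$(pgrep -xo glusterfs)"

    # print memusage sections whose current num_allocs exceeds a threshold;
    # these are the candidate leaking datatypes
    awk -v min=10000 '
    / memusage]$/  { sect = $0; next }        # remember the section header
    /^num_allocs=/ { n = substr($0, 12) + 0   # current live allocation count
                     if (n >= min) print sect "\n" $0 }
    ' /usr/local/var/run/gluster/glusterdump.*.dump.*

Sections whose num_allocs keeps growing across successive dumps (like the
quick-read and dht entries above) are the ones worth chasing.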

Pranith

On 08/01/2014 12:01 AM, Anders Blomdell wrote:
During an rsync of 350000 files, memory consumption of glusterfs
rose to 12 GB (after approx. 14 hours); I take it that this is a
bug I should try to track down?

Version is 3.7dev as of Tuesday...

/Anders


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
