Re: Memory leak

With no performance translators at all, I still see what appears to be memory leakage, but this time during metadata-heavy tasks. While copying /usr to the GlusterFS mount, glusterfs slowly increases its memory consumption, and glusterfsd consumes memory at a much more rapid pace (though both grow far more slowly than the glusterfs leak reported earlier):

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3047 root      16   0  166m 165m  712 S   42  4.2  11:51.19 glusterfsd
 3068 root      15   0 25344  23m  748 S   36  0.6  11:23.24 glusterfs
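
For anyone who wants to reproduce this, here is a rough sketch of the test along with a loop for sampling resident memory (the mount point, target path, and interval are placeholders, not my exact commands):

  # Metadata-heavy workload: copy a large tree onto the mount
  # (/mnt/glusterfs is a placeholder for the actual mount point).
  cp -a /usr /mnt/glusterfs/usr-copy &

  # Sample the RSS of both daemons every 10 seconds while it runs.
  while true; do
      ps -o pid,rss,vsz,comm -C glusterfs,glusterfsd
      sleep 10
  done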

Heavy data operations don't cause noticeable increases in memory consumption of either process in this setup.

Thanks,

Brent

On Tue, 6 Mar 2007, Brent A Nelson wrote:

I've narrowed the observed memory leak down to the read-ahead translator. I can apply stat-prefetch and write-behind without triggering the leak in my simple test, but read-ahead causes memory consumption in the glusterfs process to increase slowly for a little while and then suddenly climb very rapidly.
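
For reference, the stanza I'm toggling is roughly the following (the volume names and protocol/client settings below are illustrative, not my exact spec file):

  # Illustrative client volume pointing at a server brick.
  volume client0
    type protocol/client
    option transport-type tcp/client
    option remote-host node1
    option remote-subvolume brick0
  end-volume

  # Removing this stanza (and mounting client0 directly)
  # makes the leak disappear in my test.
  volume readahead
    type performance/read-ahead
    subvolumes client0
  end-volume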

Thanks,

Brent

On Tue, 6 Mar 2007, Brent A Nelson wrote:

I can reproduce the memory leak in the glusterfs process even with just two disks from two nodes unified (it doesn't occur only with mirroring or striping), at least when all performance translators except io-threads are applied (io-threads causes my dd writes to die right away).

I have 2 nodes, with glusterfs unifying one disk from each node. Each node is also a client. I do a dd on each node, simultaneously, with no problem:
node1: dd if=/dev/zero of=/phys/blah0 bs=10M count=1024
node2: dd if=/dev/zero of=/phys/blah1 bs=10M count=1024

When doing a read on each node simultaneously, however, things go along fine for a while, but then glusterfs starts consuming more and more memory until it presumably runs out, and it ultimately dies or becomes useless.
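
The reads are just the inverse of the writes above; roughly (reconstructed, not pasted from my terminal):

node1: dd if=/phys/blah0 of=/dev/null bs=10M
node2: dd if=/phys/blah1 of=/dev/null bs=10M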

Can anyone else confirm? And has anyone gotten io-threads to work at all?

These systems are running Ubuntu Edgy with the generic kernel and FUSE 2.6.3 applied.

Thanks,

Brent


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel


