Re: Memory leak

I've narrowed the observed memory leak down to the read-ahead translator. I can apply stat-prefetch and write-behind without triggering the leak in my simple test, but with read-ahead loaded, memory consumption in the glusterfs process slowly increases for a little while and then suddenly starts climbing very rapidly.
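For anyone wanting to reproduce this, the translator stack described above might look something like the client-side spec file below. This is only a sketch from the 1.x-era spec syntax; the volume names and the protocol/client options are placeholders for your own setup, not taken from my actual config:

```
volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host node1          # placeholder hostname
  option remote-subvolume brick     # placeholder remote volume name
end-volume

volume wb
  type performance/write-behind
  subvolumes client
end-volume

volume sp
  type performance/stat-prefetch
  subvolumes wb
end-volume

# read-ahead left out while isolating the leak; re-adding a
# stanza like this is what triggers the growth for me:
#
# volume ra
#   type performance/read-ahead
#   subvolumes sp
# end-volume
```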

Thanks,

Brent

On Tue, 6 Mar 2007, Brent A Nelson wrote:

I can reproduce the memory leak in the glusterfs process even with just two disks from two nodes unified (it doesn't occur only with mirroring or striping), at least when all performance translators except io-threads are used (io-threads causes my dd writes to die right away).

I have 2 nodes, with glusterfs unifying one disk from each node. Each node is also a client. I do a dd on each node, simultaneously, with no problem:
node1: dd if=/dev/zero of=/phys/blah0 bs=10M count=1024
node2: dd if=/dev/zero of=/phys/blah1 bs=10M count=1024

When doing a read on each node simultaneously, however, things go along for a while, but then glusterfs starts consuming more and more memory until it presumably runs out and ultimately dies or becomes useless.
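To watch the growth happen, a small sampler like the one below can log the client's resident set size over time. This is just a sketch using standard ps; the function name and arguments are mine, not from any glusterfs tooling:

```shell
#!/bin/sh
# sample_rss: print the resident set size (KB) of a process several
# times, to watch for the runaway growth described above.
# args: <pid> [samples] [interval_seconds]
sample_rss() {
  pid=$1; samples=${2:-3}; interval=${3:-1}
  i=0
  while [ "$i" -lt "$samples" ]; do
    # ps fails if the process has exited; stop sampling in that case
    rss=$(ps -o rss= -p "$pid" | tr -d ' ') || return 1
    echo "rss_kb=$rss"
    i=$((i + 1))
    [ "$i" -lt "$samples" ] && sleep "$interval"
  done
}

# Example: sample our own shell twice with no delay. Against the
# leak you'd run something like: sample_rss "$(pidof glusterfs)" 120 5
sample_rss $$ 2 0
```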

Can anyone else confirm? And has anyone gotten io-threads to work at all?

These systems are running Ubuntu Edgy with the stock generic kernel and FUSE 2.6.3.

Thanks,

Brent


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
