3888 1 root 0 52.5 6.2g 7790m 1.5g 1056 8274 20 0 - S 6 12:07 glusterfs
Crazy. As soon as the guy I need to speak to about setting up some test jobs comes in today, I'm going to follow the directions posted previously to figure out whether it's a leak or something else.
Dan
On Tue, Feb 24, 2009 at 8:58 AM, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
Dan Parsons wrote:
It isn't a permanent setting, it's just a real-time instruction to drop all current caches.
I will do this today. I noticed that I already have vm.drop_caches set to 3 via sysctl.conf, based on a suggestion from you from long ago. Should I delete this under normal usage? Is it possible that this setting, enabled by default, is causing my problems?
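For anyone following along, the difference between the two is roughly this (a minimal sketch; needs root to actually write the value):

```shell
# Drop caches once. sync first so dirty pages are written back,
# otherwise they can't be reclaimed.
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches   # 1=pagecache, 2=dentries+inodes, 3=both
else
    echo "need root to write /proc/sys/vm/drop_caches" >&2
fi

# By contrast, a line like this in /etc/sysctl.conf:
#   vm.drop_caches = 3
# is only applied once, when sysctl loads the file at boot; it does not
# keep caches disabled afterwards.
```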
In the current context, I think the whole idea is bogus anyway because the glusterfs process still maintains its current resident size at hundreds of MB when I flush the caches (with no performance translators), so I don't think this affects the leak in any way.
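A quick way to check this yourself is to compare the daemon's resident set size before and after flushing (a sketch; the `pidof glusterfs` lookup below is an assumption about how the daemon is named on your box):

```shell
# Report resident set size (kB) of a process, so you can see whether
# dropping caches actually shrinks a daemon's RSS or not.
rss_kb() {
    ps -o rss= -p "$1" | tr -d ' '
}

# Demonstrated here against the current shell; for the daemon it
# would be something like:
#   rss_kb "$(pidof glusterfs)"
rss_kb $$
```

If the number stays in the hundreds of MB after a flush, the memory is genuinely held by the process, not by reclaimable kernel caches.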
On my setup I have / mounted on glusterfs, /tmp on ext3 and /usr/src on NFS, so the glusterfs root daemon's bloat can only be caused by access to shared libraries or by invocation of executables; compiling a big code tree (even one that doesn't reside on glusterfs) triggers the leak in a pretty major way.
I'll try to re-create the problem in a chroot environment, since debugging the rootfs daemon is extremely difficult.
Gordan
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel