Re: inode list memory usage

On 02/26/2010 03:58 AM, Mike Terzo wrote:
On Thu, Feb 25, 2010 at 1:49 PM, Harshavardhana<harsha@xxxxxxxxxxx>  wrote:
On Thu, Feb 25, 2010 at 11:46 AM, Shehjar Tikoo<shehjart@xxxxxxxxxxx>
wrote:
Mike Terzo wrote:
I have a very simple glusterfs config setup. I have 2 tcp hosts
configured as:

volume LABVOL
        type cluster/replicate
        option block-size 4MB
        subvolumes gfs1 gfs2
end-volume

Could you share your entire volume files with us, and the version of
glusterfs in use? There is no option for "cluster/replicate" called
"option block-size 4MB".
That's my fault, that's left over from when I was using stripe.

Here's my config:

volume gfs1
     type protocol/client
     option transport-type tcp
     option transport.socket.nodelay on
     option remote-host gluster1
     option remote-subvolume brick
end-volume

volume gfs2
     type protocol/client
     option transport-type tcp
     option transport.socket.nodelay on
     option remote-host gluster2
     option remote-subvolume brick
end-volume

volume LABVOL
     type cluster/replicate
     option block-size 4MB
     subvolumes gfs1 gfs2
end-volume

volume readahead
   type performance/read-ahead
   option page-count 8           # default is 2
   option force-atime-update off # default is off
   subvolumes LABVOL
end-volume

volume writebehind
     type performance/write-behind
     option window-size 4MB
     subvolumes readahead
end-volume

volume threads
     type performance/io-threads
     option thread-count 3
     subvolumes writebehind
end-volume

volume cache
     type performance/io-cache
     option cache-size 16MB
     subvolumes threads
end-volume
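
Since the block-size line was acknowledged above as leftover from the
stripe setup, the replicate volume would presumably reduce to just the
following (a sketch of the intended definition, not tested against this
particular setup):

```
volume LABVOL
     type cluster/replicate
     subvolumes gfs1 gfs2
end-volume
```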


I've tried `echo 2 > /proc/sys/vm/drop_caches` (together with calling
sync), and I also found some mention of echoing 3 into drop_caches.

Neither seems to help.
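
For reference, the usual drop_caches incantation looks like the fragment
below (an admin fragment, requires root). Note that drop_caches only
evicts clean kernel-side caches; it cannot release memory that a
userspace process such as the glusterfs client holds itself, which would
explain why it has no effect here:

```shell
sync                                # flush dirty pages first
echo 2 > /proc/sys/vm/drop_caches   # free reclaimable dentries and inodes
echo 3 > /proc/sys/vm/drop_caches   # free pagecache, dentries and inodes
```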

I added -DDEBUG to my compile; I'm still trying to understand
everything in that file.

Here's what valgrind is telling me is using all the memory:

==17821== 16,967,936 bytes in 132,562 blocks are possibly lost in loss
record 66 of 67
==17821==    at 0x4A1AD7D: calloc (vg_replace_malloc.c:279)
==17821==    by 0x4B47EEC: __inode_create (inode.c:460)
==17821==    by 0x4B480F7: inode_new (inode.c:500)
==17821==    by 0x608E9BE: fuse_lookup (fuse-bridge.c:596)
==17821==    by 0x609BE8F: fuse_thread_proc (fuse-bridge.c:3182)
==17821==    by 0x4D820F9: start_thread (in /lib/libpthread-2.3.6.so)
==17821==    by 0x4F58CE1: clone (in /lib/libc-2.3.6.so)
==17821==
==17821==
==17821== 31,815,600 bytes in 132,565 blocks are still reachable in
loss record 67 of 67
==17821==    at 0x4A1AD7D: calloc (vg_replace_malloc.c:279)
==17821==    by 0x4B48013: __inode_create (inode.c:475)
==17821==    by 0x4B480F7: inode_new (inode.c:500)
==17821==    by 0x608E9BE: fuse_lookup (fuse-bridge.c:596)
==17821==    by 0x609BE8F: fuse_thread_proc (fuse-bridge.c:3182)
==17821==    by 0x4D820F9: start_thread (in /lib/libpthread-2.3.6.so)
==17821==    by 0x4F58CE1: clone (in /lib/libc-2.3.6.so)
==17821==

These continue to grow.

thanks
--mike terzo

Can you state the glusterfs version in use here? There is a patch for inode invalidation that is under review.

Regards
Harshavardhana
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel



--
Harshavardhana
http://www.gluster.com




