Memory leak with force merge ? gluster client takes 3gig of ram fast ?

Hi,

 It was a bit big to send to the mailing list, so you can download it at:
http://www.openrapids.net/~hexa/glusterdump.12610.gz

You can also find my complete log at:
http://www.openrapids.net/~hexa/test-gluster.log.gz

I tried disabling statprefetch. It got rid of the looping log, but I
still got 3 GB of memory used.
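
One way to do that (assuming a hand-edited client volfile like the one
quoted further down) is to comment out the statprefetch volume at the end,
so the mount graph ends at writebehind instead, e.g.:

  # volume statprefetch
  #     type performance/stat-prefetch
  #     subvolumes writebehind
  # end-volume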

Also, you should note that my setup is composed of many directories with
small files (< 50 KB each, about 8 GB total), if that helps.

Thank you very much!

Antoine

On Wed, Dec 1, 2010 at 2:06 AM, Raghavendra G <raghavendra at gluster.com> wrote:

> Hi Antoine,
>
> Can you take a statedump of glusterfs when it is consuming huge amounts of
> memory? You can get the statedump of glusterfs using:
>
> # kill -SIGUSR1 <glusterfs-pid>
>
> and the statedump can be found as /tmp/glusterdump.<glusterfs-pid>
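>
> For example, something like this should work (a rough sketch, assuming a
> single glusterfs client process on the box):
>
> # PID=$(pidof glusterfs)
> # kill -SIGUSR1 $PID
> # ls -l /tmp/glusterdump.$PID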
>
> regards,
> Raghavendra.
> ----- Original Message -----
> From: "Antoine Tremblay" <hexa00 at gmail.com>
> To: gluster-users at gluster.org
> Sent: Tuesday, November 30, 2010 7:41:45 PM
> Subject: Memory leak with force merge ? gluster client takes 3gig of ram fast ?
>
> Hi,
>
>  I'm running a GlusterFS 3.0.4 build and I have this weird behavior where
> each morning the gluster client seems to hit a situation which makes it take
> about 3 GB of RAM (close to the 32-bit limit, and I run on 32 bits)
>
> VmPeak: 3063696 kB
> VmSize: 3062672 kB
> VmLck:       0 kB
> VmHWM: 3000196 kB
> VmRSS: 3000196 kB
> VmData: 3059952 kB
> VmStk:      88 kB
> VmExe:      28 kB
> VmLib:    2508 kB
> VmPTE:    5972 kB
> Threads: 3
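>
> (Those figures come from /proc/<pid>/status; something like
> grep -E '^(Vm|Threads)' /proc/$(pidof glusterfs)/status
> should reproduce them, assuming a single glusterfs client process.)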
>
>
> If I check the logs, I see:
>
> [2010-11-30 06:39:37] D [afr-dir-read.c:163:afr_examine_dir_readdir_cbk]
> mirror-0: checksums of directory / differ, triggering forced merge
> [2010-11-30 06:39:37] D
> [afr-self-heal-entry.c:2298:afr_sh_entry_sync_prepare] mirror-0: no active
> sources for / found. merging all entries as a conservative decision
> [2010-11-30 06:39:37] D [stat-prefetch.c:3843:sp_release] statprefetch:
> cache hits: 0, cache miss: 0
> [2010-11-30 06:39:37] D [stat-prefetch.c:3843:sp_release] statprefetch:
> cache hits: 0, cache miss: 0
> [2010-11-30 06:39:37] D [stat-prefetch.c:3843:sp_release] statprefetch:
> cache hits: 0, cache miss: 0
>
> This last message is repeated a lot (more than 1000 times), as if it were
> in a loop...
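>
> (Counted roughly with something like
> grep -c 'sp_release' <client-log-file>
> where <client-log-file> is a placeholder for the glusterfs client log path.)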
>
> Any ideas ?
>
> Here's my config :
>
> Given volfile:
>
> +------------------------------------------------------------------------------+
>  1: ## file auto generated by /usr/bin/glusterfs-volgen (mount.vol)
>  2: # Cmd line:
>  3: # $ /usr/bin/glusterfs-volgen --raid 1 --name data web01:/srv/data
> web02:/srv/data
>  4:
>  5: # RAID 1
>  6: # TRANSPORT-TYPE tcp
>  7: volume web02-1
>  8:     type protocol/client
>  9:     option transport-type tcp
>  10:     option remote-host web02
>  11:     option transport.socket.nodelay on
>  12:     option transport.remote-port 6996
>  13:     option remote-subvolume brick1
>  14: end-volume
>  15:
>  16: volume web01-1
>  17:     type protocol/client
>  18:     option transport-type tcp
>  19:     option remote-host web01
>  20:     option transport.socket.nodelay on
>  21:     option transport.remote-port 6996
>  22:     option remote-subvolume brick1
>  23: end-volume
>  24:
>  25: volume mirror-0
>  26:     type cluster/replicate
>  27:     subvolumes web01-1 web02-1
>  28: end-volume
>  29:
>  30: volume readahead
>  31:     type performance/read-ahead
>  32:     option page-count 4
>  33:     subvolumes mirror-0
>  34: end-volume
>  35:
>  36: volume iocache
>  37:     type performance/io-cache
>  38:     option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
>  39:     option cache-timeout 1
>  40:     subvolumes readahead
>  41: end-volume
>  42:
>  43: volume quickread
>  44:     type performance/quick-read
>  45:     option cache-timeout 1
>  46:     option max-file-size 64kB
>  47:     subvolumes iocache
>  48: end-volume
>  49:
>  50: volume writebehind
>  51:     type performance/write-behind
>  52:     option cache-size 4MB
>  53:     subvolumes quickread
>  54: end-volume
>  55:
>  56: volume statprefetch
>  57:     type performance/stat-prefetch
>  58:     subvolumes writebehind
>  59: end-volume
>  60:
>
>
> Thanks a lot
>
>
> Antoine Tremblay
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>

