memory leaks

In my setup, it appears that the performance translators are all well-behaved on the server side, unlike on the client side. Hopefully this will provide some useful clues...

With all of the performance translators chained off of my protocol/server volume, they all seem to load in glusterfsd and don't appear to be causing any harm: data-only transfers don't trigger a huge memory leak in read-ahead, and metadata transfers cause glusterfsd to leak only at the usual rate (it grows slowly whether or not I use the performance translators). io-threads does not cause glusterfsd to die.

Do I chain the performance translators for the server the same way as for the client? E.g.:

volume server
  type protocol/server
  subvolumes share0 share1 share2 share3 share4 share5 share6 share7 share8 share9 share10 share11 share12 share13 share14 share15
  ...
end-volume

volume statprefetch
  type performance/stat-prefetch
  option cache-seconds 2
  subvolumes server
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 131072 # in bytes
  subvolumes statprefetch
end-volume

volume readahead
  type performance/read-ahead
  option page-size 65536 # in bytes
  option page-count 16 # memory cache size is page-count x page-size per file
  subvolumes writebehind
end-volume

volume iot
  type performance/io-threads
  option thread-count 8
  subvolumes readahead
end-volume

Is that correct/appropriate?
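
Or should the translators be stacked the other way around, beneath protocol/server? A rough sketch of what I mean, for one brick only -- the storage/posix definition, the directory path, and volume names like iot-share0 are just placeholders, and the option values are copied from the config above:

volume share0
  type storage/posix
  option directory /export/share0   # placeholder path
end-volume

volume iot-share0
  type performance/io-threads
  option thread-count 8
  subvolumes share0
end-volume

volume readahead-share0
  type performance/read-ahead
  option page-size 65536 # in bytes
  option page-count 16 # memory cache size is page-count x page-size per file
  subvolumes iot-share0
end-volume

volume server
  type protocol/server
  ...
  subvolumes readahead-share0   # the same per-brick chain repeated for share1 ... share15
end-volume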

Thanks,

Brent



