You can reboot the machine.

About the 100 MB: that is virtual-address usage, which is high because of the
io-threads translator; you can see the difference by reducing the thread-count
option of io-threads. In reality the threads consume only about 2-3 MB each
when started.
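For example, lowering thread-count in the volume spec file should bring it
down. A minimal sketch (the volume and subvolume names here are only
placeholders for whatever your spec file already uses):

    volume iothreads
      type performance/io-threads
      option thread-count 2    # fewer threads -> less virtual address space reserved
      subvolumes brick         # 'brick' stands in for your existing subvolume
    end-volume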
-bulde

On 7/5/07, Dale Dude <dale@xxxxxxxxxxxxxxx> wrote:

Thanks for the work ;) Is it safe to reboot that box? I'd like to try
patch-270 on it.

I'm trying patch-270 on another similar setup with a single 5 TB brick, but
memory use there is about 600 MB; the rsync is still running, though, so I'll
wait for a decision on that one.

glusterfsd and glusterfs each seem to start with about 100 MB now. Is that
expected?

Regards,
Dale

Amar S. Tumballi wrote:
> Hi Dale,
> Now with patch-270, the memory leak should not be seen. A big thanks for
> giving access to your machine, as it was hard for me to replicate the
> same setup here.
>
> -bulde
>
> On 7/3/07, Dale Dude <dale@xxxxxxxxxxxxxxx> wrote:
>
>     I'm running 3 rsyncs in parallel and glusterfs gets to about
>     400 MB. Is glusterfs supposed to grow so large? It seems to release
>     memory (sometimes) when the rsync is done.
>
>     CMD         COUNT   USER-TIME   SYS-TIME   MEM-TOTAL
>     glusterfs   1       9m53s       2m32s      401.73 MiB
>     rsync       4       11s         25s        337.90 MiB
>
> --
> Amar Tumballi
> http://amar.80x25.org
> [bulde on #gluster/irc.gnu.org]
--
Amar Tumballi
http://amar.80x25.org
[bulde on #gluster/irc.gnu.org]