Rhesa,

My setup is very similar to yours, but I am not using io-threads on the client (only on the servers) and I have 2 bricks. This is my top:

 4522 root      15   0 14812 5420  848 S  0.0  0.3   0:11.03 glusterfs

Quite a difference.

Harris

----- Original Message -----
From: "Rhesa Rozendaal" <gluster@xxxxxxxxx>
To: "gluster-devel" <gluster-devel@xxxxxxxxxx>
Sent: Thursday, July 5, 2007 10:57:41 AM (GMT-0500) America/New_York
Subject: memory usage (client)

Hi guys,

I've been trying to limit glusterfs' memory consumption, but so far without much luck. Here's a snapshot of my "top":

 6697 root      15   0  369m 295m  876 S   45 14.6   3:10.13 [glusterfs]

And it keeps growing, so I'm not sure where it'll settle. Is there anything I can do to keep it to around 100m?

Here's my current client config (having played a lot with thread-count, cache-size, etc.):

volume ns
  type protocol/client
  option transport-type tcp/client
  option remote-host nfs-deb-03
  option remote-subvolume ns
end-volume

volume client01
  type protocol/client
  option transport-type tcp/client
  option remote-host nfs-deb-03
  option remote-subvolume brick01
end-volume

# snip client02 through client31

volume export
  type cluster/unify
  subvolumes client01 client02 client03 ... client31
  option namespace ns
  option scheduler alu
  option alu.limits.min-free-disk 1GB
  option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 4
  option cache-size 16MB
  subvolumes export
end-volume

volume readahead
  type performance/read-ahead
  option page-size 4096
  option page-count 16
  subvolumes iothreads
end-volume

volume writeback
  type performance/write-behind
  option aggregate-size 131072
  option flush-behind on
  subvolumes readahead
end-volume

Rhesa

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
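
For reference, the explicit size knobs in the stack above are io-threads' cache-size, read-ahead's page-size and page-count, and write-behind's aggregate-size. A trimmed variant of the performance translators, keeping the same structure but with smaller, purely illustrative values (not a tested recommendation for this workload), might look like:

volume iothreads
  type performance/io-threads
  option thread-count 2        # fewer worker threads than the original 4
  option cache-size 8MB        # half of the original 16MB cache-size
  subvolumes export
end-volume

volume readahead
  type performance/read-ahead
  option page-size 4096
  option page-count 4          # smaller read-ahead window: 4 pages per file instead of 16
  subvolumes iothreads
end-volume

volume writeback
  type performance/write-behind
  option aggregate-size 65536  # aggregate 64KB before writing instead of 128KB
  option flush-behind on
  subvolumes readahead
end-volume

Whether shrinking these values actually bounds the resident size depends on which translator is holding the memory; the 31 protocol/client volumes plus the namespace volume presumably keep their own connection buffers regardless of the performance translators.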