We have production machines with 8 GB of memory on each client and server. We found that running multiple processes on the same client machine degrades the aggregate IO throughput. Here is the test case:

- Reading 10,000 random files with a single process: 45 seconds.
- Running 2 processes, each reading 10,000 random files: process 1 took 98 seconds, process 2 took 102 seconds.

Checking system resources with 'top', the server side is 97% idle and the client side is 85% idle. Any idea why the gluster client cannot handle multiple processes to improve the aggregate IO throughput? I am using version 2.0.0rc1. Any optimization ideas are appreciated.

Thanks,
Watishi

-- client configuration file

volume remote_1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume volume_1-brick
end-volume

volume remote_2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume volume_2-brick
end-volume

volume remote_3
  type protocol/client
  option transport-type tcp/client
  option remote-host server3
  option remote-subvolume volume_3-brick
end-volume

volume remote_4
  type protocol/client
  option transport-type tcp/client
  option remote-host server4
  option remote-subvolume volume_4-brick
end-volume

volume bricks
  type cluster/distribute
  subvolumes remote_1 remote_2 remote_3 remote_4
end-volume

-- server gluster configuration file

volume volume_2-posix
  type storage/posix
  option directory /data/export/volume_2
end-volume

volume volume_2-locks
  type features/locks
  subvolumes volume_2-posix
end-volume

volume volume_2-brick
  type performance/io-threads
  option thread-count 9
  subvolumes volume_2-locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.volume_2-brick.allow *
  subvolumes volume_2-brick
end-volume
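One thing worth noting: the client volfile above has no performance translators at all, so requests from both reader processes are funneled through a single translator stack. A common tuning step is to layer performance/io-threads on top of the cluster/distribute volume so that concurrent requests can be serviced in parallel on the client side. The sketch below is untested on this setup, and the thread-count value is an assumption to be tuned for the workload:

```
# Hypothetical addition to the client volfile, placed after the
# "bricks" (cluster/distribute) volume definition. The new volume
# becomes the topmost translator that the mount uses.
volume iothreads
  type performance/io-threads
  option thread-count 8          # assumed value; tune for your workload
  subvolumes bricks
end-volume
```

Whether this helps depends on where the serialization actually occurs; if throughput is still flat with two processes, the bottleneck may instead be the single TCP connection per brick or the FUSE layer rather than the translator stack.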