Hi all,

Just doing some initial testing on glusterfs (1.3.10, Debian packages), and I'm somewhat underwhelmed by the performance. I set up a test AFR config and a test Unify config with two systems connected by a local, managed gigabit switch. My configs have POSIX locking, read-ahead, write-behind, and threaded I/O enabled (in that order) on the server side.

I then compared bonnie output on the raw filesystems to the gluster output:

Machine 1 (raw): 72 MB/sec block write, 72 MB/sec block read, 29 MB/sec block rewrite.
Machine 2 (raw): 36 MB/sec block write, 72 MB/sec block read, 21 MB/sec block rewrite.
gluster-AFR: 22 MB/sec block write, 24 MB/sec block read, 9 MB/sec block rewrite.
gluster-Unify (ALU scheduler): 21 MB/sec block write, 20 MB/sec block read, 8.8 MB/sec block rewrite.

The file-operation rates on the raw filesystems were in the thousands to tens of thousands of operations per second; on both glusterfs configs they were in the hundreds of ops/sec. I ran the client on Machine 1, since it has the higher overall performance and was under less load.

Is this the expected performance with gluster for a small number of nodes over TCP/IP, or am I missing some critical piece of configuration? In particular, I thought that in an AFR config the client was supposed to automatically stripe read requests across the available subvolumes, but the read performance doesn't seem to indicate that's happening: the reads Machine 1 serves to itself alone should be able to get close to its normal ~70 MB/sec rate.

Any tips would be appreciated. :)

Thanks!
Graeme
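
P.S. For reference, the server-side translator stack on each machine is roughly the shape sketched below. Volume names, the export path, and the thread count are simplified placeholders rather than my exact config, and the protocol/server auth options are trimmed:

volume posix
  type storage/posix
  option directory /data/export        # placeholder export path
end-volume

volume locks
  type features/posix-locks
  subvolumes posix
end-volume

volume readahead
  type performance/read-ahead
  subvolumes locks
end-volume

volume writebehind
  type performance/write-behind
  subvolumes readahead
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 4                # placeholder thread count
  subvolumes writebehind
end-volume

volume server
  type protocol/server
  option transport-type tcp/server     # auth options omitted here
  subvolumes iothreads
end-volume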