> Machine 1: 72 MB/sec block write, 72 MB/sec block read, 29 MB/sec block rewrite.
> Machine 2: 36 MB/sec block write, 72 MB/sec block read, 21 MB/sec block rewrite.
> gluster-AFR: 22 MB/sec block write, 24 MB/sec block read, 9 MB/sec block rewrite.
> gluster-Unify (ALU scheduler): 21 MB/sec block write, 20 MB/sec block read, 8.8 MB/sec block rewrite.
>
> Is this expected performance with gluster for a small number of nodes on
> TCP/IP? Or am I missing some critical piece of configuration? In
> particular, I thought that in an AFR config the client was supposed to
> automatically stripe read requests across available volumes, but the
> read performance doesn't seem to indicate that's happening, considering
> the requests it sends to itself should be able to get close to its
> normal ~70 MB/sec rate.

Are you using write-behind in the client volume spec? write-behind affects write performance significantly.

AFR spreads different files to be read from different subvolumes; it does not stripe parts of a single file across them.
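For reference, a minimal sketch of what loading write-behind in a client volume spec could look like. This is illustrative only: the volume names, remote-host values, and the aggregate-size option below are placeholders following the 1.3/1.4-era syntax, so check the translator options available in your version before copying it.

# Hypothetical client-side spec: two protocol/client subvolumes,
# AFR on top of them, and write-behind loaded above AFR.

volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1          # placeholder hostname
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2          # placeholder hostname
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 1MB           # illustrative value; tune per workload
  subvolumes afr
end-volume

The point of the layering is that write-behind sits above AFR on the client, so small writes are aggregated before being replicated down to both subvolumes.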