Hi,

I'm currently evaluating GlusterFS with the intention of replacing our current setup, and I have a few questions.

At the moment we have a large SAN which is split into 10 partitions and served out via NFS. For Gluster, I was thinking of 12 nodes making up about 6TB (mirrored, so that's 1TB per node). What filesystem should I be using on the nodes (currently ext3) to give me the best performance and recoverability?

I also set up a test with a simple mirrored pair, with a client volfile that looks like:

volume glust3
  type protocol/client
  option transport-type tcp/client
  option remote-host glust3
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume glust4
  type protocol/client
  option transport-type tcp/client
  option remote-host glust4
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume mirror1
  type cluster/replicate
  subvolumes glust3 glust4
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes mirror1
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

I ran a basic test by writing 1GB to an NFS server and to this Gluster pair:

[root@glust1 ~]# time dd if=/dev/zero of=/mnt/glust2_nfs/nfs_test bs=65536 count=15625
15625+0 records in
15625+0 records out
1024000000 bytes (1.0 GB) copied, 1718.16 seconds, 596 kB/s

real    28m38.278s
user    0m0.010s
sys     0m0.650s

[root@glust1 ~]# time dd if=/dev/zero of=/mnt/glust/glust_test bs=65536 count=15625
15625+0 records in
15625+0 records out
1024000000 bytes (1.0 GB) copied, 3572.31 seconds, 287 kB/s

real    59m32.745s
user    0m0.010s
sys     0m0.010s

With Gluster taking almost twice as long as NFS, can I expect this sort of performance degradation on 'real' servers? Also, what sort of setup would you recommend for us?

Can anyone help?

Thanks,
Josh.
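P.S. One variant I was considering trying is an io-threads translator between the replicate and write-behind layers, to parallelise the write path. This is only a sketch: the performance/io-threads translator name and its thread-count option are assumed from the GlusterFS translator set, and I haven't verified the option name against my version. The rest of the volfile would stay as above, with write-behind's subvolume changed to point at it:

volume iothreads
  type performance/io-threads
  # thread-count is an assumed option name; tune to the workload
  option thread-count 4
  subvolumes mirror1
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes iothreads
end-volume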