Hi everyone,

I am seeing slower-than-expected performance with Gluster 3.2.3 across 4 hosts connected by 10-gigabit Ethernet. Each host has 4x 300GB 15K SAS drives in RAID10, a 6-core Xeon E5645 @ 2.40GHz and 24GB RAM, running Ubuntu 10.04 64-bit (I have also tested Scientific Linux 6.1 and Debian Squeeze, with the same results). All of the hosts mount the volume using the FUSE module. The base filesystem on all of the nodes is XFS, though tests with ext4 have yielded similar results.

Command used to create the volume:

gluster volume create cluster-volume replica 2 transport tcp node01:/mnt/local-store/ node02:/mnt/local-store/ node03:/mnt/local-store/ node04:/mnt/local-store/

Command used to mount the Gluster volume on each node:

mount -t glusterfs localhost:/cluster-volume /mnt/cluster-volume

Writing a 40GB file to a node's local storage (i.e. no Gluster involvement):

dd if=/dev/zero of=/mnt/local-store/test.file bs=1M count=40000
41943040000 bytes (42 GB) copied, 92.9264 s, 451 MB/s

Reading the same file back from the node's local storage:

dd if=/mnt/local-store/test.file of=/dev/null
41943040000 bytes (42 GB) copied, 81.858 s, 512 MB/s

Writing a 40GB file to the Gluster volume:

dd if=/dev/zero of=/mnt/cluster-volume/test.file bs=1M count=40000
41943040000 bytes (42 GB) copied, 226.934 s, 185 MB/s

Reading the same file back from the Gluster volume:

dd if=/mnt/cluster-volume/test.file of=/dev/null
41943040000 bytes (42 GB) copied, 661.561 s, 63.4 MB/s

I have also tried Gluster 3.1, with similar results.

According to the Gluster docs, I should be seeing roughly the lesser of the drive speed and the network speed. The network can push 0.9 GB/sec according to iperf, so it is definitely not the limiting factor here, and each array can do 400-500 MB/sec as per the benchmarks above. I have tried with and without jumbo frames, which makes no major difference. During the transfers the glusterfs process is using about 120% CPU according to top, and glusterfsd is sitting at about 90%.

Any ideas / tips on where to start speeding this configuration up?

Thanks,
Thomas
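
P.S. I noticed after writing this that my read tests use dd's default 512-byte block size, while the writes use bs=1M. For completeness, this is the retest I plan to run; the bs=1M block size and the cache drop beforehand are just my attempt at a fairer comparison, not something the Gluster docs prescribe:

# Drop the page cache (as root) so the read actually hits the bricks, then reread with 1M blocks
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/cluster-volume/test.file of=/dev/null bs=1M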
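
P.P.S. In case it's relevant to any suggestions, this is roughly what I was planning to poke at next. The option names are ones I've seen in the volume-set documentation, and the values are guesses on my part rather than anything I've verified helps:

# Confirm the replica pairing and current options are what I expect
gluster volume info cluster-volume

# Per-volume profiling to see where the latency goes (available since 3.2, I believe)
gluster volume profile cluster-volume start
gluster volume profile cluster-volume info

# Translator tuning I intend to try; the values below are guesses
gluster volume set cluster-volume performance.cache-size 256MB
gluster volume set cluster-volume performance.write-behind-window-size 4MB
gluster volume set cluster-volume performance.io-thread-count 32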