Hi Niall,

I would surely suggest some performance optimizations for your setup.

> My write test is:
>
> dd if=/dev/zero of=/mnt/stripe/big.file bs=8M count=10000

Can you try with bs=1M count=80000?

> and my read test is
>
> dd if=/mnt/stripe/big.file of=/dev/null bs=8M

Same here, bs=1M?

server1 (8 cores, 16GB memory)
-------------------------------------------

volume posix
  type storage/posix
  option directory /big
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp/server # For TCP/IP transport
  option auth.ip.brick.allow *
  subvolumes brick
end-volume


server2 (8 cores, 16GB memory)
-------------------------------------------

volume posix
  type storage/posix
  option directory /big
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  option cache-size 4096MB
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp/server # For TCP/IP transport
  option auth.ip.brick.allow *
  subvolumes brick
end-volume

NOTE: I changed the names of the volumes, as the io-threads volume was not actually being used here at all. (Make sure the exported volume name is the one you really want to export.)
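For the record, the suggested write and read tests can be wrapped in a small script. This is only a sketch: TARGET and the scaled-down default COUNT are my assumptions, so override them (e.g. TARGET=/mnt/stripe/big.file COUNT=80000) to run the real benchmark against the mounted stripe volume.

```shell
#!/bin/sh
# Sketch of the suggested dd benchmark. Defaults are scaled down so it
# runs anywhere as a smoke test; for the real numbers point TARGET at
# the stripe mount and use COUNT=80000 (80GB at 1MB blocks).
TARGET=${TARGET:-/tmp/stripe-test.file}
BS=1M
COUNT=${COUNT:-8}

# Write test: stream zeroes in 1MB blocks; conv=fsync flushes at the
# end so the reported rate includes the final sync.
dd if=/dev/zero of="$TARGET" bs=$BS count=$COUNT conv=fsync

# Read test: read the same file back in 1MB blocks.
dd if="$TARGET" of=/dev/null bs=$BS

# (The file is left in place so the read test can be re-run.)
```

Remember to drop the page cache (or use a file larger than RAM, as the 80GB count does) before trusting the read numbers.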
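Since the volume names have changed, it is worth double-checking which volume sits on top of which. The quick script below is my own naive sketch (it only understands the plain `volume` / `subvolumes` / `end-volume` lines used in the specs above) for printing the layer tree of a spec file before handing it to glusterfsd:

```python
# Naive spec-file tree printer -- a sketch, assuming the simple
# "volume ... end-volume" block syntax shown in the specs above.
def parse_spec(text):
    """Return a dict mapping each volume name to its subvolume names."""
    volumes = {}
    current = None
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "volume":
            current = parts[1]
            volumes[current] = []
        elif parts[0] == "subvolumes" and current is not None:
            volumes[current] = parts[1:]
        elif parts[0] == "end-volume":
            current = None
    return volumes

def print_tree(volumes, name, indent=0):
    """Print a volume and, indented beneath it, everything it stacks on."""
    print("  " * indent + name)
    for sub in volumes.get(name, []):
        print_tree(volumes, sub, indent + 1)

if __name__ == "__main__":
    spec = """
volume posix
  type storage/posix
end-volume
volume brick
  type performance/io-threads
  subvolumes posix
end-volume
volume server
  type protocol/server
  subvolumes brick
end-volume
"""
    vols = parse_spec(spec)
    # Roots are volumes no other volume references -- normally exactly
    # one (protocol/server on the server side, the topmost performance
    # translator on the client side).
    referenced = {s for subs in vols.values() for s in subs}
    for root in (v for v in vols if v not in referenced):
        print_tree(vols, root)
```

Running it on the server1 spec prints `server` with `brick` and `posix` nested under it; more than one root usually means a dangling, unused volume.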
client (16 cores, 64GB memory)
-------------------------------------------

volume jr1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.3.2
  option remote-subvolume brick
end-volume

volume jr2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.2.2
  option remote-subvolume brick
end-volume

volume stripe0
  type cluster/stripe
  option block-size *:1MB
  subvolumes jr1 jr2
end-volume

volume iot
  type performance/io-threads
  subvolumes stripe0
end-volume

volume writebehind
  type performance/write-behind
  subvolumes iot
end-volume

volume readahead
  type performance/read-ahead
  option page-size 1MB
  option page-count 2
  subvolumes writebehind
end-volume

---

NOTE: please make sure the spec file is visualized as a tree of layers, and that you have built the proper tree with 'subvolumes'. (I also changed the stripe subvolumes to 'jr1 jr2' -- 'readahead-jr1' and 'readahead-jr2' were not defined anywhere in the spec.)

Can you try with these spec files and let me know the results? Also, my doubt is: if you have 10GigE (which tops out at roughly 1.25GB/s of raw bandwidth), how will you get 1.5GB/s from a single client?

Regards,
Amar

--
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!