On Mon, Jun 29, 2009 at 23:49, Barry Jaspan <barry.jaspan at acquia.com> wrote:
> I just got started with glusterfs. I read the docs over the weekend and
> today created a simple setup: two servers exporting a brick and one client
> mounting them with AFR. I am seeing very poor write performance on a dd
> test, e.g.:
>
> time dd if=/dev/zero of=./local-file bs=8192 count=125000
>
> presumably due to a very large number of write operations (because when I
> increase the blocksize to 64K, the performance increases by 2x). I enabled

I didn't want to enter these threads because I may sound a bit pessimistic, but here's my experience with glusterFS.

I needed a simple NFS replacement at the time (but I guess it's the same for any application, except that there aren't really alternatives). With the kernel FUSE module _everything_ was dirt poor, basically useless. Replacing it with the glusterFS-patched one improved performance around 5-fold. Still, write performance below a 64k block size was useless (3 MB/s, in contrast to 60 MB/s above it). I have found no solution, apart from _not_ using writeback, which slowed it down to half that speed (around 2 MB/s). Read performance is excellent.

I am about to switch to iSCSI (Linux at both ends, software components only), which does not seem to have this problem. I guess the culprit is not glusterFS but FUSE, but I see no workaround for it.

-- 
byte-byte,
    grin
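For anyone wanting to reproduce the block-size effect Barry describes, a quick sketch is to time the same amount of data written at two block sizes on the mounted volume. The mount path and sizes below are illustrative (point MOUNT at your own AFR mount), and conv=fsync assumes GNU dd; it forces the data to the bricks so the page cache doesn't mask the result:

```shell
#!/bin/sh
# Compare write throughput at a small vs. large block size.
# MOUNT is an assumed example path -- set it to your glusterfs mount.
MOUNT=${MOUNT:-/mnt/glusterfs}
TESTFILE="$MOUNT/ddtest.$$"

for bs in 8192 65536; do
    # Pick count so the total written is ~64 MB for either block size.
    count=$((67108864 / bs))
    echo "bs=$bs:"
    # conv=fsync (GNU dd) syncs before dd reports its throughput line.
    dd if=/dev/zero of="$TESTFILE" bs="$bs" count="$count" conv=fsync 2>&1 | tail -1
    rm -f "$TESTFILE"
done
```

If the small-block run is an order of magnitude slower, you are seeing the same per-write overhead discussed in this thread.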