What worked at 100MB/sec with gluster 3.0 is now 1.2MB/sec with 3.0.3.

I have two 5-host stripes, replicated. (I'm aware that this config is unsupported.) I just updated from 3.0 to 3.0.3. While my IOR tests across 8 clients come out fine (slightly better than with 3.0), dd performance has gone to crap. This is fine, I guess, as long as applications still perform well, but I'm curious as to why dd is affected this way. (It used to be ~100MB/sec with 3.0; now it's 1.2MB/sec with 3.0.3.)

Jeremy

[jenos at ac31 IOR-2.10.1]$ ./runior.sh -t 1m -b 1g
IOR-2.10.1: MPI Coordinated Test of Parallel I/O

Run began: Thu Mar 18 17:58:02 2010
Command line used: ./IOR -a POSIX -q -wr -m -C -F -e -N 8 -t 1m -b 1g -o /scratch/jenos/iortest
Machine: Linux ac31.ncsa.uiuc.edu

Summary:
        api                = POSIX
        test filename      = /scratch/jenos/iortest
        access             = file-per-process
        ordering           = sequential offsets
        clients            = 8 (1 per node)
        repetitions        = 1
        xfersize           = 1 MiB
        blocksize          = 1 GiB
        aggregate filesize = 8 GiB

access    bw(MiB/s)  block(KiB)  xfer(KiB)  open(s)    wr/rd(s)  close(s)  iter
------    ---------  ----------  ---------  --------   --------  --------  ----
write     180.90     1048576     1024.00    0.072115   45.25     20.07     0
read      980.56     1048576     1024.00    0.032333   8.34      3.56      0

Max Write: 180.90 MiB/sec (189.68 MB/sec)
Max Read:  980.56 MiB/sec (1028.19 MB/sec)

Run finished: Thu Mar 18 17:58:57 2010

[jenos at ac31 IOR-2.10.1]$ dd conv=fsync if=/dev/zero of=/scratch/jenos/bigfile bs=1024 count=1000000
^C489835+0 records in
489835+0 records out
501591040 bytes (502 MB) copied, 421.996 s, 1.2 MB/s
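
One difference I haven't isolated yet: IOR is doing 1 MiB transfers, while the dd run above writes 1 KiB blocks with conv=fsync. A follow-up run along these lines (same target path, block size bumped to 1M; I haven't run this against 3.0.3 yet) should show whether the regression is tied to small write sizes or to dd in general:

[jenos at ac31 IOR-2.10.1]$ dd conv=fsync if=/dev/zero of=/scratch/jenos/bigfile bs=1M count=1000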