I have a rather large box (2x 8-core Xeon, 96GB RAM) with a couple of
disk arrays connected to an Areca controller. I just added a new external array,
eight 3TB drives in RAID5, and the testing I'm doing right now is on this array,
but the problem seems to affect this machine in general, on all file systems
(possibly even NFS, but I'm not sure about that one yet).
So, if I use iozone -a to test write speeds on the raw device, I get results in
the 500-800MB/sec range, depending on write size, which is about what I'd expect.
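For reference, the invocation was roughly the following (/dev/sdX standing in
for the actual device node of the array):

iozone -a -f /dev/sdX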
However, when I put an ext4 filesystem on this device, mounted with noatime and
data=writeback (the filesystem is completely empty), and test with dd, the
results are less encouraging:
dd bs=1M if=/dev/zero of=/Volumes/data_10-2/test.bin count=40000
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 292.288 s, 143 MB/s
Now, I'm not expecting to get the raw device speeds, but this is at least
2-3 times slower than what I'd expect.
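One thing I should probably rule out first: with 96GB of RAM, part of that 42GB
may still have been sitting in the page cache when dd printed its number, so a
more honest figure would presumably come from something like this (conv=fdatasync
makes dd flush the data to disk before reporting the rate):

dd bs=1M if=/dev/zero of=/Volumes/data_10-2/test.bin count=40000 conv=fdatasync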
Using conv=fsync and oflag=direct makes it utterly pathetic:
dd bs=1M if=/dev/zero of=/Volumes/data_10-2/test.bin oflag=direct conv=fsync count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 178.791 s, 29.3 MB/s
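If I understand direct I/O correctly, each 1MB write has to hit the array before
the next one is issued, so bs=1M itself may be the cap here; a larger block size
would be one way to test that, e.g. (same 5GB total):

dd bs=16M if=/dev/zero of=/Volumes/data_10-2/test.bin oflag=direct count=320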
Now, I'm sure there could be many reasons for this, but I wonder where I should
start looking to debug it.
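My own first guesses would be readahead, the I/O scheduler, and the filesystem's
RAID stripe alignment, so I'd probably start with something like this (/dev/sdX
again standing in for the array's device node):

blockdev --getra /dev/sdX
cat /sys/block/sdX/queue/scheduler
dumpe2fs -h /dev/sdX | grep -i 'stride\|stripe'
iostat -x 1    # in another terminal while a dd run is in progress

...but I'd welcome any pointers on what else to look at.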
--
Joakim Ziegler - Supervisor de postproducción - Terminal
joakim@xxxxxxxxxxxxxx - 044 55 2971 8514 - 5264 0864