On Thu, 17 Jun 2010 09:49:42 -0400 aragonx@xxxxxxxxxx wrote:

> Prior to the change above, on a 2GB file, I would start off the write
> (to the server) at 70MB/sec and end at about 35MB/sec. CPU usage was
> at 100% with the md0 using about 70% CPU and smb using 30%, with
> flush sometimes jumping in at 30%. Wait states remained below 10%.
> After the change, on a 2GB file I would start the write at 70MB/sec
> and end at about 55MB/sec (nice improvement!).

A more consistent way to test would be to cd into a directory on the
array and repeatedly run something like:

    dd if=/dev/zero of=zerofile bs=1M count=2048 conv=fdatasync,notrunc

...applying the various tweaks you are trying out between the runs to
see their effect.

Also, the reason you see the write speed drop off at the end is that
your server first fills up its write cache at almost the maximum speed
the sender (and the network) can attain; then, as the RAM available
for that cache runs out, it starts flushing to disk, which reduces the
rate at which it accepts new data from the network. So the 70 MB/sec
figure is unrelated to the RAID's performance. The dd test described
above, thanks to those "conv" flags (see the dd man page), makes much
more sense as a benchmark.

--
With respect, Roman
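A minimal sketch of the repeated-run benchmark described above, assuming
the array is mounted at /mnt/array (the mount point and the number of
runs are placeholders for your setup):

    #!/bin/sh
    # Run the fdatasync-backed dd benchmark a few times and print the
    # throughput line from each run.
    cd /mnt/array || exit 1
    for i in 1 2 3; do
        # conv=fdatasync forces the data out to disk before dd reports,
        # so the figure reflects the array, not the page cache.
        dd if=/dev/zero of=zerofile bs=1M count=2048 conv=fdatasync,notrunc 2>&1 | tail -n 1
    done
    rm -f zerofile

Repeating the run between tweaks gives you comparable numbers, since
each dd only finishes once its data has actually been flushed.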