On 11/13/2009 06:54 PM, Dan Williams wrote:
> On Mon, Nov 9, 2009 at 1:22 PM, Neil F Brown <nfbrown@xxxxxxxxxx> wrote:
>> I'm certainly happy with increasing the chunksize to 512K.
>
> Probably good for reads, but it makes it harder for the code to
> collect full stripe writes.  I guess I should get some data to back
> that up one of these days...

My data (which I have, not that I need to get :-P) suggests that it really
doesn't matter.  For streaming writes, the buffer cache stores stuff up long
enough to get a full stripe write even when the stripe is huge.  For random
writes, you don't normally get a full stripe no matter how long you wait or
how small the stripe is.

I say this after looking at the various performance parameters of a timed
5-minute dbench run, and also at the random write time and rate of both 4k
and 16k tiotest runs, against raid arrays of 4 to 7 disks with chunk sizes
from 256k up to 1024k, using the ext2, ext3, ext4, and xfs filesystems.
From those test results, 512k was roughly the sweet spot; streaming writes
were affected far more than random writes by chunk size, and both were
probably even more dependent on things other than chunk size (filesystem
type and layout, for instance).

--
Doug Ledford <dledford@xxxxxxxxxx>
GPG KeyID: CFBFF194
http://people.redhat.com/dledford

Infiniband specific RPMs available at
http://people.redhat.com/dledford/Infiniband
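
For anyone who wants to see the arithmetic behind the full-stripe argument
above, here is a minimal sketch (my own illustration, not part of the test
runs) that just prints the data payload of one full stripe for the array
sizes and chunk sizes mentioned in the mail.  It assumes RAID5 with a single
parity disk, which the mail does not actually state:

def full_stripe_kib(ndisks, chunk_kib, parity_disks=1):
    # Data payload of one full stripe in KiB: (ndisks - parity) * chunk.
    # All of this must be dirty in the page cache before md can write the
    # stripe without a read-modify-write cycle.
    return (ndisks - parity_disks) * chunk_kib

for ndisks in (4, 5, 6, 7):               # array sizes from the test runs
    for chunk_kib in (256, 512, 1024):    # chunk sizes from the test runs
        stripe = full_stripe_kib(ndisks, chunk_kib)
        # A streaming writer dirties this much sequential data almost
        # immediately; a random 4k/16k workload touching scattered offsets
        # essentially never does, no matter how small the chunk is.
        print(f"{ndisks} disks, {chunk_kib}K chunk -> {stripe}K per full stripe")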