On Mon, 23 Feb 2015 02:36:18 +0100 Christer Solskogen
<christer.solskogen@xxxxxxxxx> wrote:

> On 22.02.2015 22:53, NeilBrown wrote:
> >
> > There is no "Optimal" without reference to a particular work load. Or
> > particular hardware.
> >
>
> Do you know of such a reference? I mean, some stats that show type of
> workload / chunk size. The only one I've found is the 5 year old
> benchmark that was done (
> http://louwrentius.com/linux-raid-level-and-chunk-size-the-benchmarks.html)
> - which shows that under benchmarking with dd that 64 is preferred.
>

Interesting graphs ... but when you see a big jump like they show between
64 and 128K chunk sizes for RAID5/6, that doesn't mean "64K is better" but
"something strange is happening here". My guess is that read-ahead is
working very well for some reason.

If your actual workload is writing 10GB files with 'dd', then the graphs
might be useful. For other workloads ... it's hard to tell.

Nothing beats performing your own tests on your own hardware with your own
choice of filesystem and getting your own results.

I did some tests myself recently (which I really want to automate and turn
into web pages etc ... one day).

For RAID5 on 4 drives I used chunk sizes of 4, 16, 64, 256, 1024 and
applied a variety of fio loads using XFS. The only load that showed
significant variation with chunk size was sequential read, which generally
gets faster with larger chunk sizes, though for some layouts (I tried la,
ls, ra, rs) 1024k chunks were worse than 256k.

So any reference you find will probably lead you astray.

NeilBrown
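[For reference, a minimal sketch of what one iteration of a chunk-size
test like the one described above might look like. The device names,
mount point, chunk size, layout, and file size here are assumptions for
illustration, not the exact setup used in the message:]

    # create a 4-drive RAID5 with a 64K chunk and the left-symmetric layout
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        --chunk=64 --layout=ls /dev/sd[bcde]

    # make an XFS filesystem aligned to the array
    # (su = chunk size, sw = number of data disks)
    mkfs.xfs -f -d su=64k,sw=3 /dev/md0
    mount /dev/md0 /mnt/test

    # sequential read load with fio (the file is laid out first if absent)
    fio --name=seqread --directory=/mnt/test --rw=read \
        --bs=1M --size=10G --direct=1 --ioengine=libaio --iodepth=16

[Repeating this with the other chunk sizes and layouts, and with other
--rw patterns such as write or randread, gives the kind of comparison
discussed above.]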