> FWIW, using smaller bitmaps REALLY helps. Tests forthcoming.

I accidentally trashed the original script output for the first test, so all I can offer is a spreadsheet (or not; 1.test should have most of the data). The first test:

1. create the array with the given chunk size and bitmap chunk size, using --assume-clean
2. dd-write and dd-read 4GB (2x RAM) directly on the device (without oflag=direct, though)
3. create a small ~13 GB fs at the start of the array
4. mount without options and run bonnie++
5. lather, rinse, repeat

The second test is the same as the first, except:

1. create a smaller array with -z (4GB per disk, so 12GB usable), let it sync, and set stripe_cache_size to 8192
2. ...
3. ...
4. mount with noatime,nodiratime
5. ...

(A rough sketch of the corresponding commands is in the P.S. below.)

Better. Much better. Now safely out of the realm of error and into that of tuning. There's still a bottleneck somewhere, though ... if 50% in bonnie++ means 100% of one CPU, that could be it. Any comments on the CPU results (text version only)? The results are single-shot (not averaged), so the data is of relatively low quality.

I didn't notice any responsiveness issues during the tests, but then again I left the machine pretty much alone. Will tune first and tackle that later. FWIW, background resync alone isn't the culprit - on its own it doesn't even hurt the benchmarks too badly. Maybe background sync + large bitmap?

The HSM violation hasn't cropped up again yet. What does it mean, exactly? Also, aside from the fact that NCQ should apparently be turned off for md RAID anyway - why doesn't it work? The Promise SATA2 TX4 allegedly supports it, as do the disks.

Thanks,
C.
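P.S. In case anyone wants to reproduce this, one iteration of the first test boils down to something like the sketch below. The device names, disk count, mount point, and the chunk/bitmap-chunk/size values are illustrative placeholders, not the exact parameters I swept:

    # create the array; --assume-clean skips the initial resync
    # (--chunk and --bitmap-chunk are in KiB)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          --chunk=256 --bitmap=internal --bitmap-chunk=4096 \
          --assume-clean /dev/sd[bcde]1

    # raw sequential write and read of 4GB (2x RAM), no oflag=direct
    dd if=/dev/zero of=/dev/md0 bs=1M count=4096
    dd if=/dev/md0 of=/dev/null bs=1M count=4096

    # small ~13 GB fs at the start of the array (4k blocks x 3407872)
    mkfs.ext3 -b 4096 /dev/md0 3407872
    mount /dev/md0 /mnt/test
    bonnie++ -d /mnt/test -u root
    umount /mnt/test
    mdadm --stop /dev/md0

For the second test, the create step changes to roughly:

    # 4GB per disk (--size/-z takes KiB); no --assume-clean, so it syncs
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          --chunk=256 --bitmap=internal --bitmap-chunk=4096 \
          --size=4194304 /dev/sd[bcde]1
    # wait for the initial sync to finish (watch /proc/mdstat), then:
    echo 8192 > /sys/block/md0/md/stripe_cache_size

and the mount step becomes:

    mount -o noatime,nodiratime /dev/md0 /mnt/test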
Attachment: raid5-bench.ods (application/vnd.oasis.opendocument.spreadsheet)
Attachment: 1.test (binary data)
Attachment: 2.test (binary data)