On Wed, Dec 2, 2015 at 9:51 AM, Phil Turmel <philip@xxxxxxxxxx> wrote:
> On 12/02/2015 10:44 AM, Robert Kierski wrote:
>> I've tried a variety of settings... ranging from 17 to 32768.
>>
>> Yes.. with stripe_cache_size set to 17, I see a C/T of rmw's.  And my
>> TP goes in the toilet -- even with the RAM disks, I get only about 30M/s.
>
> Ok.
>
> You mentioned you aren't using a filesystem.  How are you testing?
>
> Phil
>
> ps. convention on kernel.org is to trim replies and bottom-post, or
> interleave.  Please do.

Thank you all for your responses.

Keld,

> Did you test the performance of other raid types, such as RAID1 and the
> various layouts of RAID10 for the newer kernels?

I did try RAID 1 but not RAID 10. With RAID 1 I am seeing much higher
average and peak wMB/s and disk utilization than with RAID 5 and 6. I
still need to run more tests to compare the performance of the newer
kernels with the 2.6.39.4 kernel, and will report on that a bit later.

Roman,

> Do you use a write intent bitmap (internal?), what is your bitmap chunk size?

Yes, I do. After reading up on this, I see that it can negatively affect
write performance. The bitmap chunk size is 67108864 (64 MiB).

> What is your stripe_cache_size set to?

stripe_cache_size is 8192.

Robert, like you I am observing that my CPU is mostly idle during RAID 5
or 6 write testing, so something else is throttling the traffic. I am not
sure whether some threshold is being crossed (queue size, await time,
etc.) or whether it is an implementation problem. I understand that the
stripe cache grows dynamically in kernels >= 4.1; for what it's worth,
adjusting the stripe cache made no difference in my results.

Regards,
Dallas
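
P.S. For concreteness, the stripe cache knob I was adjusting is the usual
sysfs one for raid5/raid6 arrays (md0 below is just a placeholder for the
actual array device):

  # md0 is an example device name -- substitute your array
  # current size, in cache entries (raid5/raid6 only)
  cat /sys/block/md0/md/stripe_cache_size

  # raise it; memory cost is roughly entries * PAGE_SIZE * nr_disks
  echo 8192 > /sys/block/md0/md/stripe_cache_size

  # how much of the cache is in use at the moment
  cat /sys/block/md0/md/stripe_cache_active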
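
If I do retest with a larger bitmap chunk, the plan would be something
along these lines (the device names and the 128M chunk are only examples,
and older mdadm versions may want the chunk given in KiB):

  # inspect the current internal bitmap via any member device (example: sda1)
  mdadm --examine-bitmap /dev/sda1

  # drop the internal bitmap, then recreate it with a larger chunk
  mdadm --grow --bitmap=none /dev/md0
  mdadm --grow --bitmap=internal --bitmap-chunk=128M /dev/md0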
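
The wMB/s and utilization figures I mentioned come from iostat's extended
output; roughly like this (the interval and device list are examples):

  # extended stats in MB every 5 seconds; watch the wMB/s and %util columns
  iostat -xm 5 /dev/md0 /dev/sd[b-g]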