Re: RAID 5,6 sequential writing seems slower in newer kernels

On Wed, Dec 2, 2015 at 1:50 PM, Dallas Clement
<dallas.a.clement@xxxxxxxxx> wrote:
> On Wed, Dec 2, 2015 at 9:51 AM, Phil Turmel <philip@xxxxxxxxxx> wrote:
>> On 12/02/2015 10:44 AM, Robert Kierski wrote:
>>> I've tried a variety of settings... ranging from 17 to 32768.
>>>
>>> Yes.. with stripe_cache_size set to 17, I see a C/T of rmw's.  And my TP goes in the toilet -- even with the RAM disks, I get only about 30M/s.
>>
>> Ok.
>>
>> You mentioned you aren't using a filesystem.  How are you testing?
>>
>> Phil
>>
>> ps. convention on kernel.org is to trim replies and bottom-post, or
>> interleave.  Please do.
>
> Thank you all for your responses.
>
> Keld,
>
>> Did you test the performance of other raid types, such as RAID1 and the various layouts of RAID10 for the newer kernels?
>
> I did try RAID 1 but not RAID 10.  With RAID 1 I am seeing much higher
> average and peak wMB/s and disk utilization than with RAID 5 and 6.
> Though I need to run some more tests to compare the performance of
> newer kernels with the 2.6.39.4 kernel.  Will report on that a bit
> later.
>
> Roman,
>
>> Do you use a write intent bitmap (internal?), what is your bitmap chunk size?
>
> Yes, I do.  After reading up on this, I see that it can negatively
> affect write performance.  The bitmap chunk size is 67108864 (64 MiB).
>
>> What is your stripe_cache_size set to?
>
> stripe_cache_size is 8192.
>
> Robert, like you I am observing that my CPU is mostly idle during RAID
> 5 or 6 write testing.  Something else is throttling the traffic.  Not
> sure whether some threshold is being crossed (e.g. queue size, await
> time) or whether it is an implementation problem.
>
> I understand that the stripe cache grows dynamically in >= 4.1
> kernels.   Fwiw, adjusting the stripe cache made no difference in my
> results.
>
> Regards,
>
> Dallas
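
Regarding the bitmap and stripe_cache_size questions above, here is
roughly how I am checking and adjusting both on my end.  This is just a
sketch; it assumes the array is /dev/md0 and that one member is
/dev/sdb1, so substitute your own device names:

  # current stripe cache size (number of cached stripe entries,
  # one page per member device each)
  cat /sys/block/md0/md/stripe_cache_size

  # raise it for a test run
  echo 8192 > /sys/block/md0/md/stripe_cache_size

  # inspect the write-intent bitmap and its chunk size
  mdadm --detail /dev/md0 | grep -i bitmap
  mdadm --examine-bitmap /dev/sdb1

  # to rule the bitmap out, drop it and re-add it with a larger chunk
  mdadm --grow /dev/md0 --bitmap=none
  mdadm --grow /dev/md0 --bitmap=internal --bitmap-chunk=128M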

Here is a summary of the performance differences I am seeing with the
3.10.69 kernel vs the 2.6.39.4 kernel (baseline):

RAID 0

bs = 512k - 3.5% slower
bs = 2048k - 1.5% slower

RAID 1

bs = 512k - 35% faster
bs = 2048k - 48% faster

RAID 5

bs = 512k - 22% slower
bs = 2048k - 28% slower

RAID 6

bs = 512k - 24% slower
bs = 2048k - 30% slower

Surprisingly, RAID 1 is faster in the newer kernel, but RAID 5 and 6 are
much slower.

All measurements were computed from bandwidth averages taken on a
12-disk array with an XFS filesystem, using fio with direct=1, sync=1,
invalidate=1.
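
For reference, each run looks roughly like the following fio invocation
(the target file, size, ioengine, iodepth and numjobs here are just
placeholders for illustration; direct=1, sync=1 and invalidate=1 are the
settings from my runs, and bs is either 512k or 2048k):

  fio --name=seqwrite --filename=/mnt/md0/fio.testfile \
      --rw=write --bs=512k --size=16g \
      --ioengine=libaio --iodepth=16 --numjobs=1 \
      --direct=1 --sync=1 --invalidate=1

The 2048k case is the same command with --bs=2048k.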

Seems incredible!?