Re: Adding more drives/saturating the bandwidth

Richard Scobie <richard@xxxxxxxxxxx> writes:

> Goswin von Brederlow wrote:
>
>>
>> Now think about the same with a 6-disk raid5. Suddenly you have partial
>> stripes, and the alignment on stripe boundaries is gone too. So now, in
>> 4 out of 6 cases, you need to read 384k (I think) of data, compute the
>> parity from scratch or delta it (whichever requires fewer reads) and
>> write back 384k; in the other cases you read 64k and write back 320k.
>> So on average you read 277.33k and write 362.67k (= 640k combined).
>> That is twice the previous bandwidth, not to mention the delay added
>> by the reads.
>>
>> So by adding a drive your throughput is suddenly halved. Reading in
>> degraded mode suffers a slowdown as well, and CPU usage goes up too.
>>
>>
>> The performance of a raid depends so much on its access pattern that
>> imho one cannot talk about a general case. But note that the more
>> drives you have, the bigger a stripe becomes, and the larger the
>> sequential writes need to be to avoid reads.
>
> I take your point, but don't filesystems like XFS and ext4 play nice
> in this scenario by combining multiple sub-stripe writes into
> stripe-sized writes out to disk?
>
> Regards,
>
> Richard
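
To make the arithmetic above easier to check, here is a quick sketch
(Python, purely illustrative) that reproduces the averages. The per-case
costs and the 4-of-6 / 2-of-6 weights are taken straight from my text
above; it is not a simulation of what md actually does:

# 6-disk RAID5 with 64k chunks: a full stripe holds 5 * 64k = 320k of
# data plus one 64k parity chunk.
CHUNK_K = 64

cases = [
    # (weight, kB read, kB written)
    (4, 6 * CHUNK_K, 6 * CHUNK_K),  # spans two stripes: read 384k, write 384k
    (2, 1 * CHUNK_K, 5 * CHUNK_K),  # hits a stripe edge: read 64k, write 320k
]

total = sum(w for w, _, _ in cases)
avg_read = sum(w * r for w, r, _ in cases) / total
avg_write = sum(w * wr for w, _, wr in cases) / total

# Prints roughly: read 277.33k, write 362.67k, combined 640.00k
print(f"read {avg_read:.2f}k, write {avg_write:.2f}k, "
      f"combined {avg_read + avg_write:.2f}k")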

Some filesystems have a parameter that can be tuned to the stripe size.
Whether that actually helps is something I leave for you to test.

But ask yourself: do you have a tool to retune it after you've grown the
raid?
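
For what it's worth, here is a rough sketch of what that could look like
for ext4, assuming 64k chunks, 4k filesystem blocks and ext4's
stride/stripe-width extended options. Treat the option names and the
tune2fs step as things to verify against your man pages, not as a tested
recipe:

# Compute ext4's RAID hints for a given number of data disks.
# Assumes 64k md chunks and 4k filesystem blocks.
CHUNK_K = 64   # md chunk size in kB
BLOCK_K = 4    # ext4 block size in kB

def ext4_hints(data_disks):
    stride = CHUNK_K // BLOCK_K          # fs blocks per chunk
    stripe_width = stride * data_disks   # fs blocks per full data stripe
    return stride, stripe_width

for data_disks, label in [(4, "5-disk raid5"), (5, "6-disk raid5")]:
    stride, width = ext4_hints(data_disks)
    print(f"{label}: stride={stride} stripe-width={width}")

# At mkfs time that would be something like:
#   mkfs.ext4 -E stride=16,stripe-width=64 /dev/md0
# and after growing from 5 to 6 disks, something like:
#   tune2fs -E stride=16,stripe_width=80 /dev/md0

XFS has its own su/sw knobs at mkfs time and, as far as I know, sunit/swidth
mount options, but whether remounting with new values after a grow actually
buys you anything is again something to test.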

Regards,
        Goswin
