Re: [patch 1/2]RAID5: make stripe size configurable


 



On Tue, 8 Jul 2014 09:00:18 +0800 Shaohua Li <shli@xxxxxxxxxx> wrote:

> 
> Stripe size is 4k by default. A bigger stripe size is considered harmful because,
> if the IO size is small, a big stripe size causes a lot of unnecessary IO and
> parity calculation. But if the upper layer always sends full-stripe writes to the
> RAID5 array, this drawback goes away, and a bigger stripe size can actually
> improve performance in this case because of bigger IOs and fewer stripes to
> handle. In my full-stripe write test case, a 16k stripe size improved throughput
> by 40% - 120% depending on the RAID5 configuration.
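The trade-off described in the quoted text can be sketched numerically. This is my own illustration of the textbook RAID5 model, not code from the patch: a partial-stripe write must read old data and old parity before writing (read-modify-write), whereas a full-stripe write computes parity purely from the new data, so large stripes only pay off when writes cover the whole stripe.

```python
def rmw_ios(chunks_written):
    """Partial-stripe (read-modify-write) cost: read each old data chunk
    and the old parity, then write the new chunks and new parity."""
    return 2 * (chunks_written + 1)

def full_stripe_ios(data_disks):
    """Full-stripe write cost: parity is computed from the new data in
    memory, so there are no reads; just write every chunk plus parity."""
    return data_disks + 1

def parity(chunks):
    """RAID5 parity is a byte-wise XOR across all data chunks."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)
```

With 4 data disks, a one-chunk write costs 4 IOs under read-modify-write, while a full-stripe write costs 5 IOs but transfers 4 chunks of useful data; and XORing the parity with all surviving chunks recovers a lost one.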

Hi,
 certainly interesting.
 I'd really like to see more precise numbers though: which config gives 40%,
 which config gives 120%, etc.

 I'm not keen on adding a number that has to be tuned though.  I'd really
 like to understand exactly where the performance gain comes from.
 Is it that the requests being sent down are larger, or just better managed,
 or is it some per-stripe_head overhead that is being removed?

 e.g. if we sorted the stripe_heads and handled them in batches of adjacent
 addresses, might that provide the same speed up?
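The batching idea floated above can be sketched like this (names and constants are mine, not from md/raid5.c): sort the pending stripe_heads by sector and group runs of consecutive stripes, so each run can be handled and submitted together much as one larger stripe would be.

```python
STRIPE_SECTORS = 8  # a 4K stripe_head spans 8 x 512-byte sectors

def batch_adjacent(sectors):
    """Sort stripe_head sectors and group runs of adjacent stripes.

    Returns a list of batches; each batch is a run of consecutive
    stripe addresses that could be processed as one unit."""
    batches = []
    for s in sorted(sectors):
        if batches and s == batches[-1][-1] + STRIPE_SECTORS:
            batches[-1].append(s)   # extends the current run
        else:
            batches.append([s])     # starts a new run
    return batches
```

For example, pending stripes at sectors 16, 0, 8 and 40 collapse into one three-stripe batch plus a singleton, which is the same locality a 16k stripe_head would give without changing the allocation size.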

 I'm certain there is room for improving the scheduling of the stripe_heads;
 I'm just not sure what the right approach is.
 I'd like to explore that more before making the stripe_heads bigger.

 Also I really don't like depending on multi-page allocations.  If we were
 going to go this way I think I'd want an array of single pages, not a
 multi-page.
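The allocation distinction can be sketched in a userspace analogue (my illustration, not kernel code): a multi-page buffer needs one higher-order contiguous allocation, which can fail when memory is fragmented, while an array of single pages needs only order-0 allocations, at the cost of reaching page i by indexing rather than by a flat offset.

```python
PAGE_SIZE = 4096

def multipage_stripe(stripe_size):
    """One contiguous allocation (analogue of an order-N page alloc)."""
    return bytearray(stripe_size)

def paged_stripe(stripe_size):
    """An array of independent single pages (order-0 allocations only)."""
    return [bytearray(PAGE_SIZE) for _ in range(stripe_size // PAGE_SIZE)]

def page_at(pages, offset):
    """Translate a flat stripe offset into (page, offset-within-page)."""
    return pages[offset // PAGE_SIZE], offset % PAGE_SIZE
```

So a 16k stripe becomes four 4K pages, and offset 5000 lands at byte 904 of the second page; callers touch one extra indirection but the allocator is never asked for contiguous multi-page memory.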


Thanks,
NeilBrown


