Re: [patch 1/2]RAID5: make stripe size configurable

On Thu, Jul 10, 2014 at 03:39:36PM +1000, NeilBrown wrote:
> On Tue, 8 Jul 2014 09:00:18 +0800 Shaohua Li <shli@xxxxxxxxxx> wrote:
> 
> > 
> > The stripe size is 4k by default. A bigger stripe size is usually considered
> > harmful, because if the IO size is small it causes a lot of unnecessary IO
> > and parity calculation. But if the upper layer always sends full-stripe
> > writes to the RAID5 array, this drawback goes away, and a bigger stripe size
> > actually improves performance in that case because of bigger IOs and fewer
> > stripes to handle. In my full-stripe-write test case, a 16k stripe size
> > improves throughput by 40% - 120%, depending on the RAID5 configuration.
> 
> Hi,
>  certainly interesting.
>  I'd really like to see more precise numbers though.  What config gives 40%,
>  what config gives 120% etc.

A 7-disk RAID5 array gives 40%, and a 16-disk RAID5 array gives 120%. Both
arrays use PCIe SSDs and do full-stripe writes. I also observed CPU usage
drop; for example, in the 7-disk array, CPU utilization drops by about 20%.

On the other hand, small-write performance drops a lot, which isn't a surprise.
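
For reference, a "full stripe write" covers one stripe-sized unit on every
data disk, so the amount of data in one full stripe is (disks - 1) * stripe
size for RAID5. A minimal user-space sketch for the two configurations above
(illustration only, not code from the patch; the 16k figure is the proposed
per-device stripe size):

    /* Illustration only: data carried by one full stripe, assuming one
     * disk's worth of space per stripe holds parity.
     */
    #include <stdio.h>

    int main(void)
    {
        const int stripe_kb = 16;       /* proposed per-device stripe size */
        const int disks[] = { 7, 16 };  /* the two test arrays above */

        for (int i = 0; i < 2; i++) {
            int data_disks = disks[i] - 1;  /* one disk's worth of parity per stripe */
            printf("%2d disks: %d KiB of data per full stripe\n",
                   disks[i], data_disks * stripe_kb);
        }
        return 0;
    }

So the 7-disk and 16-disk arrays carry 96 KiB and 240 KiB of data per full
stripe at 16k, versus 24 KiB and 60 KiB at the default 4k, which is where the
bigger IOs and the smaller number of stripes to handle come from.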

>  I'm not keen on adding a number that has to be tuned though.  I'd really
>  like to understand exactly where the performance gain comes from.
>  Is it that the requests being sent down are larger, or just better managed -
>  or is it some per-stripe_head overhead that is being removed.

From perf, I saw that handle_stripe overhead drops, and some lock contention
is reduced too because we have fewer stripes. From iostat, I saw that request
sizes get bigger.

> 
>  e.g. if we sorted the stripe_heads and handled them in batches of adjacent
>  addresses, might that provide the same speed up?

I tried that before. Increasing the batch size in handle_active_stripes can
increase the request size, but we still have a big overhead handling the
stripes.

>  I'm certain there is room for improving the scheduling of the
>  stripe_heads, I'm just not sure what the right approach is.
>  I'd like to explore that more before making the stripe_heads bigger.
> 
>  Also I really don't like depending on multi-page allocations.  If we were
>  going to go this way I think I'd want an array of single pages, not a
>  multi-page.

Yep, that's easy to fix. I'm using a multi-page allocation in the hope that
the IO segment size is bigger. Maybe it's not worthwhile, considering we have
skip_copy?
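
For what it's worth, an array of single pages would presumably look roughly
like the sketch below instead of one order-N allocation. This is only a rough
sketch of that idea, not code from the patch; the function name and error
handling are made up for illustration:

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Rough sketch: allocate nr_pages independent single pages rather than
     * one higher-order (multi-page) allocation, so nothing depends on the
     * allocator finding physically contiguous memory.
     */
    static int alloc_stripe_pages(struct page **pages, int nr_pages, gfp_t gfp)
    {
        int i;

        for (i = 0; i < nr_pages; i++) {
            pages[i] = alloc_page(gfp);
            if (!pages[i])
                goto out_free;
        }
        return 0;

    out_free:
        while (--i >= 0)
            __free_page(pages[i]);
        return -ENOMEM;
    }

The trade-off noted above is IO segment size: with single pages, bigger
segments depend on the block layer merging the per-page segments, which is
presumably less of a concern when skip_copy is in use.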

Thanks,
Shaohua



