Re: RAID6 - RMW logic

On Thu, 31 Jul 2014 06:43:52 +0000 Markus Stockhausen
<stockhausen@xxxxxxxxxxx> wrote:

> > From: linux-raid-owner@xxxxxxxxxxxxxxx 
> > Sent: Wednesday, 30 July 2014 23:30
> > To: Markus Stockhausen
> > Cc: linux-raid@xxxxxxxxxxxxxxx
> > Subject: Re: RAID6 - RMW logic
> > 
> > On Wed, 30 Jul 2014 20:24:30 +0000 Markus Stockhausen
> > <stockhausen@xxxxxxxxxxx> wrote:
> > 
> > > Hi,
> > >
> > > the last days I tried to understand the RAID6 logic when recalculating
> > > P/Q parity (or syndrome) if only parts of a stripe are updated. As far
> > > ...
> >
> > Please see
> >   http://comments.gmane.org/gmane.linux.raid/42559
> > 
> > Yes, this is something we probably want.
> > The previous effort stalled somehow.  Maybe it just needs someone to start
> > pushing again.
> > 
> > NeilBrown
> 
> Hi,
> 
> thanks for the link. Crawling through the modification I isolated two steps
> that we must achieve first to get it back on track. I'm far away from
> implementing a full patch, so I focus on what I understand.
> 
> 1) Implement a generic switch so we can configure rmw/rcw handling on the
> fly. Without any RAID6 rmw patches yet it will only affect the current
> RAID5 implementation. Later on RAID6 can use it too, and we will be able to
> compare rmw versus rcw performance in all cases. I would name the parameter
> enable_rmw and default it to 1. In the RAID6 case it will be ignored for
> now. A rough sketch of what such a switch could look like follows below.
> 
> -> Ok with that?
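
A minimal sketch of what such a switch could look like, assuming a plain
raid456 module parameter (a per-array sysfs attribute would work just as
well; the names below are only illustrative and not taken from an existing
patch):

#include <linux/module.h>
#include <linux/moduleparam.h>

/* 1 = allow the read-modify-write shortcut, 0 = always reconstruct-write.
 * Ignored by RAID6 until it grows an rmw path of its own. */
static int enable_rmw = 1;
module_param(enable_rmw, int, 0644);
MODULE_PARM_DESC(enable_rmw,
		 "Allow read-modify-write parity updates (0 = always use reconstruct-write)");

The rmw/rcw decision path would then simply skip the rmw branch whenever
enable_rmw is 0.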

No, sorry.  Or not very.

In that email thread I pointed you to I wrote:

- Can you explain *why* rcw is sometimes better than rmw even on large
  arrays? Even a fairly hand-wavy argument would help.  And it would go in
  the comment at the top of the patch that adds enable_rmw.


I see you've posted a patch, but there is no "why".
I don't like adding configuration options.  If there is some clear and easy
to understand benefit, like "this trades throughput against latency", then I
might be able to live with one, because it would be easy to tell people how
to tune it.

Why would I ever disable rmw?  Don't say "choose the option that performs best
for your workload", because that is nearly meaningless: workloads change from
moment to moment.  If rmw is good in some cases and bad in others, then we
should at least make sure we understand why, and then hopefully get the md
driver to auto-detect the different cases.
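
For RAID5 that auto-detection already exists: handle_stripe_dirtying()
compares how many reads each approach would need and picks the cheaper one.
A simplified, self-contained sketch of that comparison (the struct and field
names below are illustrative, not the kernel's own, and the real code also
accounts for blocks already cached as up to date):

/* Per-write view of one stripe, illustration only. */
struct stripe_update {
	int data_disks;		/* data devices in the stripe           */
	int parity_disks;	/* 1 for RAID5, 2 for RAID6             */
	int blocks_written;	/* data blocks this request overwrites  */
};

/* read-modify-write: read the old copy of every block being overwritten
 * plus the old parity block(s), then apply the deltas. */
static int rmw_reads(const struct stripe_update *s)
{
	return s->blocks_written + s->parity_disks;
}

/* reconstruct-write: read every data block that is NOT being overwritten
 * and recompute parity from the full stripe. */
static int rcw_reads(const struct stripe_update *s)
{
	return s->data_disks - s->blocks_written;
}

/* Example: writing 4 of 6 data blocks on an 8-device RAID6 costs
 * 4 + 2 = 6 reads via rmw but only 6 - 4 = 2 reads via rcw, which is
 * one reason rcw can win even for a partial-stripe write. */
static int prefer_rmw(const struct stripe_update *s)
{
	return rmw_reads(s) < rcw_reads(s);
}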

There might be a case for allowing an option like that to support a
"developer only preview" of the code, i.e. add the rmw-for-RAID6 code, find
that it slows down some workloads, get confused about why, ask for help;
people are only happy to test if it is in mainline, so use a developer-only
config option.
Then at least I could tell people when to turn it on: only if you are a
developer.

NeilBrown


> 
> 2) The previous patch was quite tricky in how it handled the P/Q calculation.
> It combined a gen_syndrome run with two extra xor runs, and additionally it
> saved the P/Q deltas in spare pages, maybe to avoid patching the gen_syndrome
> functions. I understand from the discussion that the "subtract" flag and the
> handling of the second spare page are not easy to follow. As explained in my
> last mail I would enhance the syndrome functions with an option to XOR into
> the target P/Q pages instead of only storing the calculated values. This
> would allow us to work the same way the RAID5 shortcut does; see
> ops_run_prexor, which uses the parity page to store the interim result.
> A rough sketch of the idea follows below.
> 
> From a performance perspective I would write separate logic in /lib/raid6 
> by copying the existing functions. Another approach could be to change 
> all gen_syndrome functions to xor into the destination page, and to clear
> the target page in advance for the rcw case. Either way this patch will be
> quite large, but it should be easy to understand and to verify.
> 
> -> Suggestions?
> 
> Best regards.
> 
> Markus
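
To make the xor-into-destination idea from point 2 concrete, here is a
standalone sketch of the rmw shortcut for a single RAID6 data block. The
field arithmetic is the usual GF(2^8) with the 0x11d polynomial; the function
names are made up for this illustration and are not the lib/raid6 API:

#include <stdint.h>
#include <stddef.h>

/* Multiply in GF(2^8) with the RAID6 polynomial x^8+x^4+x^3+x^2+1. */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
	uint8_t p = 0;

	while (b) {
		if (b & 1)
			p ^= a;
		a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
		b >>= 1;
	}
	return p;
}

/* g^e for the RAID6 generator g = 2. */
static uint8_t gf_pow2(unsigned int e)
{
	uint8_t r = 1;

	while (e--)
		r = gf_mul(r, 2);
	return r;
}

/*
 * Read-modify-write update of the data block in syndrome slot 'slot':
 *
 *	P' = P ^ (Dold ^ Dnew)
 *	Q' = Q ^ g^slot * (Dold ^ Dnew)
 *
 * P and Q are xor-ed in place, which is exactly the "xor into the target
 * P/Q pages" behaviour a modified gen_syndrome would provide; none of the
 * other data blocks in the stripe are touched.
 */
static void raid6_rmw_block(unsigned int slot, size_t len,
			    const uint8_t *dold, const uint8_t *dnew,
			    uint8_t *p, uint8_t *q)
{
	uint8_t coef = gf_pow2(slot);
	size_t i;

	for (i = 0; i < len; i++) {
		uint8_t delta = dold[i] ^ dnew[i];

		p[i] ^= delta;
		q[i] ^= gf_mul(coef, delta);
	}
}

In a real patch the lookup tables lib/raid6 already generates for recovery
would of course replace the naive gf_mul loop above.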
