On Wed, 23 Apr 2014 10:02:00 -0700 Dan Williams <dan.j.williams@xxxxxxxxx> wrote:

> On Wed, Apr 23, 2014 at 12:07 AM, NeilBrown <neilb@xxxxxxx> wrote:
> > On Fri, 11 Apr 2014 17:41:12 +0530 "Manibalan P" <pmanibalan@xxxxxxxxxxxxxx>
> > wrote:
> >
> >> Hi Neil,
> >>
> >> Also, I found the data corruption issue on RHEL 6.5.
> >>
> >> For your kind attention, I up-ported the md code [raid5.c + raid5.h]
> >> from the FC11 kernel to CentOS 6.4, and there is no mis-compare with the
> >> up-ported code.
> >
> > This narrows it down to between 2.6.29 and 2.6.32 - is that correct?
> >
> > So it is probably the change to RAID6 to support async parity calculations.
> >
> > Looking at the code always makes my head spin.
> >
> > Dan: have you any ideas?
> >
> > It seems that writing to a double-degraded RAID6 while it is recovering to
> > a spare can trigger data corruption.
> >
> > 2.6.29 works.
> > 2.6.32 doesn't.
> > 3.8.0 still doesn't.
> >
> > I suspect the async parity calculations.
>
> I'll take a look.  I've had cleanups of that code on my backlog for "a
> while now (TM)".

Hi Dan,
 did you get a chance to have a look?

I've been consistently failing to find anything.

I do have a question, though.  If we set up a chain of async dma handling via:

   ops_run_compute6_2
then
   ops_run_biodrain
then
   ops_run_reconstruct

is it possible for the ops_complete_compute callback set up by
ops_run_compute6_2 to be called before ops_run_reconstruct has been
scheduled or run?

If so, there seems to be some room for confusion over the setting of
R5_UPTODATE on blocks that are being computed and then drained to.
Both will try to set the flag, so it could get set before reconstruction
has run.  I can't see that this would cause a problem, but then I'm not
entirely sure why we clear R5_UPTODATE when we set R5_Wantdrain.

NeilBrown
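For readers less familiar with the raid5.c async offload path, here is a
minimal, stand-alone sketch of the ordering concern raised above.  It is
not the kernel code: the raid5.c function names are only referenced in
comments, everything else (async_engine, complete_compute, the uptodate
variable) is invented for illustration.  It simply models a completion
callback that runs on a worker thread (standing in for a DMA engine) and
may therefore fire before the submitting thread has issued the dependent
drain/reconstruct steps.  Compiles with: gcc -pthread sketch.c

    /* Model only -- not drivers/md/raid5.c. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int uptodate;            /* stands in for R5_UPTODATE on the block */

    static void complete_compute(void)      /* analogue of ops_complete_compute() */
    {
            uptodate = 1;
            printf("compute callback ran; reconstruct may not be submitted yet\n");
    }

    static void *async_engine(void *unused)  /* worker thread standing in for the DMA channel */
    {
            (void)unused;
            complete_compute();     /* "hardware" finishes compute, runs its callback */
            return NULL;
    }

    int main(void)
    {
            pthread_t worker;

            /* analogue of ops_run_compute6_2(): submit the compute with a callback */
            pthread_create(&worker, NULL, async_engine, NULL);

            /* window while the CPU is still building the rest of the chain;
             * the callback above can fire during this window */
            usleep(1000);

            /* analogue of submitting the biodrain + reconstruct steps: by the
             * time we get here, UPTODATE may already have been set */
            printf("submitting drain/reconstruct, uptodate=%d\n", uptodate);

            pthread_join(worker, NULL);
            return 0;
    }

Whether that early setting of the flag matters in the real code is exactly
the open question in the mail; the sketch only shows that the ordering is
possible when the completion callback is driven asynchronously.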