On Wed, 15 Sep 2010 17:49:44 -0400 Mike Hartman <mike@xxxxxxxxxxxxxxxxxxxx> wrote:

> >> Hmmm..
> >>  Can you try mounting with
> >>    -o barrier=0
> >>
> >> just to see if my theory is at all correct?
> >>
> >> Thanks,
> >> NeilBrown
> >>
> >
> > Progress report:
>
> I made the barrier change shortly after sending my last message (about
> 40 hours ago). With that in place, I was able to finish emptying one
> of the non-assimilated drives onto the array, after which I added that
> drive as a hot spare and started the process to grow the array onto it
> - the same procedure I've been applying since I created the RAID the
> other week. No problems so far, and the reshape is at 46%.
>
> It's hard to be positive that the barrier deactivation is responsible
> yet, though - while the last few lockups have been only 1-16 hours
> apart, I believe the first two had at least 2 or 3 days between them.
> I'll keep the array busy to improve the chances of a lockup, though -
> each one so far has been during a reshape or a large batch of writing
> to the array's partition. If I make it another couple of days (meaning
> time for this reshape to complete, another drive to be emptied onto
> the array, and another reshape at least started) I'll be pretty
> confident the problem has been identified.

Thanks for the update.

> Assuming the barrier is the culprit (and I'm pretty sure you're right),
> what are the consequences of just leaving it off? I gather the idea of
> the barrier is to prevent journal corruption in the event of a power
> failure or other sudden shutdown, which seems pretty important, but it
> also doesn't seem to have been enabled by default in ext3/4 until
> 2008, which makes it seem less critical.

Correct. Without the barriers the chance of corruption during a power
failure is higher. I don't really know how much higher - it depends a
lot on the filesystem design and the particular implementation. I think
ext4 tends to be fairly safe; after all, some devices don't support
barriers and it has to make a best effort on those too.

> Even if the ultimate solution for me is to just leave it disabled, I'm
> happy to keep trying patches if you want to get it properly fixed in
> md. We may have to come up with an alternate way to work the array
> hard enough to trigger the lockups, though - my last 1.5TB drive is
> what's being merged in now. After that completes I only have one more
> pair of 750GB drives (which will have to be shoehorned in using RAID0
> again). I do have a single 750GB left over, so I'll probably find a
> mate for it and get it added too. After that we're maxed out on
> hardware for a while.
>
> Mike

I'll stare at the code a bit more and see if anything jumps out at me.

Thanks,
NeilBrown
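
For reference, the change being tested above amounts to disabling write
barriers on the mounted filesystem. A minimal sketch of what that looks
like, assuming an ext4 filesystem on /dev/md0 mounted at /mnt/array
(both placeholder names, not taken from this thread):

    # Remount the live filesystem with write barriers disabled
    # (device and mount point are hypothetical examples):
    mount -o remount,barrier=0 /mnt/array

    # Or make it persistent across reboots with an /etc/fstab entry:
    #   /dev/md0  /mnt/array  ext4  defaults,barrier=0  0  2

    # Re-enable barriers once testing is done:
    mount -o remount,barrier=1 /mnt/array

As the exchange above notes, running this way trades power-failure
safety for stability, so it is a diagnostic measure rather than a
recommended long-term configuration.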
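
The "empty a drive, add it as a hot spare, grow the array onto it"
procedure Mike describes, including pairing two 750GB drives with RAID0
so they can stand in for a 1.5TB member, would look roughly like the
following with mdadm. This is a sketch only - the array name /dev/md0,
the member devices, and the member count are assumptions, not details
from the thread:

    # Pair two 750GB drives into a RAID0 device that presents as a
    # single 1.5TB member (hypothetical device names):
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdx1 /dev/sdy1

    # Add the new device to the main array as a hot spare:
    mdadm --add /dev/md0 /dev/md1

    # Grow the array onto the spare (here from 5 to 6 members);
    # this starts the long-running reshape:
    mdadm --grow /dev/md0 --raid-devices=6

    # Watch reshape progress (the "46%" mentioned above):
    cat /proc/mdstat

    # After the reshape completes, expand the filesystem into the
    # new space (ext3/ext4 support online resizing):
    resize2fs /dev/md0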