Re: New RAID causing system lockups

On Mon, Sep 20, 2010 at 10:26 PM, Neil Brown <neilb@xxxxxxx> wrote:
> On Wed, 15 Sep 2010 17:49:44 -0400
> Mike Hartman <mike@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>> >> Hmmm..
>> >>  Can you try mounting with
>> >>    -o barrier=0
>> >>
>> >> just to see if my theory is at all correct?
>> >>
>> >> Thanks,
>> >> NeilBrown
>> >>
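For reference, the test Neil suggests amounts to remounting the array's ext4 filesystem with write barriers disabled. A minimal sketch (the device and mount point here are placeholders, not taken from the thread):

```shell
# Remount the array's ext4 filesystem with write barriers disabled.
# /dev/md0 and /mnt/array are hypothetical names; substitute your own.
mount -o remount,barrier=0 /dev/md0 /mnt/array

# Confirm the active mount options include barrier=0 (shown as
# "nobarrier" or "barrier=0" depending on kernel version):
grep md0 /proc/mounts
```

On ext4, `barrier=0` and `nobarrier` are equivalent spellings of the same option.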
>> >
>>
>> Progress report:
>>
>> I made the barrier change shortly after sending my last message (about
>> 40 hours ago). With that in place, I was able to finish emptying one
>> of the non-assimilated drives onto the array, after which I added that
>> drive as a hot spare and started the process to grow the array onto it
>> - the same procedure I've been applying since I created the RAID the
>> other week. No problems so far, and the reshape is at 46%.
>>
>> It's hard to be sure yet that disabling barriers is responsible,
>> though - while the last few lockups were only 1-16 hours apart, I
>> believe the first two had at least two or three days between them.
>> I'll keep the array busy to improve the chances of triggering a
>> lockup - each one so far has occurred during a reshape or a large
>> batch of writes to the array's partition. If I make it another couple
>> of days (meaning time for this reshape to complete, another drive to
>> be emptied onto the array, and another reshape at least started) I'll
>> be pretty confident the problem has been identified.
>
> Thanks for the update.
>
>>
>> Assuming the barrier is the culprit (and I'm pretty sure you're right)
>> what are the consequences of just leaving it off? I gather the idea of
>> the barrier is to prevent journal corruption in the event of a power
>> failure or other sudden shutdown, which seems pretty important, but it
>> also doesn't seem like it was enabled by default in ext3/4 until 2008,
>> which makes it seem less critical.
>
> Correct.  Without barriers the chance of corruption during a power failure
> is higher.  I don't really know how much higher; it depends a lot on the
> filesystem design and the particular implementation.  I think ext4 tends to
> be fairly safe - after all, some devices don't support barriers and it has
> to make a best effort on those too.
>
>>
>> Even if the ultimate solution for me is to just leave it disabled I'm
>> happy to keep trying patches if you want to get it properly fixed in
>> md. We may have to come up with an alternate way to work the array
>> hard enough to trigger the lockups though - my last 1.5TB drive is
>> what's being merged in now. After that completes I only have one more
>> pair of 750GBs (that will have to be shoehorned in using RAID0 again).
>> I do have a single 750GB left over, so I'll probably find a mate for
>> it and get it added too. After that we're maxed out on hardware for a
>> while.
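Pairing two 750GB drives into a single RAID0 member and then folding it into the main array, as Mike describes, might look like the following (all device names and the device count are hypothetical):

```shell
# Combine the two 750GB drives into one ~1.5TB RAID0 device.
# /dev/md1, /dev/sdY and /dev/sdZ are placeholder names.
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdY /dev/sdZ

# Then add the composite device to the main array and grow onto it,
# the same way as for a single drive (7 is a hypothetical new total):
mdadm --add /dev/md0 /dev/md1
mdadm --grow /dev/md0 --raid-devices=7 --backup-file=/root/md0-grow.backup
```

The trade-off of this approach is that the RAID0 pair fails as a unit: losing either 750GB drive takes out the whole composite member.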
>>
>> Mike
>
> I'll stare at the code a bit more and see if anything jumps out at me.
>
> Thanks,
> NeilBrown
>
>

I've just finished my last grow-and-copy with no problems. The only
drive that's not part of the array now is the leftover 750GB, which is
now empty. I haven't experienced any further lockups, so your barrier
diagnosis seems to be spot on. I'm planning to leave that option
turned off, but as I said, I'm happy to test any patches you come up
with. Thanks for all your help.
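To keep barriers disabled across reboots, the option can be made persistent in /etc/fstab. A sketch of such an entry (the device, mount point, and trailing fields are placeholders, not taken from the thread):

```
# /etc/fstab -- hypothetical entry for the array's filesystem
/dev/md0  /mnt/array  ext4  defaults,barrier=0  0  2
```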

Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

