Re: raid5 stuck in degraded, inactive and dirty mode

On Thursday January 10, cat@xxxxxxxxxx wrote:
> On Wed, Jan 09, 2008 at 07:16:34PM +1100, CaT wrote:
> > > But I suspect that "--assemble --force" would do the right thing.
> > > Without more details, it is hard to say for sure.
> > 
> > I suspect so as well, but throwing caution to the wind irks me with
> > regard to this raid array. :)
> 
> Sorry. Not to be a pain, but considering the previous email with all
> the examine dumps, etc., would the above be the way to go? I just don't
> want to have missed something and bugger the array up totally.

Yes, definitely.

The superblocks look perfectly normal for a single drive failure
followed by a crash.  So "--assemble --force" is the way to go.
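
For example (just a sketch; I'm assuming the array is /dev/md0 and the
members are /dev/sd[abcd]1; substitute your actual devices):

    mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    cat /proc/mdstat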

Technically you could have some data corruption if a write was in
flight at the time of the crash.  In that case the parity block of that
stripe may not match the data blocks, so the data reconstructed for the
missing device could be wrong.
This is why you are required to use "--force": to confirm that you are
aware that there could be a problem.
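
Once it assembles and is running degraded, the usual next step is to
re-add the drive that was kicked out so the array rebuilds; again just
a sketch, assuming the failed member was /dev/sdd1:

    mdadm /dev/md0 --add /dev/sdd1
    cat /proc/mdstat    # watch the recovery progress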

It would be worth running "fsck" just to be sure that nothing critical
has been corrupted.  Also, if you have a recent backup, I wouldn't
recycle it until I was fairly sure that all your data was really safe.
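
A sketch of a careful way to do that check, assuming an ext3 filesystem
directly on /dev/md0 (adjust the device and fsck flavour to suit, and
keep the filesystem unmounted):

    fsck -n /dev/md0    # read-only pass: report problems, change nothing
    fsck /dev/md0       # then a real repair pass if anything turns up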

But in my experience the chance of actual data corruption in this
situation is fairly low.

NeilBrown
