Re: Weird: Degrading while recovering raid5


 



On 02/11/2015 05:12 PM, Kyle Logue wrote:
> Good news, Phil. On the hypothesis that the new disk I added never
> fully replaced my sde, I omitted it from my assemble. The array
> went full UUUUU, then I echo'd check > /sys/block/md0/md/sync_action
> 
> Much later it kicked out the faulty disk (previously sdc) and now I
> have _UUUU.
> 
> So hopefully this is the final question: should I just evacuate as
> much data as possible immediately, or try to add another spare and
> rebuild?

So long as you haven't mounted it yet, I suggest you do another forced
assembly to get back to UUUUU, then kick off another check.  When many
UREs are allowed to accumulate, md can hit its read error rate limit
and kick the drive.  As long as the array stays unmounted, you can
repeat this cycle until a check runs all the way through.
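The repeat-until-clean cycle might look like this.  A sketch only:
/dev/md0 and the sd[abcde]1 member names are placeholders for your
actual array and partitions, so adjust before running anything.

```shell
# Stop the degraded array, then force-assemble with all five
# original members, including the one md kicked out.
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[abcde]1

# Kick off another check scrub and watch its progress.
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat

# If a member gets kicked again before the check finishes,
# stop, force-assemble, and check again.
```

Each pass rewrites the sectors it could recover, so fewer UREs should
remain on every iteration.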

But, you also had misaligned partitions.  If sdcN is one of them, the
above won't work, and you should get your backups ASAP, then build a
new array from scratch.
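If it helps, you can confirm which member partitions are misaligned
before deciding.  A sketch assuming GNU parted is installed and the
members are partition 1 on each disk:

```shell
# Report whether partition 1 on each member disk meets the
# device's optimal alignment; prints "aligned" or "not aligned".
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    echo -n "$d: "
    parted "$d" align-check optimal 1
done
```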

If you do succeed in completing a check scrub, you can use --replace to
migrate the array onto properly aligned partitions, one member at a time.
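One member's migration might look like this (a sketch; sdc1 as the
misaligned member and sdf1 as a hypothetical new, properly aligned
partition -- substitute your real devices):

```shell
# Add the new, correctly aligned partition as a spare.
mdadm /dev/md0 --add /dev/sdf1

# Ask md to copy sdc1's data onto the spare.  Unlike fail-and-rebuild,
# the old member stays active during the copy, so redundancy is kept.
mdadm /dev/md0 --replace /dev/sdc1 --with /dev/sdf1

# After the copy completes, sdc1 is marked faulty; remove it.
mdadm /dev/md0 --remove /dev/sdc1
```

Repeat for each misaligned member, waiting for each copy to finish
(watch /proc/mdstat) before starting the next.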

Phil
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



