Re: Raid5 2 drive failure (and my spare failed too)

On Mon, Aug 5, 2019 at 6:33 PM Ryan Heath <gaauuool@xxxxxxxxx> wrote:
>
> So this is approximately the response I expected, but I do want to
> pose a few additional queries:
>
> If I read the output correctly, /dev/sdb is the most recent drive to
> fail, and it appears to be only slightly out of sync with the four
> drives that are still functioning. What exactly keeps it from being
> forced back online?
>
> If, as I suspect, /dev/sdb was the last drive to fail... I have looked
> at it via smartctl and the drive still appears to be functional, so
> wouldn't recreating be an option? This is the area where I suspected I
> might need guidance.

Yes, recreating is an option. But you need to be careful. Please consider
Andreas Klauer's suggestion on overlays:

Use overlays for experiments:

https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file
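
The overlay approach on that wiki page boils down to putting a device-mapper
copy-on-write snapshot in front of each member disk, so anything you try
(forced assembly, recreation, fsck) writes to a sparse file instead of the
real drive. A rough sketch of the idea for one disk follows; the device
names, overlay size, and paths here are placeholders for illustration, not
values from this thread, and the commands need root:

```shell
# Sparse file to absorb all writes aimed at /dev/sdb
# (repeat this whole sequence for each array member).
truncate -s 4G /tmp/overlay-sdb
loop=$(losetup -f --show /tmp/overlay-sdb)

# Copy-on-write snapshot: reads fall through to /dev/sdb,
# writes land in the overlay file; the real disk is never modified.
size=$(blockdev --getsz /dev/sdb)
dmsetup create sdb-overlay --table "0 $size snapshot /dev/sdb $loop P 8"

# Now experiment against /dev/mapper/sdb-overlay instead of /dev/sdb, e.g.:
# mdadm --assemble --force /dev/md0 /dev/mapper/sdb-overlay ...

# Tear down when finished:
# dmsetup remove sdb-overlay && losetup -d "$loop"
```

If an experiment goes wrong, you delete the overlay files and start over
with the original data untouched.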

Overlays require working drives, so if any of your drives has partial read
failures, ddrescue it to a new drive first!
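
For reference, imaging a marginal drive with GNU ddrescue is typically done
in two passes driven by a map file, which records which areas were copied
and which failed so runs can be resumed. A minimal sketch, with placeholder
device names (/dev/sdX is the replacement disk, and the source/destination
order matters):

```shell
# Pass 1: copy everything easily readable, skipping bad areas quickly
# (-f: allow writing to a block device, -n: skip the slow scraping phase).
ddrescue -f -n /dev/sdb /dev/sdX /root/sdb.map

# Pass 2: go back and retry only the bad areas recorded in the map file,
# up to 3 times per sector (-r3).
ddrescue -f -r3 /dev/sdb /dev/sdX /root/sdb.map
```

The rescued copy on /dev/sdX is then what you build overlays on and
experiment with, not the failing original.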


