Re: raid 5, drives marked as failed. Can I recover?

Tom wrote:
> Hello,
> 
> I spent a night trying out mdadm --assemble on a virtual machine to
> see how it attempts to fix a raid where 2 or more drives have been
> marked faulty.
> I was quite sure that the drives were fine and that they were wrongly
> marked as bad.
> I think I just have a bad ata controller.
Given that 2 drives "died" in the space of 1 second, I'd agree.

> I used --assemble on the real machine and it seems to have detected the raid again.
> 1 drive was found to be bad and it is recreating it now.
> But my data is there and I can open it.
> I am going to get some DVDs and back all this up before it dies again!

OK, that's good :)

A forced assemble makes md assume that all the disks are good and that all
writes succeeded, i.e. that all is well.
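For the archives, a forced assemble is usually something along these lines
(device names here are purely an example, adjust for your own array):

  mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

--force tells md to ignore the event-count mismatch on the members that were
kicked out, which is exactly why any writes that were in flight when they
dropped may be silently missing afterwards.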

They probably didn't, and it probably isn't. OTOH you've probably only lost a
few hundred bytes out of many, many GB, so it's nothing to panic over.

You should fsck and, ideally, checksum-compare your filesystem against a
backup. I would run a read-only fsck before doing anything else; then, if the
damage turns out to be light, repair it.
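Something like the following (device and mount-point names are just
placeholders):

  # read-only check first, answers "no" to everything and changes nothing
  fsck -n /dev/md0

  # only once you're happy with what it reports, let it repair
  fsck /dev/md0

and to compare against a backup, roughly:

  cd /mnt/raid   && find . -type f -print0 | sort -z | xargs -0 md5sum > /tmp/raid.md5
  cd /mnt/backup && find . -type f -print0 | sort -z | xargs -0 md5sum > /tmp/backup.md5
  diff /tmp/raid.md5 /tmp/backup.md5

which will list any files whose contents differ between the array and the
backup.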

David

