Re: RAID 5, drives marked as failed. Can I recover?

Hello,

I spent a night trying out mdadm --assemble on a virtual machine to
see how it attempts to fix a RAID where 2 or more drives have been
marked faulty.
I was quite sure that the drives were fine and had been wrongly
marked as bad.
I think I just have a bad ATA controller.
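For anyone who wants to reproduce the test, this is roughly what I did
in the VM (device names are from my test setup and purely illustrative):

    # create a scratch 7-drive RAID5, fail two members, then force-assemble
    mdadm --create /dev/md0 --level=5 --raid-devices=7 /dev/vd[a-g]
    mdadm /dev/md0 --fail /dev/vda /dev/vdb    # two failures kill a RAID5
    mdadm --stop /dev/md0
    mdadm --assemble --force /dev/md0 /dev/vd[a-g]   # brings it back, possibly degraded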

I used --assemble on the real machine and it detected the RAID again.
One drive was found to be bad and the array is rebuilding it now.
But my data is there and I can open it.
I am going to get some DVDs and back all this up before it dies again!
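Roughly what that looked like (array and member names illustrative):

    mdadm --assemble --force /dev/md0 /dev/sd[a-g]1
    cat /proc/mdstat             # shows the resync/rebuild progress
    mdadm --detail /dev/md0      # confirms which member is rebuilding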


Regards and thanks for your help!


2009/1/30 David Greaves <david@xxxxxxxxxxxx>:
> Justin Piszcz wrote:
>> Try to assemble the array with --force.
> hmmmm? not yet...
>
>
>> On Thu, 29 Jan 2009, Tom wrote:
>>
>>> Hello,
>>>
>>> 2 drives have failed on my raid5 setup and I need to recover the data
>>> on the raid.
>>> I am sure that the drives still works or at least one of them still
>>> works.
>>>
>>> How do I recover my drives?
>
> How important is it?
> The more important the data, the more you should reduce the risk of a subsequent
> failure.
> If you "don't care", then we just force it back together and cross our fingers.
> Otherwise we run tests on all the drives before trying a restore.
> I'd say to run these tests on each drive; as a minimum, do the first test on the
> failed drives. The more paranoid you are, the more of these tests you should run,
> and you should include the non-failed drives too (to ensure they don't fail
> during recovery). Example invocations follow the list:
> * smartctl -t short
> * smartctl -t long
> * badblocks
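> For example, something like this per drive (device name illustrative;
> badblocks in its default mode is read-only, so it won't touch the data):
>
>   smartctl -t short /dev/sda
>   smartctl -t long /dev/sda        # takes hours on a large drive
>   smartctl -l selftest /dev/sda    # read the results once a test finishes
>   badblocks -sv /dev/sda           # read-only surface scan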
>
> What happened? Smoke?
> Are the drives actually faulty (what does smartctl -a tell you)?
> Did the cables just wiggle? Is the controller broken?
> You probably don't know :)
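> To get the full SMART picture:
>
>   smartctl -a /dev/sda    # health status, attributes, error and self-test logs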
>
> I would obtain replacements for the failed drives and use ddrescue to copy each
> failed drive onto a replacement.
> Then install the copies alongside the surviving drives and begin recovery.
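> Something along these lines with GNU ddrescue (device names here are
> placeholders; triple-check source vs. target, and keep the map file so an
> interrupted run can be resumed):
>
>   ddrescue -f -n /dev/sdFAILED /dev/sdNEW rescue.map   # fast first pass, skipping bad areas
>   ddrescue -f -r3 /dev/sdFAILED /dev/sdNEW rescue.map  # then retry the bad areas three times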
>
>>> I can't mount the RAID any more, and I am missing a hard drive when I
>>> run ls /dev/sd?
>>> I have 7 drives in my RAID.
> You say you have 7 drives and 2 are failed, so 5 should still be visible.
> And yet I see only 4 drives, not 5.
>
> Where is sdg?
>
>>> Here is the output of /var/log/messages at the following link:
>>>
>>> http://matx.pastebin.com/m35423452
>
> Jan 29 21:14:11  sda died
> Jan 29 21:14:12  sdb died
>
>>>
>>> Also, some more information:
> Also need:
>  Distro
>  Kernel version
>  Mdadm version
>  mdadm --examine for each available component (a one-liner for this is below).
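> For example (adjust the glob to match your actual member devices):
>
>   for d in /dev/sd[a-g]1; do mdadm --examine "$d"; done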
>
> David
>
> --
> "Don't worry, you'll be fine; I saw it work in a cartoon once..."
>
