RE: RAID 10 with 2 failed drives

>On 21/09/19 10:19, Liviu Petcu wrote:
>> Yes. Only one of the 2 disks reported by mdadm as failed, is broken. I
>> almost finished making images of all the discs, and for the second
>> "failed" disc ddrescue reported error-free copying. I intend to use the images to
>> recreate the array. I haven't done this before, but I hope I can handle
>> it...

>Could be that failure that knocked the other drive out of the array too.
>Dunno why it should happen with SATA, they're supposedly independent,
>but certainly with the old PATA disks in pairs, a problem with one drive
>could affect the other.

Hello,
You were right, Wol. Only one of the disks was damaged. I reinstalled the 5
drives plus a new one and started the system. I copied the partition table
from one of the remaining drives to the new drive and then added the new
partitions to the 2 arrays. The recovery has started and it seems to be
going almost all right. The RAID 10 array md1 holds Xen storage, and some
VMs with XFS file systems booted up but report errors like "XFS (dm-2):
Internal error XFS_WANT_CORRUPTED_GOTO". That is probably the subject of
another discussion, though...
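
For the archives, the replacement went roughly like this. The device and
array names below are only placeholders, not the exact names on my system,
and the exact tools may differ:

  # copy the partition table from a surviving drive to the new one
  # (sfdisk for MBR; sgdisk --replicate would be the GPT equivalent)
  sfdisk -d /dev/sda | sfdisk /dev/sdf

  # add the new partitions back into the two arrays; md starts rebuilding
  mdadm --add /dev/md0 /dev/sdf1
  mdadm --add /dev/md1 /dev/sdf2

  # watch the recovery progress
  cat /proc/mdstat
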
Thank you all for your support and advice.

Cheers,
Liviu




