On 23/09/19 19:02, Liviu Petcu wrote:
>> On 21/09/19 10:19, Liviu Petcu wrote:
>>> Yes. Only one of the 2 disks reported by mdadm as failed is broken. I
>>> have almost finished making images of all the discs, and for the second
>>> "failed" disc ddrescue reported error-free copying. I intend to use the
>>> images to recreate the array. I haven't done this before, but I hope I
>>> can handle it...
>
>> Could be that failure that knocked the other drive out of the array too.
>> Dunno why it should happen with SATA, they're supposedly independent,
>> but certainly with the old PATA disks in pairs, a problem with one drive
>> could affect the other.
>
> Hello,
> You were right Wol. Only one of the disks was damaged. I reinstalled the
> 5 drives plus a new one and started the system. I copied the partition
> table from one drive to the new drive and then added the partitions to
> the 2 arrays. The recovery has started. It seems to be almost all right.
> The raid 10 array md1 is Xen storage, and some VMs that have an XFS file
> system booted up but report errors like "XFS (dm-2): Internal error
> XFS_WANT_CORRUPTED_GOTO". But this is probably the subject of another
> discussion...
> Thank you all for your support and advice.

Couple of points. Seeing as you're running mirrors, read up on dm-integrity. It's good for any raid but especially mirrors - it will checksum writes, so if data gets corrupted (it does happen) it will return a read error rather than duff data.

And secondly, I think it's okay with MBR, but definitely don't use dd to copy a GPT partition table. GPT has uuids, and the same uuid on more than one disk or partition will lead to chaos ... :-)

Cheers,
Wol
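On the dm-integrity point above, a minimal sketch of the standalone setup under mdraid, assuming you're building a fresh mirror - the device names (/dev/sdX1, /dev/sdY1, /dev/md2) are just placeholders, integritysetup ships with cryptsetup, and the format step wipes the partitions, so only do it on members you're rebuilding anyway:

    # lay down a dm-integrity superblock on each member (DESTRUCTIVE)
    integritysetup format /dev/sdX1
    integritysetup format /dev/sdY1

    # open them; the checksumming devices appear under /dev/mapper
    integritysetup open /dev/sdX1 int-sdX1
    integritysetup open /dev/sdY1 int-sdY1

    # build the mirror on top of the integrity devices, not the raw partitions
    mdadm --create /dev/md2 --level=1 --raid-devices=2 \
        /dev/mapper/int-sdX1 /dev/mapper/int-sdY1

You'd also need something to open the integrity devices at boot before md assembles the array - depending on the distro that's systemd's integritytab or a local script.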
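And on copying the partition table without duplicating uuids: sgdisk can replicate a GPT and then randomise the GUIDs on the copy. Roughly, assuming /dev/sda is a surviving disk, /dev/sdf is the new one, and the md devices and partition numbers are just examples:

    # copy the GPT from /dev/sda onto /dev/sdf
    sgdisk --replicate=/dev/sdf /dev/sda

    # give the new disk and its partitions fresh GUIDs so nothing clashes
    sgdisk --randomize-guids /dev/sdf

    # then add the new partitions back into the arrays
    mdadm --manage /dev/md0 --add /dev/sdf1
    mdadm --manage /dev/md1 --add /dev/sdf2

Note the slightly odd argument order: the disk named after --replicate is the one that gets written, the trailing positional disk is the source.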