Re: Issues restoring a degraded array

On 08/11/2023 02.14, Lane Brooks wrote:
I have a 14 drive RAID5 array with 1 spare. Each drive is a 2TB SSD.
One of the drives failed. I replaced it, and while it was rebuilding,

Did this stop the rebuilding?

one of the original drives experienced some read errors and seems to
have been marked bad. I have since cloned that drive (first using dd

What does "marked bad" mean?
What does 'cat /proc/mdstat' show?
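For example, something like this would help (assuming the array is /dev/md0 and the members are sdb through sdo - adjust the names to your actual setup):

  cat /proc/mdstat
  mdadm --detail /dev/md0
  mdadm --examine /dev/sd[b-o] | grep -E 'Events|Array State|Device Role'

The event counts from --examine show how far out of sync each member is, which matters before trying any forced assembly.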

and then ddrescue), and it cloned without any read errors. I think the
read errors were coming from a faulty SATA cable.
But now when I run the 'mdadm --assemble --scan' command, I get:
mdadm: failed to add /dev/sdi to /dev/md/0: Invalid argument
mdadm: /dev/md/0 assembled from 12 drives and 1 spare - not enough to
start the array while not clean - consider --force
mdadm: No arrays found in config file or automatically

The sdi drive is the cloned drive. My googling for the "Invalid
argument" error has come up dry. Both the original and the cloned
drive give the same error.

Check the system log. Also, it is possible that the disk now has a different name, so make sure it
really is /dev/sdi by examining the serial number. You can look at /dev/disk/by-id to see what it is called now.
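For example (the device name and the smartctl call are just an illustration - substitute your own):

  dmesg | tail -n 100
  ls -l /dev/disk/by-id/
  smartctl -i /dev/sdi

The by-id symlinks embed the model and serial number, so you can match the cloned drive's serial against whatever sdX name the kernel gave it on this boot.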

HTH

If I try --force, I get the same Invalid argument error but also
'not enough operational devices (2/14 failed)'.

Any suggestions on how to recover from this situation?

Lane

--
Eyal at Home (eyal@xxxxxxxxxxxxxx)



