> I have a 14 drive RAID5 array with 1 spare.

Very brave!

> Each drive is a 2TB SSD. One of the drives failed. I replaced
> it, and while it was rebuilding, one of the original drives
> experienced some read errors and seems to have been marked
> bad. I have since cloned that drive (first using dd and then
> ddrescue), and it clones without any read errors.

So one drive is mostly missing and one drive (the cloned one) is behind
on event count.

> But now when I run the 'mdadm --assemble --scan' command, I get:
>
> mdadm: failed to add /dev/sdi to /dev/md/0: Invalid argument
> mdadm: /dev/md/0 assembled from 12 drives and 1 spare - not enough to
> start the array while not clean - consider --force
> mdadm: No arrays found in config file or automatically

The MD RAID wiki has a similar suggestion:
https://raid.wiki.kernel.org/index.php/Assemble_Run

"The problem with replacing a dying drive with an incomplete ddrescue
copy, is that the raid has no way of knowing which blocks failed to
copy, and no way of reconstructing them even if it did. In other words,
random blocks will return garbage (probably in the form of a block of
nulls) in response to a read request.

Either way, now forcibly assemble the array using the drives with the
highest event count, and the drive that failed most recently, to bring
the array up in degraded mode.

mdadm --force --assemble /dev/mdN /dev/sd[XYZ]1"

Note that the suggestion does not use '--scan'.

"If you are lucky, the missing writes are unimportant. If you are happy
with the health of your drives, now add a new drive to restore
redundancy.

mdadm /dev/mdN --add /dev/sdW1

and do a filesystem check fsck to try and find the inevitable
corruption."
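Translated to this situation, that means stopping whatever '--scan' left
half-assembled, comparing event counts across the members, and then
naming the chosen members explicitly in a forced assemble. The following
is only a sketch with made-up device names (/dev/md0 for the array,
/dev/sdb1 through /dev/sdn1 for the twelve surviving members plus the
clone, /dev/sdo1 for the half-rebuilt replacement); substitute the real
names from your own --examine output.

# Stop any half-assembled array left over from the --scan attempt.
mdadm --stop /dev/md0

# Compare event counts; the forced assembly should use the drives with
# the highest (and matching) counts, plus the cloned drive.
mdadm --examine /dev/sd[b-o]1 | grep -E '/dev/sd|Events'

# Forcibly assemble in degraded mode from the named members only,
# leaving out the replacement drive that never finished rebuilding.
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
    /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 \
    /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1

# Confirm the array actually started, even if degraded.
cat /proc/mdstat
mdadm --detail /dev/md0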
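For the re-add and filesystem check the wiki describes, again only a
sketch under the same assumed names, and assuming an ext filesystem
sits directly on the md device (adjust if there is LVM or something
else in between).

# Give the replacement drive back to the array so parity can be
# rebuilt onto it.
mdadm /dev/md0 --add /dev/sdo1

# Watch the rebuild progress.
cat /proc/mdstat

# Check the filesystem read-only first; -n reports problems without
# changing anything, so you can gauge the damage before repairing.
fsck -n /dev/md0

# If the damage looks manageable, run a repairing pass with the
# filesystem unmounted.
fsck /dev/md0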