Hello RAID mailing list,

We recently ran into an issue with a degraded RAID array. The drive in question could still be read, but writes were failing because it had run out of spare sectors to remap. --replace wasn't an option with our version of mdadm, so we tried removing the faulty partition and adding a new one (a partition on another drive already in the array). That failed when a bad block was discovered. We then tried re-adding the faulty drive, but it died completely.

One of our admins went into the data center and unfortunately unplugged the wrong drive, then plugged it back in. Oddly enough, the device name for that drive's partition did not change, and everything looked okay. A bit later, the new partition we had tried to add earlier was part of the array again and a resync was in progress, possibly because its partition type had been set to 0xfd ("Linux raid autodetect") earlier. That partition came back under a new device name, unlike the one that had remained in the array.

We decided to shut the machine down so we could add SATA cables and attach more drives to help with recovery. However, the array did not come back after the reboot. We are unable to start it with the --run option using either sdd1 or sdd2 (sdd2 is the partition we tried to add earlier). sdd1 cannot be added to the array at all, and no bitmap is shown for that device. sdd2 is marked with (R), which I am told means "replacing", but I don't know more than that.

md127 : inactive sde1[0] sdg1[3] sdd2[5](R)
      702896904 blocks super 1.2

The event count on sdd1, which was originally in the array, is lower than on sde1, sdg1 and sdd2, which all have the same event count.

Is it safe to attempt to re-assemble the degraded array with three members, using --force on sde1, sdg1 and sdd2?

Thanks for your help,
Andrew
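
P.S. For reference, the commands I have in mind are roughly the following (not yet run; this assumes the inactive md127 would first need to be stopped, and that the device names above are still current):

    mdadm --stop /dev/md127
    mdadm --assemble --force /dev/md127 /dev/sde1 /dev/sdg1 /dev/sdd2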