Sean Hildebrand wrote:
> I had a batch of disks go bad in my array, and have swapped in new disks.
>
> My array is a five-disk RAID5, each 750GB. Currently I have four disks
> operational within the array, so the array is functionally a RAID0.
> Rebuilds have gone fine, except for the latest disk, which I've tried
> four times.
>
> At 74% into the rebuild, mdadm drops /dev/sdd1 (the spare being
> synced) and /dev/sda1 (a synced disk active in the array) due to a
> read error on /dev/sda1. Checking smartctl, there have been 43 read
> errors on the disk, and they occur in groups.

You have two faulty drives. Pounding on them will only make things worse.
Get two new drives, use ddrescue to copy /dev/sda to one of them, and put
that copy in /dev/sda's place. Then add your second new drive and let the
array rebuild onto it.

> The array contents have been modified since the removal of the older
> disks - so only the four currently-operational disks are synced.
> Fscking the array also has issues past the halfway mark - namely, when
> it gets to a certain point, /dev/sda1 is dropped from the array and
> fsck begins spitting out inode read errors.

Well, once sda is gone you're reading garbage, if the array even stays up.

> Are there any safe ways to remedy my problem? Resizing the array from
> five disks to four and then removing /dev/sda1 is impossible, as for
> the array to be resized, error-free reads of /dev/sda1 would be
> necessary, no?

It depends how well ddrescue does at reading /dev/sda. The sooner you do
it, the more chance you have.

David
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
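P.S. A rough sketch of the ddrescue-then-rebuild sequence David describes.
The device names (/dev/sda for the failing drive, /dev/sde for the new
drive, /dev/md0 for the array) and the map-file path are examples only;
substitute your own, and double-check source and destination before
running, since ddrescue will happily overwrite the wrong disk.

```shell
# First pass: copy everything that reads cleanly, skipping bad areas.
# -f forces writing to a block device; -n skips the slow scrape phase.
# The map file records which regions failed so later runs can resume.
ddrescue -f -n /dev/sda /dev/sde /root/sda.map

# Second pass: retry only the bad regions recorded in the map file,
# up to three times each (-r3).
ddrescue -f -r3 /dev/sda /dev/sde /root/sda.map

# Pull the failing drive, put the copy in its place, then reassemble
# and add the second new drive so md rebuilds parity onto it.
mdadm --assemble --scan
mdadm --add /dev/md0 /dev/sdd1
```

The point of doing ddrescue before touching the array again is that md
kicks a disk on the first read error, whereas ddrescue keeps going and
salvages everything readable around the bad sectors.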