On Mar 9, 2016, at 2:27 PM, Andreas Klauer <Andreas.Klauer@xxxxxxxxxxxxxx> wrote:
>
> On Tue, Mar 08, 2016 at 10:20:17PM -0500, Dan Russell wrote:
>> The partially-rebuilt drive is sdk, the original “failed” drive is sdag
>
> Best to leave both out if one has outdated and the other only half
> the content…

I generally agree, but in my case the filesystem wasn’t mounted (it was commented out of fstab, the movers dropped the system, I booted it, and the RAID failed before I ever mounted the filesystem), so I’m OK with the risk.

>> However fdisk and mdadm are reporting the array is 17.6TB in size,
>> whereas it should be 66TB (24 3TB HDDs RAID6).
>
> I reproduced your commands using tmpfs based loop devices and it gives me
> the same problem. The RAID size is only 16 TiB. It seems to be hitting a
> limit somewhere.
>
> Your /dev/mapper/sdx are snapshot/overlays, I hope?

Yes. I can’t recommend the overlay_setup approach on the Wiki highly enough.

> DDF metadata seems to be located at the end of the device, so you could try
> your luck with mdadm 1.0 metadata instead; that gives me a RAID of a size
> closer to home.

This got me closer, but the LVM2 label was still missing. When I’d previously assembled the RAID in the container, I noticed it used algorithm 10, whereas with this approach it was 2. I switched it to 10 and my array is back. fsck (xfs_repair -n, really) says the FS is clean, and random poking at files seems to back that up.

I have a backup, of course, but doing a disk-to-disk verify/recovery is going to be so much quicker.

Thank you so much for your help, Andreas, and thanks to all the contributors to the “RAID_Recovery” and “Recovering_a_failed_software_RAID” Wiki pages.

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
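[Postscript for archive readers: the size mismatch in the thread is easy to sanity-check. 16 TiB expressed in decimal bytes is almost exactly the 17.6 TB that fdisk and mdadm reported, while 24 drives of 3 TB in RAID6 should give 22 data drives' worth of space. A quick check with the numbers from the thread:]

```shell
# Expected usable capacity: 24 drives, RAID6 spends 2 drives' worth on parity
drives=24
per_drive_tb=3
usable_tb=$(( (drives - 2) * per_drive_tb ))
echo "expected: ${usable_tb} TB"        # 66 TB

# The reported 16 TiB, converted to bytes, matches the 17.6 TB figure
tib16_bytes=$(( 16 * 1024 * 1024 * 1024 * 1024 ))
echo "reported: ${tib16_bytes} bytes"   # 17592186044416, i.e. ~17.6 TB
```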
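[Postscript for archive readers: the overlay setup referenced in the thread is what makes this kind of experimentation safe. Each member disk is shadowed by a device-mapper snapshot backed by a sparse file, so repeated mdadm --create attempts never write to the real disks. A minimal sketch, not the exact Wiki script; the helper name overlay_table and the file paths are placeholders of mine:]

```shell
# Build the device-mapper "snapshot" table line for one member disk.
# Table format: <start> <length> snapshot <origin> <cow-device> P <chunksize>
# $1 = real device, $2 = its size in 512-byte sectors,
# $3 = loop device backing the sparse copy-on-write file
overlay_table() {
    echo "0 $2 snapshot $1 $3 P 8"
}

# Usage (requires root); all writes land in /tmp/sdk.ovr, not on /dev/sdk:
#   truncate -s 4G /tmp/sdk.ovr
#   loop=$(losetup -f --show /tmp/sdk.ovr)
#   size=$(blockdev --getsz /dev/sdk)
#   dmsetup create sdk --table "$(overlay_table /dev/sdk "$size" "$loop")"
```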
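[Postscript for archive readers: putting the pieces together, the fix amounted to re-creating the array with 1.0 metadata (stored at the end of the device, like DDF) and the DDF-style RAID6 layout. In md terms, algorithm 2 is the default left-symmetric layout and algorithm 10 is the DDF "N continue" layout, which mdadm names ddf-N-continue to the best of my knowledge. A hedged sketch of the command shape only; device names and order are placeholders of mine, the real order must match the original array, and since --create rewrites metadata this must only ever be run against overlays:]

```shell
# Build (but do not run) the re-create command so it can be reviewed first.
# --assume-clean prevents a destructive resync of the guessed geometry.
recreate_cmd() {
    echo "mdadm --create /dev/md0 --assume-clean --metadata=1.0" \
         "--level=6 --raid-devices=24 --layout=ddf-N-continue" "$@"
}

recreate_cmd /dev/mapper/sdk /dev/mapper/sdl   # ...all 24 overlays, in order
```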