On Thu, Mar 4, 2010 at 3:41 PM, Ken D'Ambrosio <ken@xxxxxxxx> wrote:
> On Thu, March 4, 2010 5:21 pm, Michael Evans wrote:
>> Try providing the output of;
>> for ii in /dev/[sh]d[a-z] ; do parted $ii print ; done
>
> Mea culpa; I'd said:
>
> I went through all 24 permutations of
> mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
> since I wasn't sure if the drive order was significant.  All of them
> "worked," inasmuch as they created /dev/md0, but in all cases it was
> partitionless.
>
> Which I assumed implied that /dev/sd[a-d]2 was valid on all disks, though,
> in hindsight, I wasn't explicit.  So: /dev/sd[a-d]2 exists on all drives
> as partition id "fd" (Linux raid autodetect).  It's /dev/md0 that shows as
> a valid, 4.4 TB disk... with no partition.
>
> -Ken

It sounds like you might be interested in this script:
http://www.linuxfoundation.org/collaborate/workgroups/linux-raid/raid_recovery

It's untested, but it may produce a sequence that shows you valid data.

Also, if you 'partitioned' the resulting raid device, what you most likely
did was use it as an LVM physical volume and then create logical volumes
from it.  At least that's the way I'd do it.  Once you have /dev/md0 (or
whatever) running, try:

vgscan ; vgdisplay

You might see your 'partitions' listed, at which point you can do a
read-only fsck, then a read-only mount, and determine whether they are in
fact whole or whether they are corrupt in that configuration.

If you let mdadm guess where to put the devices based on the stored
metadata, it will probably determine the correct order for you, presuming
you haven't already overwritten that metadata with invalid data.
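To be concrete, that last step is roughly this; untested, so substitute
your real device names:

mdadm --examine /dev/sd[a-d]2             # shows each member's superblock: level, device role/order, event count
mdadm --stop /dev/md0                     # stop any half-assembled array first
mdadm --assemble /dev/md0 /dev/sd[a-d]2   # mdadm places members by superblock role, not command-line order

If the event counts in the --examine output disagree, look closer before
forcing anything.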
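And the LVM check spelled out, again untested; 'yourvg' and 'yourlv' are
placeholders, since I obviously don't know what you named the volume group
and logical volumes:

vgscan                                    # scan all block devices (including /dev/md0) for volume groups
vgdisplay                                 # show any volume groups that were found
vgchange -ay                              # activate them so the logical volumes appear under /dev
lvscan                                    # list the logical volumes; these are your 'partitions'
fsck -n /dev/yourvg/yourlv                # read-only filesystem check (-n makes no changes)
mount -o ro /dev/yourvg/yourlv /mnt       # read-only mount so you can inspect the data

If vgscan comes up empty, the array is probably assembled in the wrong
order, in which case go back to the recovery script above.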