I see the problem now. And John Robinson was nearly there.

The problem is that after assembling the container /dev/md/imsm, mdadm needs
to assemble the RAID1, but doesn't find the container /dev/md/imsm to
assemble it from.

That is because of the DEVICE partitions line. A container is not a
partition - it does not appear in /proc/partitions. You need

   DEVICE partitions containers

which is the default if you don't have a DEVICE line (and I didn't have a
device line in my testing).

I think all the "wrong uuid" messages were because the device was busy (and
so it didn't read a uuid), probably because you didn't "mdadm -Ss" first.

So just remove the "DEVICE partitions" line, or add " containers" to it, and
all should be happy.

NeilBrown
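(For illustration only: with the stock Debian layout, where the config file is
/etc/mdadm/mdadm.conf, the change described above might look like the sketch
below. The ARRAY lines are assumed placeholders, not values taken from this
system.)

    # /etc/mdadm/mdadm.conf (sketch)
    # Either remove the DEVICE line entirely - scanning both partitions and
    # containers is the default when no DEVICE line is present - or widen it
    # so that containers are scanned as well:
    DEVICE partitions containers

    # Existing ARRAY lines are left as they are; for an IMSM setup they
    # typically look something like this (placeholder UUIDs):
    # ARRAY metadata=imsm UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
    # ARRAY /dev/md/OneTB-RAID1-PV container=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd member=0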
On Mon, 22 Nov 2010 13:07:10 -0500 Mike Viau <viaum@xxxxxxxxxxxxxxx> wrote:

> > > On Thu, 18 Nov 2010 16:38:49 +1100 <neilb@xxxxxxx> wrote:
> > > > > On Thu, 18 Nov 2010 14:17:18 +1100 wrote:
> > > > > > > On Thu, 18 Nov 2010 13:32:47 +1100 wrote:
> > > > > > > 
> > > > > > > ./mdadm -Ss
> > > > > > > 
> > > > > > > mdadm: stopped /dev/md127
> > > > > > > 
> > > > > > > 
> > > > > > > ./mdadm -Asvvv
> > > > > > > 
> > > > > > > mdadm: looking for devices for further assembly
> > > > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > > > Segmentation fault
> > > > > > 
> > > > > > Try this patch instead please.
> > > > > 
> > > > > Applied new patch and got:
> > > > > 
> > > > > ./mdadm -Ss
> > > > > 
> > > > > mdadm: stopped /dev/md127
> > > > > 
> > > > > 
> > > > > ./mdadm -Asvvv
> > > > > mdadm: looking for devices for further assembly
> > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > tst=0x10dd010 sb=(nil)
> > > > > Segmentation fault
> > > > 
> > > > Sorry... I guess I should have tested it myself..
> > > > 
> > > > The
> > > >    if (tst) {
> > > > 
> > > > Should be
> > > > 
> > > >    if (tst && content) {
> > > 
> > > Applied the update and got:
> > > 
> > > mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
> > > mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
> > > mdadm: added /dev/sda to /dev/md/imsm0 as -1
> > > mdadm: added /dev/sdb to /dev/md/imsm0 as -1
> > > mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
> > > mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
> > 
> > So just to clarify.
> > With the Debian mdadm, which is 3.1.4, if you
> > 
> >   mdadm -Ss
> >   mdadm -Asvv
> > 
> > it says (among other things) that /dev/sda has wrong uuid.
> > and doesn't start the array.
> 
> Actually both the compiled and the Debian mdadm do not start the array. Or
> at least, neither creates the /dev/md/OneTB-RAID1-PV device the way running
> mdadm -I /dev/md/imsm0 does.
> 
> You are right about seeing a message on /dev/sda about having a wrong uuid
> somewhere though. I went back to take a look at my output on the Debian
> mailing list and saw that the mdadm output has changed slightly since this
> thread began.
> 
> The old output was copied verbatim at
> http://lists.debian.org/debian-user/2010/11/msg01234.html and says (among
> other things) that /dev/sda has wrong uuid.
> 
> The /dev/sd[ab] "has wrong uuid" messages are missing from the mdadm -Asvv
> output, but....
> 
> ./mdadm -Ivv /dev/md/imsm0
> mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
> mdadm: match found for member 0
> mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices
> 
> I still get this UUID message when using the mdadm -I command.
> 
> I'll attach the output of both mdadm commands above as they run now on the
> system. I also noticed, in the same thread linked above, that in the old
> output I was inquiring why both /dev/sda and /dev/sdb (the drives which
> make up the RAID1 array) do not appear to be recognized as having a valid
> container when one is required.
> 
> What is your take on GeraldCC's (gcsgcatling@xxxxxxxxxxx) suggestion about
> /dev/sd[ab] containing an 8e (LVM) partition type, rather than the fd type
> that denotes RAID autodetect? If that were the magical fix (which I am not
> saying it can't be), why is mdadm -I /dev/md/imsm0 able to bring up the
> array for use as a physical volume for LVM?
> 
> > But with the mdadm you compiled yourself, which is also 3.1.4,
> > if you
> > 
> >   mdadm -Ss
> >   mdadm -Asvv
> > 
> > then it doesn't give that message, and it works.
> 
> Again, actually both the compiled and the Debian mdadm do not start the
> array. Or at least, neither creates the /dev/md/OneTB-RAID1-PV device the
> way running mdadm -I /dev/md/imsm0 does.
> 
> > That is very strange. It seems that the Debian mdadm is broken somehow,
> > but I'm fairly sure Debian hardly changes anything - they are *very* good
> > at getting their changes upstream first.
> > 
> > I don't suppose you have an /etc/mdadm.conf as well as
> > /etc/mdadm/mdadm.conf do you? If you did and the two were different,
> > Debian's mdadm would behave a bit differently to upstream (they prefer
> > different config files) but I very much doubt that is the problem.
> 
> There is no /etc/mdadm.conf on the filesystem, only /etc/mdadm/mdadm.conf.
> 
> > But I guess if the self-compiled one works (even when you take the patch
> > out), then just
> > 
> >   make install
> 
> I wish this was the case...
> 
> > and be happy.
> > 
> > NeilBrown
> > 
> > > Full output at: http://paste.debian.net/100103/
> > > expires: 2010-11-21 06:07:30
> 
> Thanks
> 
> -M
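(Also for illustration: a re-test along the lines discussed in this thread
might look like the following. The device names come from the thread itself;
the last step assumes, as Mike describes, that the RAID1 member is used as an
LVM physical volume, and pvs is only one way to confirm that.)

    # Stop every running array and container, then assemble from the config file:
    mdadm -Ss
    mdadm -Asvv

    # Both the container and the RAID1 member should come up:
    cat /proc/mdstat
    ls -l /dev/md/imsm0 /dev/md/OneTB-RAID1-PV

    # LVM should then see the member array as a physical volume again:
    pvs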