On Wed, 24 Apr 2013 14:29:39 +0200 (CEST) Roy Sigurd Karlsbakk <roy@xxxxxxxxxxxxx> wrote:

> > Sorry, that should be --name, not --path.
>
> roy@raidtest:~$ cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md0 : active raid5 vdc[1] vdb[0] vdd[3]
>       2096000 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
>
> md1 : active raid5 vdf[1] vdg[3] vde[0]
>       2096000 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
>
> unused devices: <none>
> roy@raidtest:~$ udevadm info --query=property --name=/dev/md0
.....
> ID_FS_TYPE=linux_raid_member
....
> roy@raidtest:~$ udevadm info --query=property --name=/dev/md1
...
> ID_FS_TYPE=linux_raid_member
....

So that all looks good.

> > Can you put your initrd (or initramfs or whatever Ubuntu calls it)
> > somewhere that I can download it? Or just email it to me, it's
> > probably only about 10Meg.
>
> Here are the ones for 12.04 x86 and 13.04 amd64. The same happens on both.
>
> 12.04 http://karlsbakk.net/tmp/initrd.img-3.2.0-40-generic-pae
> 13.04 http://karlsbakk.net/tmp/initrd.img-3.8.0-19-generic

Strangely, etc/mdadm/mdadm.conf in the 12.04 image lists the 2 RAID5s and
the RAID0, but the file in the 13.04 image lists only the 2 RAID5s.  I
don't think that makes a difference, but it is strange.

scripts/local-premount/mdadm contains "wait_for_udev", so that should be
enough...

Wait a bit.  I just noticed that 64-md-raid.rules only runs

   /sbin/mdadm --incremental $tempnode

on ACTION=="add".  The RAID5 arrays aren't ready immediately, so you need
to catch ACTION=="change" as well.  Yes, that's horrible and inconsistent,
but that is life.

It would be worth adding an extra line:

   ACTION=="change", RUN+="/sbin/mdadm --incremental $tempnode"

I'm not sure how to do that.  Maybe just modify the file in the root
filesystem and run mkinitrd or mkinitramfs or whatever the command is
(some sketches follow below).

Though if that is the problem, then I cannot see how just setting a
rootdelay would help.

NeilBrown
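
P.S. Some untested sketches, in case they help.  To look inside one of
those images yourself - assuming it is a plain gzip-compressed cpio
archive, as Ubuntu initramfs images of this vintage are (newer images may
have microcode prepended and need extra handling) - something like:

   # unpack an initramfs image for inspection; adjust the image name
   mkdir /tmp/initrd
   cd /tmp/initrd
   zcat /boot/initrd.img-3.8.0-19-generic | cpio -idm

   # the two files discussed above
   cat etc/mdadm/mdadm.conf
   cat scripts/local-premount/mdadm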
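
The rule change would go into 64-md-raid.rules.  The stock "add" rule
below is paraphrased from memory, so check it against what the file in
your image actually contains; only the "change" line is new:

   # /lib/udev/rules.d/64-md-raid.rules (sketch; the stock file differs
   # in detail between udev versions)
   SUBSYSTEM!="block", GOTO="md_end"
   ENV{ID_FS_TYPE}!="linux_raid_member", GOTO="md_end"

   # stock rule: incrementally assemble on "add" only
   ACTION=="add",    RUN+="/sbin/mdadm --incremental $tempnode"
   # suggested extra rule: md component devices often only become
   # usable after a "change" event, so catch that too
   ACTION=="change", RUN+="/sbin/mdadm --incremental $tempnode"

   LABEL="md_end"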
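
And on Ubuntu the command for rebuilding the initramfs, after editing the
copy of the rules file in the root filesystem, is update-initramfs:

   # regenerate the boot image so the edited rule ends up in it
   sudo update-initramfs -u           # current kernel only
   sudo update-initramfs -u -k all    # or: all installed kernels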