Michal Soltys wrote:
nterry wrote:
Great - all working. Then I rebooted and was back to square one, with
only 3 drives in /dev/md0 and /dev/sdb in /dev/md_d0. So I still don't
understand where /dev/md_d0 is coming from, and although I know how to
get things working after a reboot, this is clearly not a long-term
solution...
My blind shot is that your distro's udev rules are doing mdadm
--incremental assembly and picking up sdb as part of a non-existent
array from long ago (a leftover from old experimentation?). Or
something else is doing so.
What does mdadm -Esvv /dev/sdb show?
Add
DEVICE /dev/sd[bcde]1
at the top of your mdadm.conf - it should stop --incremental from
picking up sdb. Assuming that's the cause of the problem.
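The DEVICE line works because mdadm.conf device patterns are shell-style
globs; a small sketch (the extra device names here are purely
illustrative) of why /dev/sd[bcde]1 matches the member partitions but
not the bare disk /dev/sdb:

```sh
# mdadm.conf DEVICE patterns follow shell glob rules. /dev/sd[bcde]1
# matches partition 1 on disks b through e, but never a whole-disk
# node, so --incremental stops seeing the stale superblock on /dev/sdb.
for dev in /dev/sdb /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1; do
    case "$dev" in
        /dev/sd[bcde]1) echo "$dev: scanned" ;;
        *)              echo "$dev: ignored" ;;
    esac
done
```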
Also note that FC9 might be trying to assemble the array during the
initramfs stage (assuming it uses one) and having problems there. I've
never used Fedora, so it's hard for me to tell - but definitely take a
look there, particularly at the udev and mdadm parts of things.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
[root@homepc ~]# mdadm -Esvv /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 0.90.02
UUID : c57d50aa:1b3bcabd:ab04d342:6049b3f1
Creation Time : Thu Dec 15 15:29:36 2005
Raid Level : raid5
Used Dev Size : 245111552 (233.76 GiB 250.99 GB)
Array Size : 245111552 (233.76 GiB 250.99 GB)
Raid Devices : 2
Total Devices : 3
Preferred Minor : 0
Update Time : Wed Apr 5 13:43:20 2006
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Checksum : 2bd59790 - correct
Events : 1530654
Layout : left-symmetric
Chunk Size : 128K
Number Major Minor RaidDevice State
this 2 22 0 2 spare
0 0 8 1 0 active sync /dev/sda1
1 1 8 17 1 active sync /dev/sdb1
2 2 22 0 2 spare
[root@homepc ~]#
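For what it's worth, the examine output above shows a stale 0.90
superblock on the raw disk /dev/sdb (a 2-device raid5 last updated in
2006), which is what --incremental keeps assembling as /dev/md_d0. A
hedged sketch of a longer-term fix - destructive, so verify both
examine outputs first - would be to erase that stale metadata rather
than just filtering it out:

```sh
# Check that the stale metadata really is on the raw disk...
mdadm --examine /dev/sdb
# ...and that the live array's metadata lives on the partition:
mdadm --examine /dev/sdb1
# Only then wipe the leftover whole-disk superblock so nothing
# (udev, --incremental, initramfs) can ever assemble it again:
mdadm --zero-superblock /dev/sdb
```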
I added the DEVICE /dev/sd[bcde]1 line to mdadm.conf and that appears
to have fixed the problem - 2 reboots and it worked both times.
I also note now that:
[root@homepc ~]# mdadm --examine --scan
ARRAY /dev/md0 level=raid5 num-devices=4
UUID=50e3173e:b5d2bdb6:7db3576b:644409bb
spares=1
[root@homepc ~]#
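Since that --examine --scan output is already in mdadm.conf's
ARRAY-line format, a common way to pin down the array definition (a
sketch - review the appended line before rebooting) is to capture it
directly into the config:

```sh
# Combined with the DEVICE line, this gives mdadm.conf both the
# member filter and the array identity (UUID, level, device count):
mdadm --examine --scan >> /etc/mdadm.conf
```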
Frankly, I don't know enough about the workings of udev and the boot
process to get into that. However, these two files might mean
something to you:
[root@homepc ~]# cat /etc/udev/rules.d/64-md-raid.rules
# do not edit this file, it will be overwritten on update
SUBSYSTEM!="block", GOTO="md_end"
ACTION!="add|change", GOTO="md_end"
# import data from a raid member and activate it
#ENV{ID_FS_TYPE}=="linux_raid_member", IMPORT{program}="/sbin/mdadm --examine --export $tempnode", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
# import data from a raid set
KERNEL!="md*", GOTO="md_end"
ATTR{md/array_state}=="|clear|inactive", GOTO="md_end"
IMPORT{program}="/sbin/mdadm --detail --export $tempnode"
ENV{MD_NAME}=="?*", SYMLINK+="disk/by-id/md-name-$env{MD_NAME}"
ENV{MD_UUID}=="?*", SYMLINK+="disk/by-id/md-uuid-$env{MD_UUID}"
IMPORT{program}="vol_id --export $tempnode"
OPTIONS="link_priority=100"
ENV{ID_FS_USAGE}=="filesystem|other|crypto", ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-uuid/$env{ID_FS_UUID_ENC}"
ENV{ID_FS_USAGE}=="filesystem|other", ENV{ID_FS_LABEL_ENC}=="?*", SYMLINK+="disk/by-label/$env{ID_FS_LABEL_ENC}"
LABEL="md_end"
[root@homepc ~]#
AND...
[root@homepc ~]# cat /etc/udev/rules.d/70-mdadm.rules
# This file causes block devices with Linux RAID (mdadm) signatures to
# automatically cause mdadm to be run.
# See udev(8) for syntax
SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*", \
RUN+="/sbin/mdadm -I --auto=yes $root/%k"
[root@homepc ~]#
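That 70-mdadm.rules file is the likely culprit: it matches any block
device carrying a linux_raid* signature, including the whole disk sdb
with its stale superblock. A hedged sketch of a narrower variant (the
KERNEL pattern is my assumption, matching only partitions of sd-style
disks, and would skip any array built on whole disks):

```
# Only feed sd* *partitions* to incremental assembly, so a stale
# superblock on a bare disk like /dev/sdb is never picked up:
SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="sd*[0-9]", \
	ENV{ID_FS_TYPE}=="linux_raid*", \
	RUN+="/sbin/mdadm -I --auto=yes $root/%k"
```

The DEVICE line in mdadm.conf achieves much the same filtering without
editing distro-managed rule files, which is probably the safer option.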
Thanks for getting me working
Nigel