hi list,
I read somewhere that it is better not to rely on the kernel's
autodetect mechanism at boot time, but rather to set up
/etc/mdadm.conf accordingly and boot with raid=noautodetect. Well, I
tried that :)
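In case it matters: the parameter is simply appended to the kernel line
in the boot loader. With grub that looks roughly like this (the root
device is only a placeholder here, not my real one):
kernel /boot/vmlinuz-2.6.16.14 root=/dev/hda1 ro raid=noautodetect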
I set up /etc/mdadm.conf for my 2 raid5 arrays:
---- snip ----
# mountpoint: /home/media
ARRAY /dev/md0
   level=raid5
   UUID=86ed1434:43380717:4abf124e:970d843a
   devices=/dev/sda1,/dev/sdb1,/dev/sdd3

# mountpoint: /mnt/raid
ARRAY /dev/md1
   level=raid5
   UUID=baf59fb5:f4805e7a:91a77644:af3dde17
#  devices=/dev/sda2,/dev/sdb2,/dev/sdd2
---- snap ----
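The UUIDs are the ones mdadm itself reports; mdadm --detail --scan
prints the two arrays roughly like this (reconstructed from the values
above, not pasted verbatim):
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=86ed1434:43380717:4abf124e:970d843a
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=baf59fb5:f4805e7a:91a77644:af3dde17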
and rebooted with raid=noautodetect. It booted fine, but the third
component of each array (/dev/sdd2 and /dev/sdd3, both partitions on
the third disk) was removed, so I had two degraded raid5 arrays. It was
possible to re-add them with something like:
mdadm /dev/md0 -a /dev/sdd3
(the array resynced and /proc/mdstat showed [UUU] again)
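The second array needed the same treatment, roughly (from memory):
mdadm /dev/md1 -a /dev/sdd2
cat /proc/mdstat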
But after the next reboot the two partitions were removed again
([UU_])?! The error is reproducible; I tried it several times with
different /etc/mdadm.conf settings (ARRAY statement with UUID= only,
with devices= only, with both UUID= and devices=, etc.).
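For md1, for example, the variants looked roughly like this
(reconstructed from memory; the UUID and devices are the real ones from
above, tried one at a time):
---- snip ----
ARRAY /dev/md1 UUID=baf59fb5:f4805e7a:91a77644:af3dde17
ARRAY /dev/md1 devices=/dev/sda2,/dev/sdb2,/dev/sdd2
ARRAY /dev/md1 UUID=baf59fb5:f4805e7a:91a77644:af3dde17 devices=/dev/sda2,/dev/sdb2,/dev/sdd2
---- snap ----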
I'm now running autodetect again and all raid arrays are working fine,
but can anyone explain this strange behaviour?
(kernel-2.6.16.14, amd64)
thanks,
florian
PS: please cc me, as I'm not subscribed to the list