Ronald Lembcke wrote:
Hi!
I set up a RAID5 array of 4 disks. I initially created a degraded array
and added the fourth disk (sda1) later.
The array is "clean", but when I do
mdadm -S /dev/md0
mdadm --assemble /dev/md0 /dev/sd[abcd]1
it won't start. It always says sda1 is "failed".
When I remove sda1 and add it again, everything seems to be fine until I
stop the array.
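(By "remove and add" I mean something like

mdadm /dev/md0 --remove /dev/sda1
mdadm /dev/md0 --add /dev/sda1

or close to it; afterwards the array rebuilds onto sda1 and /proc/mdstat
ends up clean again, as shown below.)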
Below is the output of /proc/mdstat, mdadm -D -Q, mdadm -E and a piece of the
kernel log.
The output of mdadm -E looks strange for /dev/sd[bcd]1, saying "1 failed".
What can I do about this?
How could this happen? I mixed up the syntax when adding the fourth disk and
tried these two commands (at least one of them didn't give an error message):
mdadm --manage -a /dev/md0 /dev/sda1
mdadm --manage -a /dev/sda1 /dev/md0
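For reference, the form documented in the mdadm man page puts the array
device first and the disk to add after it:

mdadm --manage /dev/md0 --add /dev/sda1

As far as I can tell mdadm treats the first device argument as the array
to manage, so presumably only the variant with /dev/md0 first was actually
accepted.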
Thanks in advance ...
Roni
ganges:~# cat /proc/mdstat
Personalities : [raid5] [raid4]
md0 : active raid5 sda1[4] sdc1[0] sdb1[2] sdd1[1]
691404864 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
I will just comment that the 0 1 2 4 numbering on the devices is
unusual. When you created this, did you do something that made md think
there was another device, failed or missing, sitting at device[3]? I just
looked at a bunch of my arrays and found no similar examples.
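For what it's worth, the degraded create you describe would explain the
shape of it: the disk added later takes the next free device number (4)
even though it ends up covering the empty slot (3). A sketch of the sort
of sequence I mean, guessing the original device order from your mdstat
output:

# create a four-member RAID5 with the last slot deliberately missing
mdadm --create /dev/md0 --metadata=1.0 --level=5 --raid-devices=4 \
      /dev/sdc1 /dev/sdd1 /dev/sdb1 missing
# later: add the fourth disk; it shows up as device[4] while it
# rebuilds into the empty slot
mdadm --manage /dev/md0 --add /dev/sda1

If the superblocks still carry a record of that missing slot as a failed
device, that may also be where the "1 failed" in your -E output comes
from. mdadm --assemble does have an --update=summaries option that is
meant to correct the device counts stored in the superblock, but I don't
know offhand whether it applies to version-1 superblocks, so treat that
as something to check rather than a recipe.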
--
bill davidsen <davidsen@xxxxxxx>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979