Hello,

I have the following problem with one of my RAID1 arrays. One of the
drives has failed, but it isn't being marked as failed. The
reconstruction starts, and at 5% it borks out on the bad sectors.
And then it happily begins reconstructing again...

When I try to mark the drive as faulty, nothing happens. When I try to
hot-remove the drive, it complains about the drive being in use,
because it is reconstructing the array.

When I try to start the array with only one of the drives, it fails:

aeon root # mdadm --assemble /dev/md3 /dev/hde1
mdadm: /dev/md3 assembled from 0 drives - not enough to start it (use --run to insist).
aeon root # mdadm --assemble --run /dev/md3 /dev/hde1
mdadm: failed to RUN_ARRAY /dev/md3: Invalid argument

And in the syslog, it says:

May 4 18:26:02 [kernel] md: bind<hde1>
May 4 18:26:02 [kernel] raid1: no operational mirrors for md3

After the last command, /proc/mdstat shows

md3 : inactive hde1[2]
      120060736 blocks

and I can't do anything with it. When I do

aeon root # mdadm --manage --run /dev/md3
mdadm: failed to run array /dev/md3: Invalid argument

the syslog shows

May 4 18:35:23 [kernel] md: bug in file drivers/md/md.c, line 1512

So, how do I tell the stupid thing to just run the array in degraded
mode and be happy with just one drive?

Any help would be greatly appreciated.

Regards,
Michel Wilson.

-- 
Michel Wilson          michel@crondor.net
PGP key ID 0xD2CB4B7E
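P.S. My own (unverified) guess, going by the mdadm man page, is that I need to
stop the inactive, half-assembled array first and then force a degraded
assembly from the surviving half (hde1), roughly like this. Is that the right
approach, or will it just hit the same "no operational mirrors" error?

# untested guess on my part: stop the inactive array, then force a degraded start
aeon root # mdadm --stop /dev/md3
aeon root # mdadm --assemble --force --run /dev/md3 /dev/hde1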