On Mar 1, 2008, at 22:09, Michael Guntsche wrote:
Just for kicks I created the RAID with a 0.90 superblock and tried the same thing. Lo and behold, after stopping the array and starting it again, the progress bar showed up and everything started rebuilding where it had stopped earlier.
I am sorry, I should have looked at it a little bit closer. I always stopped with:

  mdadm --stop /dev/...

and always started with:

  mdadm --assemble --scan --auto=yes --symlink=no   <-- taken from Debian's /etc/init.d/mdadm-raid
0.90 superblock: The array gets assembled with [3/4] devices; the not-yet-synced one is seen as a spare. As soon as there is activity on the RAID, the rebuild starts from the BEGINNING, not from where it left off.
1.00 superblock: The array gets assembled with "mdadm: /dev/md/1 has been started with 4 drives".
md1 : active(auto-read-only) raid5 sda2[0] sdd2[4] sdc2[2] sdb2[1]
      1464982272 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/3] [UUU_]
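As an aside, the [4/3] [UUU_] fields in that mdstat line can be decoded mechanically. A minimal sketch, with the status line from above embedded as sample data:

```shell
#!/bin/sh
# Decode the device counts from an mdstat status line.
# Sample copied from the /proc/mdstat output quoted above.
line='1464982272 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/3] [UUU_]'

# [n/m] = n devices configured, m currently in sync; [UUU_] marks the
# fourth slot as down.
counts=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9][0-9]*\)\/\([0-9][0-9]*\)\].*/\1 \2/p')
set -- $counts
configured=$1
active=$2
echo "configured=$configured in_sync=$active"
# -> configured=4 in_sync=3
```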
But nothing is happening. You cannot hot-remove the device in spare-rebuilding state, since using -f only yields a

   4       8       50        3      faulty spare rebuilding   /dev/sdd2

status. One more test with a 1.0 superblock: during the rebuild I mark sdd2 as faulty, and its status changes to "faulty spare". After stopping and starting the RAID, it gets assembled with 3/4 disks and the fourth one (sdd2) is removed.
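For completeness, the fail/hot-remove sequence I mean is the following (device names as in the report above; shown with a dry-run wrapper so it does not touch a live array):

```shell
#!/bin/sh
# Dry-run sketch of the fail/hot-remove attempt described above.
# "run" just echoes the command; drop the echo to execute it for
# real on a test box.
run() { echo "would run: $*"; }

run mdadm /dev/md/1 --fail /dev/sdd2     # mark sdd2 faulty (-f)
run mdadm /dev/md/1 --remove /dev/sdd2   # attempt the hot-remove (-r)
```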
Calling mdadm --assemble --scan again yields:

  mdadm: /dev/md/1 already active, cannot restart it!
  mdadm: /dev/md/1 needed for /dev/sdd2...

Kind regards,
Michael