Re: best way to try recovering inactive raid6

>> $ uname -a
>> Linux filebear 2.6.32-26-generic #48-Ubuntu SMP Wed Nov 24 09:00:03
>> UTC 2010 i686 GNU/Linux
>>
>> $ mdadm -V
>> mdadm - v3.1.4 - 31st August 2010
>> $ sudo mdadm --assemble --force /dev/md6
>> mdadm: /dev/md6 assembled from 2 drives and 2 spares - not enough to
>> start the array.
>>
>> $ cat /proc/mdstat
>> Personalities : [raid6] [raid5] [raid4]
>> md6 : inactive sdc1[0](S) sdd1[6](S) sdh1[5](S) sda1[3](S)
>>     3906236416 blocks
>>
>> unused devices: <none>
>>
>> The installed mdadm was older, but the behavior is the same with 3.1.4. It
>> was a fresh Ubuntu install from a few days ago, so I didn't think to update
>> mdadm; the kernel is fairly recent, at least.
>>
> You should send this to the list together with dmesg output from when this
> happens.
>
> Also, I would include the actual drives on the mdadm line, "sudo mdadm
> --assemble --force /dev/md6 /dev/sd[wxyz]".
>
> --
> Mikael Abrahamsson    email: swmike@xxxxxxxxx
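
(For anyone hitting this later: before forcing anything, the per-device event
counts can be compared to see which member is stale - a rough sketch, assuming
the same partitions as in my array; mdadm --examine prints an "Events" and
"Update Time" line from each superblock:

$ for d in /dev/sd[abcdh]1; do echo "== $d =="; sudo mdadm --examine "$d" | grep -iE 'events|update time'; done

The member whose event count lags behind the others is the one --force will
bump, as the output below shows for /dev/sdb1.)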


I've cloned and re-added /dev/sdb, and it's starting... But if
/dev/sdb was stale, and the UU_U_ pattern seems to indicate it's
resyncing /dev/sdb and /dev/sdh, why would it ever need /dev/sdb?

$ mdadm --assemble --force /dev/md6 /dev/sd[abcdh]1
mdadm: forcing event count in /dev/sdb1(1) from 1290382 upto 1290388
mdadm: clearing FAULTY flag for device 1 in /dev/md6 for /dev/sdb1
mdadm: /dev/md6 has been started with 3 drives (out of 5) and 2 spares.
root@filebear:/home/bjorn# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdc1[0] sdd1[6] sdh1[5] sda1[3] sdb1[1]
      2929677312 blocks level 6, 256k chunk, algorithm 2 [5/3] [UU_U_]
      [>....................]  recovery =  0.0% (841000/976559104) finish=309.3min speed=52562K/sec

unused devices: <none>
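
(To see which slots the two spares, sdd1 and sdh1, are actually being rebuilt
into, rather than reading it off the [UU_U_] map, mdadm --detail lists every
member with its raid-device number and a per-device state such as "spare
rebuilding" - just a pointer, I haven't pasted its output here:

$ sudo mdadm --detail /dev/md6

In /proc/mdstat, device numbers at or above the raid-disk count - sdd1[6] and
sdh1[5] here, with a 5-disk array - are, as far as I understand it, the members
that have not yet taken over their final slots.)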

Oh well, I'm just glad this was md-based and possible to resurrect at
all! I've mounted it read-only and tested quite a few large files, and
they're intact.
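
(Once the rebuild finishes, the redundancy itself can be verified through the
md sysfs interface before trusting the array read-write again - a sketch,
assuming the array is still /dev/md6 and running as root:

# echo check > /sys/block/md6/md/sync_action
# cat /sys/block/md6/md/mismatch_cnt

A "check" pass reads and compares data and parity without rewriting anything;
mismatch_cnt should stay at 0 if everything is consistent.)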

