Re: Recover from crash in RAID6 due to hardware failure

Hi Leslie,

thanks for your suggestion! I managed to do it, although the path was
a bit longer:

a) remove the logical volume (Synology creates one, and it prevents
stopping the array):

# ll /dev/mapper/
crw-------    1 root     root       10,  59 Sep  5  2020 control
brw-------    1 root     root      253,   0 Jun 10 11:57 vol1-origin

# dmsetup remove vol1-origin

b) stop the array:

# mdadm --stop /dev/md2
mdadm: stopped /dev/md2

c) recreate the array with the original layout:

# mdadm --verbose --create /dev/md2 --chunk=64 --level=6 \
    --raid-devices=5 --metadata=1.2 \
    /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3

mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: /dev/sda3 appears to be part of a raid array:
    level=raid6 devices=5 ctime=Sat Sep  5 12:46:57 2020
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdb3 appears to be part of a raid array:
    level=raid6 devices=5 ctime=Sat Sep  5 12:46:57 2020
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdc3 appears to be part of a raid array:
    level=raid6 devices=5 ctime=Sat Sep  5 12:46:57 2020
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdd3 appears to be part of a raid array:
    level=raid6 devices=5 ctime=Sat Sep  5 12:46:57 2020
mdadm: layout defaults to left-symmetric
mdadm: /dev/sde3 appears to be part of a raid array:
    level=raid6 devices=5 ctime=Sat Sep  5 12:46:57 2020
mdadm: size set to 2925544256K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.

d) check the result:

# cat /proc/mdstat
Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4]
md2 : active raid6 sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
      8776632768 blocks super 1.2 level 6, 64k chunk, algorithm 2 [5/5]
[UUUUU]
      [=>...................]  resync =  6.8% (199953972/2925544256) finish=2440.4min speed=18613K/sec
     
md1 : active raid1 sda2[1] sdb2[2] sdc2[3] sdd2[0] sde2[4]
      2097088 blocks [5/5] [UUUUU]
     
md0 : active raid1 sdc1[3] sdd1[0] sde1[4]
      2490176 blocks [5/3] [U__UU]
     
unused devices: <none>
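As a quick sanity check (my own arithmetic, not part of the original procedure): RAID6 reserves two disks' worth of parity, so the usable capacity should be (5 - 2) times the per-member size mdadm reported, and that indeed matches the 8776632768 blocks shown in /proc/mdstat:

```shell
# RAID6 usable capacity = (members - 2) * member size
members=5
member_k=2925544256                      # per-member size in KiB, from "mdadm: size set to"
usable_k=$(( (members - 2) * member_k ))
echo "$usable_k"                         # 8776632768, matching /proc/mdstat
```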

After that, I ran fsck and mounted it read-only, and now I'm happily
recovering my data... :-)
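Side note (again my own arithmetic, from the numbers in the resync line above, not from the thread): mdstat's finish estimate is just the remaining blocks divided by the current speed:

```shell
# Finish estimate = (total - done) / speed, using the resync line's numbers
total=2925544256     # KiB total
done_k=199953972     # KiB already resynced
speed=18613          # KiB/s
mins=$(( (total - done_k) / speed / 60 ))
echo "${mins}min"    # ~2440min, matching mdstat's finish=2440.4min
```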

Thanks again!

Carlos


On 14/06/2021 22:36, Leslie Rhorer wrote:

> Oops!  'Sorry.  That should be:
>
> mdadm -S /dev/md2
> mdadm -C -f -e 1.2 -n 5 -c 64K --level=6 -p left-symmetric \
>     /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3
>
>
>     You only have five disks, not six.
>
