Re: help requested for mdadm grow error


 



Finally, the solution:

As mentioned earlier, I reverted the reshape:

mdadm --assemble /dev/md0 --force --verbose --update=revert-reshape --invalid-backup /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

This brought the array back up in auto-read-only mode, so I had to switch it to read-write manually:

mdadm --readwrite /dev/md0
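
For the record, an array in this state shows up in /proc/mdstat with an "(auto-read-only)" flag on the md0 line, so a quick

cat /proc/mdstat

confirms the state before and after the switch.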

With the array read-write again, the filesystem turned out to have a bad superblock.

mke2fs won't work here, because it's limited to 16 TB of disk space, whereas my md is 36 TiB. Because I was a bit too lazy to google any further, I started gparted on the machine, chose md0 and initiated a filesystem integrity check.

This took 5 minutes; afterwards I was able to mount my md.
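
For reference, the equivalent command-line route (assuming an e2fsprogs with 64-bit ext4 support, 1.42 or later, which lifted the 16 TB limit, and that mke2fs is run with the same parameters the filesystem was created with) would be a dry run to list the backup superblock locations, then a repair from one of them; block 32768 below is only an example location:

mke2fs -n /dev/md0
e2fsck -b 32768 /dev/md0

The -n flag keeps mke2fs from writing anything; it only prints what it would do, including where the backup superblocks live.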

So I unmounted the md in order to grow it again:

root@nas:~# mdadm --grow --raid-devices=5 /dev/md0 --backup-file=/tmp/bu_neu.bak
mdadm: Need to backup 6144K of critical section..
root@nas:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun May 17 00:23:42 2020
        Raid Level : raid5
        Array Size : 35156256768 (33527.62 GiB 36000.01 GB)
     Used Dev Size : 11718752256 (11175.87 GiB 12000.00 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue May 26 02:08:22 2020
             State : clean, reshaping
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Reshape Status : 0% complete
     Delta Devices : 1, (4->5)

              Name : nas:0  (local to host nas)
              UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
            Events : 38631

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1
       5       8       81        4      active sync   /dev/sdf1
root@nas:~#

As we can see in the mdadm details, the reshape is running. Have a look at mdstat:

root@nas:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sda1[0] sdf1[5] sde1[4] sdd1[2] sdc1[1]
      35156256768 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      [>....................]  reshape =  0.8% (93989376/11718752256) finish=8260.4min speed=23454K/sec
      bitmap: 0/88 pages [0KB], 65536KB chunk

unused devices: <none>

Seems to be a bit slow right now; I had expected a speed of around 60 MByte/s given the 6G SATA drives. However, it's running again.

I will dig into speed later.
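
For anyone hitting the same thing: the usual knobs are the md rebuild speed limits and the RAID5 stripe cache (the values below are just example settings, not something measured on this machine):

sysctl -w dev.raid.speed_limit_min=100000
sysctl -w dev.raid.speed_limit_max=500000
echo 8192 > /sys/block/md0/md/stripe_cache_size

The speed limits are in KiB/s; stripe_cache_size costs RAM (roughly entries x 4 KiB x number of devices) but often speeds up RAID5/6 reshapes noticeably.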

Again: thanks to everyone who helped me with ideas and/or advice. You guys saved my ass :)



