Re: help requested for mdadm grow error

So step 1, revert the reshape. Step 2, get the array back running. Step 3, start the reshape again.
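For reference, as I understand it, steps 2 and 3 would come down to something like the commands below once step 1 succeeds (untested on my side; the backup-file path is only a placeholder, and --raid-devices=5 assumes the interrupted grow was from 4 to 5 drives). The exact command I used for step 1 follows right after.

    mdadm --readwrite /dev/md0                # step 2: take the array out of read-only so it runs normally
    mdadm --grow /dev/md0 --raid-devices=5 \
          --backup-file=/root/md0-grow.bak    # step 3: restart the reshape, this time keeping a backup file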

root@nas:~# mdadm --assemble /dev/md0 --force --verbose --update=revert-reshape --invalid-backup /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 4.
mdadm: clearing FAULTY flag for device 4 in /dev/md0 for /dev/sdf1
mdadm: Marking array /dev/md0 as 'clean'
mdadm: added /dev/sdc1 to /dev/md0 as 1
mdadm: added /dev/sdd1 to /dev/md0 as 2
mdadm: added /dev/sde1 to /dev/md0 as 3
mdadm: added /dev/sdf1 to /dev/md0 as 4
mdadm: added /dev/sda1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 4 drives and 1 spare.

=======================

WOW!

ok, let's check the status again:

root@nas:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun May 17 00:23:42 2020
        Raid Level : raid5
        Array Size : 35156256768 (33527.62 GiB 36000.01 GB)
     Used Dev Size : 11718752256 (11175.87 GiB 12000.00 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon May 25 16:05:38 2020
             State : clean, resyncing (PENDING)
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : nas:0  (local to host nas)
              UUID : d7d800b3:d203ff93:9cc2149a:804a1b97
            Events : 38602

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1

       5       8       81        -      spare   /dev/sdf1

===================================================

root@nas:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid5 sda1[0] sdf1[5](S) sde1[4] sdd1[2] sdc1[1]
      35156256768 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
        resync=PENDING
      bitmap: 0/88 pages [0KB], 65536KB chunk

unused devices: <none>

==================================================
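So the array is back, but it was assembled auto-read-only, which is why the resync is shown as PENDING: it won't start until the array goes read-write. If I understand it correctly, that is step 2 from above:

    mdadm --readwrite /dev/md0    # clears the auto-read-only state; the pending resync should then start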

root@nas:~# mount /dev/md0 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
root@nas:~#


Seems the filesystem took a hit. How should I proceed from here?
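Before writing anything to the array, I suppose the safe first moves are to confirm what is actually on /dev/md0 and to run a read-only filesystem check (the e2fsck line assumes ext4, which may not be what the array actually carries):

    dmesg | tail -n 30    # the kernel messages from the failed mount usually name the real problem
    blkid /dev/md0        # confirm which filesystem the array is supposed to carry
    e2fsck -n /dev/md0    # read-only check, answers "no" to every repair prompt (assumes ext4)

Only once the filesystem is sorted out would I restart the reshape (step 3 above).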



