Re: help requested for mdadm grow error

On 25/05/2020 22:22, Thomas Grawert wrote:
I don't think I've got an mdadm.conf ... and everything looks okay to me, but it's just not working.

Next step - how far has the reshape got? I *think* you might get that from "cat /proc/mdstat". Can we have that please ... I'm *hoping* it says the reshape is at 0%.

Cheers,
Wol


root@nas:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sda1[0] sdf1[5] sde1[4] sdd1[2] sdc1[1]
       58593761280 blocks super 1.2

unused devices: <none>
root@nas:~#

Nothing... the reshape ran for about 5 minutes before the power loss.


Just done a search, and I've found this in a previous thread ...

! # mdadm --assemble /dev/md0 --force --verbose --invalid-backup
! /dev/sda1 /dev/sdd1 /dev/sde1 /dev/sdb1 /dev/sdc1
! This command resulted in the following message:

! mdadm: failed to RUN_ARRAY /dev/md0: Invalid argument

! The syslog contained the following line:
! md/raid:md0: reshape_position too early for auto-recovery - aborting.

! That led me to the solution to revert the grow command:
! # mdadm --assemble /dev/md0 --force --verbose --update=revert-reshape
!     --invalid-backup /dev/sda1 /dev/sdd1 /dev/sde1 /dev/sdb1 /dev/sdc1

Okay, so we need to grep dmesg for a message like the one above about the reshape.

So let's grep for "md/raid" and see what we get ...
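For the record, something along these lines should pull out the relevant messages (dmesg only holds the kernel ring buffer, so if it has wrapped you'll need the persisted logs; exact log file names vary by distribution):

```shell
# Search the kernel ring buffer for md/raid messages.
dmesg | grep 'md/raid'

# If the ring buffer has wrapped, try the persisted logs instead
# (file names differ between distros; errors for missing files are silenced).
grep 'md/raid' /var/log/syslog /var/log/kern.log 2>/dev/null
```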

Cheers,
Wol




