Re: raid5 revert-reshape issue

On 16/01/19 03:35, Romulo Albuquerque wrote:
> Hi,
> 
> I have a Debian 8.2 (jessie) box with a raid5 array that has been
> running fine for more than 5 years.
> I tried to grow it from 6x2TB disks to 7x2TB, but the reshape got
> stuck due to failures on the newly added disk.
> So, I bounced the system and tried to revert the reshape process.
> I stopped the array:  mdadm --stop /dev/md127
> 
> then tried to revert the reshape process:
>    mdadm --assemble /dev/md127 --run --force --update=revert-reshape
> /dev/sd[dcfgbe]1
> 
> But it didn't work... I got a message asking for a backup-file that
> was lost after the reboot.

Okay, it's all looking good ...

I say that because the reshape position is 0, so it looks to me like the
reshape never actually started.
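For anyone following along, the reshape position can be read straight out of a member's superblock (the device name below is just an example; use one of your actual array members):

```shell
# Inspect one member disk's md superblock. During or after an interrupted
# grow, mdadm reports lines like "Reshape pos'n" and "Delta Devices".
# A reshape position of 0 means no data was ever relocated.
mdadm --examine /dev/sdb1 | grep -i -E 'reshape|delta'
```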

What version of mdadm are you running? ("mdadm --version"). What version
of kernel? ("uname -a").

This used to come up a lot - there are a few known bugs that would hang
a reshape like this - what's the betting you're on mdadm 3.3 or 3.4?

I strongly suspect that if you boot from an up-to-date recovery disk you
will be able to run your revert-reshape command no problem. There is
also an --invalid-backup option that you might need, which tells mdadm
to go ahead without the backup file - it won't do any damage because
the reshape never actually started.
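Putting that together, the sequence from a current recovery environment would look something like this (a sketch only - the device names are copied from your original mail and may enumerate differently on the recovery disk):

```shell
# Stop the array if the live environment auto-assembled it.
mdadm --stop /dev/md127

# Revert the never-started reshape. --invalid-backup tells mdadm to
# proceed even though the backup file from the interrupted grow is gone.
mdadm --assemble /dev/md127 --run --force \
      --update=revert-reshape --invalid-backup /dev/sd[dcfgbe]1

# Check the result before mounting anything.
cat /proc/mdstat
mdadm --detail /dev/md127
```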

A modern mdadm and kernel won't use a backup file anyway, because they
stash the in-flight data in spare space on the growing array itself.

Cheers,
Wol
