Re: Raid-6 cannot reshape

On 4/7/2020 12:28 PM, Phil Turmel wrote:
> Hi Allie,
> 
> On 4/7/20 6:25 AM, Alexander Shenkin wrote:
>>
>>
>> On 4/6/2020 9:34 PM, Phil Turmel wrote:
>>> On 4/6/20 12:27 PM, Wols Lists wrote:
>>>> On 06/04/20 17:12, Roger Heflin wrote:
>>>>> When I looked at your detailed files you sent a few days ago, all of
>>>>> the reshapes (on all disks) indicated that they were at position 0, so
>>>>> it kind of appears that the reshape never actually started at all and
>>>>> hung immediately which is probably why it cannot find the critical
>>>>> section, it hung prior to that getting done.   Not entirely sure how
>>>>> to undo a reshape that failed like this.
>>>>
>>>> This seems quite common. Search the archives - it's probably something
>>>> like --assemble --revert-reshape.
>>>
>>> Ah, yes.  I recall cases where mdmon wouldn't start or wouldn't open the
>>> array to start moving the stripes, so the kernel wouldn't advance.
>>> SystemD was one of the culprits, I believe, back then.
>>
>> Thanks all.
>>
>> So, is the following safe to run, and a good idea to try?
>>
>> mdadm --assemble --update=revert-reshape /dev/md127 /dev/sd[a-g]3
> 
> Yes.
> 
>> And if that doesn't work, add a force?
> 
>> mdadm --assemble --force --update=revert-reshape /dev/md127 /dev/sd[a-g]3
> 
> Yes.
> 
>> And adding --invalid-backup if it complains about backup files?
> 
> Yes.
> 
>> Thanks,
>> Allie
> 
> Phil
> 

Thanks Phil,

The --invalid-backup parameter was necessary to get this up and running.
It's now up with the 7th disk as a spare.  Shall I run fsck now, or can I
just try to grow again?

proposed grow operation:
> mdadm --grow --raid-devices=7 --backup-file=/dev/usb/grow_md127.bak /dev/md127
> resize2fs /dev/md127  # resize2fs can grow ext2/3/4 online, so umount may not be needed
> mdadm --stop /dev/md127  # only if the array should be taken down afterwards
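
On the fsck question, one cautious ordering (only a sketch, assuming the
filesystem on md127 is ext4 and using the backup-file path proposed above)
would be a read-only check first, then the grow, waiting for the reshape to
finish before resizing:

```shell
# Read-only filesystem check first (-n makes no changes, just reports)
fsck.ext4 -n /dev/md127

# Grow the array, then watch the reshape finish before touching the filesystem
mdadm --grow --raid-devices=7 --backup-file=/dev/usb/grow_md127.bak /dev/md127
cat /proc/mdstat          # repeat (or use watch) until the reshape line is gone

# Grow the filesystem to fill the enlarged array (works online for ext4)
resize2fs /dev/md127
```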

Thanks,
Allie

assemble operation results:

root@ubuntu-server:/home/ubuntu-server# mdadm --assemble
--invalid-backup --update=revert-reshape /dev/md127 /dev/sd[a-g]3
mdadm: device 12 in /dev/md127 has wrong state in superblock, but
/dev/sdg3 seems ok
mdadm: /dev/md127 has been started with 6 drives and 1 spare.

root@ubuntu-server:/home/ubuntu-server# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
[raid4] [raid10]
md127 : active raid6 sda3[6] sdg3[9](S) sde3[7] sdf3[8] sdd3[5] sdc3[2]
sdb3[4]
      11680755712 blocks super 1.2 level 6, 512k chunk, algorithm 2
[6/6] [UUUUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md126 : active (auto-read-only) raid1 sdf1[8] sde1[7] sdg1[9] sda1[6]
sdd1[5] sdc1[2] sdb1[4]
      1950656 blocks super 1.2 [7/7] [UUUUUUU]

unused devices: <none>
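
Before re-attempting the grow, it might also be worth confirming (just a
suggestion, not required) that the array is clean and that sdg3 really is
the spare that will be consumed:

```shell
# Confirm array state and membership before growing again
mdadm --detail /dev/md127             # expect "State : clean", sdg3 listed as spare

# Optional: run a consistency check via sysfs and inspect the result
echo check > /sys/block/md127/md/sync_action
cat /sys/block/md127/md/mismatch_cnt  # 0 after the check means parity is consistent
```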



