Re: raid5 reshape is stuck

On Fri, 15 May 2015 03:00:24 -0400 (EDT) Xiao Ni <xni@xxxxxxxxxx> wrote:

> Hi Neil
> 
>    I ran into a problem when reshaping a 4-disk raid5 to a larger raid5. It only
> shows up with loop devices.
> 
>    The steps are:
> 
> [root@dhcp-12-158 mdadm-3.3.2]# mdadm -CR /dev/md0 -l5 -n5 /dev/loop[0-4] --assume-clean
> mdadm: /dev/loop0 appears to be part of a raid array:
>        level=raid5 devices=6 ctime=Fri May 15 13:47:17 2015
> mdadm: /dev/loop1 appears to be part of a raid array:
>        level=raid5 devices=6 ctime=Fri May 15 13:47:17 2015
> mdadm: /dev/loop2 appears to be part of a raid array:
>        level=raid5 devices=6 ctime=Fri May 15 13:47:17 2015
> mdadm: /dev/loop3 appears to be part of a raid array:
>        level=raid5 devices=6 ctime=Fri May 15 13:47:17 2015
> mdadm: /dev/loop4 appears to be part of a raid array:
>        level=raid5 devices=6 ctime=Fri May 15 13:47:17 2015
> mdadm: Defaulting to version 1.2 metadata
> mdadm: array /dev/md0 started.
> [root@dhcp-12-158 mdadm-3.3.2]# mdadm /dev/md0 -a /dev/loop5
> mdadm: added /dev/loop5
> [root@dhcp-12-158 mdadm-3.3.2]# mdadm --grow /dev/md0 --raid-devices 6
> mdadm: Need to backup 10240K of critical section..
> [root@dhcp-12-158 mdadm-3.3.2]# cat /proc/mdstat 
> Personalities : [raid6] [raid5] [raid4] 
> md0 : active raid5 loop5[5] loop4[4] loop3[3] loop2[2] loop1[1] loop0[0]
>       8187904 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
>       [>....................]  reshape =  0.0% (0/2046976) finish=6396.8min speed=0K/sec
>       
> unused devices: <none>
> 
>    This is because sync_max is set to 0 when the --grow command is run:
> 
> [root@dhcp-12-158 mdadm-3.3.2]# cd /sys/block/md0/md/
> [root@dhcp-12-158 md]# cat sync_max 
> 0
> 
>    I tried to reproduce this with normal sata devices, and there the reshape progresses
> without any problem. Then I checked Grow.c. With sata devices, the return value of
> set_new_data_offset in reshape_array is 0, but with loop devices it returns 1 and then
> start_reshape is called.

set_new_data_offset returns '0' if there is room on the devices to reduce the
data offset so that the reshape starts writing to unused space on the array.
This removes the need for a backup file, or the use of a spare device to
store a temporary backup.
It returns '1' if there was no room for relocating the data_offset.

So on your sata devices (which are presumably larger than your loop devices)
there was room.  On your loop devices there was not.
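
To make that concrete, here is a rough sketch of the decision being described.
The struct and function names are invented for illustration; this is not the
actual set_new_data_offset() from Grow.c.

/* Illustrative sketch only -- not mdadm code.  Per the description above,
 * the grow needs room to reduce data_offset, i.e. unused space before the
 * live data on every member device; if any device lacks it, fall back to
 * the backup/critical-section path.
 */
#include <stdio.h>

struct dev_space {
	unsigned long long data_offset;   /* where live data starts (sectors)  */
	unsigned long long space_before;  /* unused sectors before data_offset */
};

static int need_backup(struct dev_space *devs, int ndevs,
		       unsigned long long needed_sectors)
{
	for (int i = 0; i < ndevs; i++)
		if (devs[i].space_before < needed_sectors)
			return 1;   /* no room -> backup file / spare needed    */
	return 0;                   /* room everywhere -> relocate data_offset  */
}

int main(void)
{
	/* Hypothetical numbers: each device has 1024 free sectors before
	 * data_offset, but the reshape would need 2048.                  */
	struct dev_space devs[3] = {
		{ .data_offset = 262144, .space_before = 1024 },
		{ .data_offset = 262144, .space_before = 1024 },
		{ .data_offset = 262144, .space_before = 1024 },
	};
	printf("need backup: %d\n", need_backup(devs, 3, 2048));
	return 0;
}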


> 
>    In start_reshape, sync_max is set to reshape_progress. But sysfs_read doesn't read
> reshape_progress, so it's 0 and sync_max gets set to 0. Why does sync_max need to be
> set here? I'm not sure about this.

sync_max is set to 0 so that the reshape does not start until the backup has
been taken.
Once the backup is taken, child_monitor() should set sync_max to "max".

Can you check if that is happening?
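
For reference, a rough illustration of the gating mechanism (assumed paths and
helper names, not the actual start_reshape()/child_monitor() code):

/* Hold the reshape at 0 until the critical-section backup exists, then
 * release it by writing "max" to sync_max.  Sketch only.
 */
#include <stdio.h>

static int sysfs_write(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");
	if (!f)
		return -1;
	int rc = (fprintf(f, "%s\n", val) < 0) ? -1 : 0;
	fclose(f);
	return rc;
}

int main(void)
{
	const char *sync_max = "/sys/block/md0/md/sync_max";

	sysfs_write(sync_max, "0");           /* start_reshape-style: hold the reshape */

	/* ... take the backup of the critical section here ... */

	return sysfs_write(sync_max, "max");  /* child_monitor-style: let it run */
}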

Thanks,
NeilBrown


> 
>    I tried to fix this but I'm not sure whether it's the right way. I'll send the patches in 
> other mails.
> 
> Best Regards
> Xiao
