Re: raid5 reshape is stuck

On Tue, 26 May 2015 06:48:23 -0400 (EDT) Xiao Ni <xni@xxxxxxxxxx> wrote:


> > >    
> > >    In the function continue_via_systemd, the parent finds that pid is
> > > bigger than 0 and status is 0, so it returns 1 and never gets the chance
> > > to call child_monitor.
> >
> > If continue_via_systemd succeeded, that implies that
> >   systemctl start mdadm-grow-continue@mdXXX.service
> >
> > succeeded.  So
> >    mdadm --grow --continue /dev/mdXXX
> >
> > was run, so that mdadm should call 'child_monitor' and update sync_max when
> > appropriate.  Can you check if it does?
> 
> The service is not running.
> 
> [root@intel-waimeabay-hedt-01 create_assemble]# systemctl start mdadm-grow-continue@md0.service
> [root@intel-waimeabay-hedt-01 create_assemble]# echo $?
> 0
> [root@intel-waimeabay-hedt-01 create_assemble]# systemctl status mdadm-grow-continue@md0.service
> mdadm-grow-continue@md0.service - Manage MD Reshape on /dev/md0
>    Loaded: loaded (/usr/lib/systemd/system/mdadm-grow-continue@.service; static)
>    Active: failed (Result: exit-code) since Tue 2015-05-26 05:33:59 EDT; 21s ago
>   Process: 5374 ExecStart=/usr/sbin/mdadm --grow --continue /dev/%I (code=exited, status=1/FAILURE)
>  Main PID: 5374 (code=exited, status=1/FAILURE)
> 
> May 26 05:33:59 intel-waimeabay-hedt-01.lab.eng.rdu.redhat.com systemd[1]: Started Manage MD Reshape on /dev/md0.
> May 26 05:33:59 intel-waimeabay-hedt-01.lab.eng.rdu.redhat.com systemd[1]: mdadm-grow-continue@md0.service: main process exited, ...URE
> May 26 05:33:59 intel-waimeabay-hedt-01.lab.eng.rdu.redhat.com systemd[1]: Unit mdadm-grow-continue@md0.service entered failed state.
> Hint: Some lines were ellipsized, use -l to show in full.

Hmm.. I wonder why systemctl isn't reporting the error message from mdadm.
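
For anyone following along, here is a minimal sketch (my assumption about the
shape of the logic, not the actual Grow.c source) of the fork/exec/waitpid
pattern being discussed.  The parent only looks at systemctl's exit status, so
a unit that starts and then fails a moment later still looks like success from
here:

/* sketch only - start_grow_service() is a made-up name */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static int start_grow_service(const char *devnm)
{
	char unit[80];
	pid_t pid;
	int status;

	snprintf(unit, sizeof(unit), "mdadm-grow-continue@%s.service", devnm);

	pid = fork();
	if (pid < 0)
		return 0;                    /* could not fork */
	if (pid == 0) {
		execl("/usr/bin/systemctl", "systemctl", "start", unit,
		      (char *)NULL);
		_exit(1);                    /* exec failed */
	}
	if (waitpid(pid, &status, 0) == pid &&
	    WIFEXITED(status) && WEXITSTATUS(status) == 0)
		return 1;   /* systemctl succeeded; says nothing about the unit */
	return 0;
}

int main(void)
{
	return start_grow_service("md0") ? 0 : 1;
}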


> 
> [root@intel-waimeabay-hedt-01 create_assemble]# mdadm --grow --continue /dev/md0 --backup-file=tmp0
> mdadm: Need to backup 6144K of critical section..
> 
> Now the reshape starts.
> 
> I tried modifying the service file:
> ExecStart=/usr/sbin/mdadm --grow --continue /dev/%I --backup-file=/root/tmp0
> 
> It doesn't work either.

I tried that change and it made it work.

> 
> [root@intel-waimeabay-hedt-01 ~]# systemctl daemon-reload
> [root@intel-waimeabay-hedt-01 ~]# systemctl start mdadm-grow-continue@md0.service
> [root@intel-waimeabay-hedt-01 ~]# systemctl status mdadm-grow-continue@md0.service
> mdadm-grow-continue@md0.service - Manage MD Reshape on /dev/md0
>    Loaded: loaded (/usr/lib/systemd/system/mdadm-grow-continue@.service; static)
>    Active: failed (Result: exit-code) since Tue 2015-05-26 05:50:22 EDT; 10s ago
>   Process: 6475 ExecStart=/usr/sbin/mdadm --grow --continue /dev/%I --backup-file=/root/tmp0 (code=exited, status=1/FAILURE)
>  Main PID: 6475 (code=exited, status=1/FAILURE)
> 
> May 26 05:50:22 intel-waimeabay-hedt-01.lab.eng.rdu.redhat.com systemd[1]: Started Manage MD Reshape on /dev/md0.
> May 26 05:50:22 intel-waimeabay-hedt-01.lab.eng.rdu.redhat.com systemd[1]: mdadm-grow-continue@md0.service: main process exited, ...URE
> May 26 05:50:22 intel-waimeabay-hedt-01.lab.eng.rdu.redhat.com systemd[1]: Unit mdadm-grow-continue@md0.service entered failed state.
> Hint: Some lines were ellipsized, use -l to show in full.
> 
> 
>   
> >
> >
> > >
> > >
> > >    And if it wants to keep sync_max at 0 until the backup has been taken,
> > > why doesn't it set sync_max to 0 directly instead of using the value of
> > > reshape_progress? I'm a little confused.
> >
> > When reshaping an array to a different array of the same size, such as a
> > 4-drive RAID5 to a 5-drive RAID6, mdadm needs to back up, one piece at
> > a time, the entire array (unless it can change data_offset, which is a
> > relatively new ability).
> >
> > If you stop an array when it is in the middle of such a reshape, and then
> > reassemble the array, the backup process needs to recommence where it left
> > off.
> > So it tells the kernel that the reshape can progress as far as where it was
> > up to before.  So 'sync_max' is set based on the value of 'reshape_progress'.
> > (This will happen almost instantly).
> >
> > Then the background mdadm (or the mdadm started by systemd) will back up
> > the next few stripes, update sync_max, wait for those stripes to be
> > reshaped, then discard the old backup, create a new one of the few stripes
> > after that, and continue.
> >
> > Does that make it a little clearer?
> 
> This is a lot for me to digest; I'll need to work through it for a while.
> Thanks very much for this. What's the "backup process"?
> 
> Could you explain the backup in detail? I have read the man page about the
> backup file.
> 
> When  relocating the first few stripes on a RAID5 or RAID6, it is not possible to keep the data on disk completely
> consistent and crash-proof.  To provide the required safety, mdadm disables writes to the array while this "critical  
> section"  is reshaped, and takes a backup of the data that is in that section.  
> 
> What is the reason the data cannot be kept consistent when relocating it?

If you are reshaping a RAID5 from 3 drives to 4 drives, then the first stripe
will start out as:

   D0  D1   P   -

and you want to change it to

   D0  D1   D2  P

If the system crashes while that is happening, you won't know if either or
both of D2 and P were written, but it is fairly safe just to assume they
weren't and recalculate the parity.
However the second stripe will initially be:

   P  D2  D3 

and you want to change it to

   P  D3  D4  D5

If you crash in the middle of doing that, you cannot know which block, if
either, is D3.  D4 might have been written and D3 not yet written, so D3 is
lost.

So mdadm takes a copy of a whole stripe, allows the kernel to reshape that
one stripe, updates the metadata to record that the stripe has been fully
reshaped, and then discards the backup.
So if you crash in the middle of reshaping the second stripe above, mdadm
will restore it from the backup.

The backup can be stored in a separate file, or in a device which is being
added to the array.
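
To make that cycle concrete, here is a rough, self-contained sketch (my own
simplification, not Grow.c) of driving the kernel through sysfs for a
hypothetical /dev/md0: back up the next few stripes, raise sync_max so the
kernel may reshape that far, poll sync_completed, then drop the backup and
repeat.  backup_stripes() and invalidate_backup() are placeholders for the
real backup-file handling.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void write_sysfs(const char *path, unsigned long long v)
{
	FILE *f = fopen(path, "w");
	if (!f) { perror(path); exit(1); }
	fprintf(f, "%llu\n", v);
	fclose(f);
}

static unsigned long long read_sysfs_first(const char *path)
{
	/* sync_completed reads like "<done> / <total>"; the first number
	 * (sectors done) is enough for this sketch */
	unsigned long long v = 0;
	FILE *f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%llu", &v) != 1)
			v = 0;
		fclose(f);
	}
	return v;
}

/* placeholders for the real backup handling */
static void backup_stripes(unsigned long long from, unsigned long long to)
{ (void)from; (void)to; /* copy those stripes somewhere safe and fsync */ }
static void invalidate_backup(void)
{ /* mark the saved copy as no longer needed */ }

int main(void)
{
	const char *sync_max  = "/sys/block/md0/md/sync_max";
	const char *sync_done = "/sys/block/md0/md/sync_completed";
	unsigned long long pos  = 0;
	unsigned long long step = 1024;      /* sectors per cycle (made up) */
	unsigned long long end  = 16 * 1024; /* stop point, just for the sketch */

	while (pos < end) {
		backup_stripes(pos, pos + step);    /* 1. back up the next stripes   */
		write_sysfs(sync_max, pos + step);  /* 2. let the kernel go that far */
		while (read_sysfs_first(sync_done) < pos + step)
			usleep(100000);             /* 3. wait for the reshape       */
		invalidate_backup();                /* 4. old data is now safe       */
		pos += step;
	}
	return 0;
}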


The reason "mdadm --grow --continue" doesn't work unless you add
"--backup-file=..." is that it doesn't find the "device being added" - it
looks for a spare, but there aren't any spares any more.  That should be
easy enough to fix.
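
As a rough illustration of that failure (again my guess at the shape of the
logic, not the real code path), with no --backup-file the continued reshape
has to find somewhere to put the backup, and the only remaining candidate is
a spare, which no longer exists once the reshape is under way:

#include <stdio.h>

struct grow_ctx {                    /* made up, for illustration only  */
	const char *backup_file;     /* from --backup-file, or NULL     */
	int spares;                  /* spare devices left in the array */
	int needs_backup;            /* this reshape must back up data  */
};

static int choose_backup_target(const struct grow_ctx *c)
{
	if (!c->needs_backup)
		return 0;            /* nothing to back up                 */
	if (c->backup_file)
		return 0;            /* use the file from the command line */
	if (c->spares > 0)
		return 0;            /* stash the backup on a spare        */
	fprintf(stderr, "no backup file and no spare to hold the backup\n");
	return -1;                   /* the case --grow --continue hits    */
}

int main(void)
{
	struct grow_ctx c = { NULL, 0, 1 };  /* continued reshape, no spares */
	return choose_backup_target(&c) ? 1 : 0;
}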

Thanks,
NeilBrown

> 
> >
> > And in response to your other email:
> > >     Should it return 1 when pid > 0 and status is not zero?
> >
> > No.  continue_via_systemd should return 1 precisely when the 'systemctl'
> > command was successfully run.  So 'status' must be zero.
> >
> >
> 
> I got this. So reshape_array should return when continue_via_systemd returns
> 1. Then the reshape continues when the command mdadm --grow --continue is
> run. Now child_monitor is called and sync_max is set to max.
> 
> Best Regards
> Xiao
> 


