Re: RAID5 -> RAID6 conversion, please help

On Tue, 10 May 2011 20:38:11 -0400 Dylan Distasio <interzone@xxxxxxxxx> wrote:

> Hi Neil-
> 
> Just out of curiosity, how does mdadm decide which layout to use on a
> reshape from RAID5->6?  I converted two of my RAID5s on different
> boxes running the same OS a while ago, and was not aware of the
> different possibilities.  When I check now, one of them was converted
> with the Q blocks all on the last disk, and the other appears
> normalized.  I'm relatively confident I ran exactly the same command
> on both to reshape them within a short time of one another.

mdadm first converts the RAID5 to RAID6 in an instantaneous, atomic operation
which results in the "-6" layout (all the Q blocks on the last disk).  It then
starts a restriping process which converts the array to the standard RAID6
layout.

If you end up with a -6 layout, then something went wrong when starting the
restriping process.
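For example, something like this (the array name and backup-file path are just
placeholders) checks which layout an array ended up with and, if it is still
the interim one, kicks off the restripe by hand:

   mdadm -D /dev/md0 | grep Layout
   # "left-symmetric-6" means the Q blocks are still all on the last disk
   mdadm --grow /dev/md0 --layout=normalise --backup-file=/root/md0-backup

The backup file needs to live on a device that is not part of the array being
reshaped.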

Maybe you used different versions of mdadm?  There have probably been bugs in
some versions.
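You can compare what each box is running with, e.g.:

   mdadm --version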

NeilBrown



> 
> Here are the current details of the two arrays:
> 
> dylan@terrordome:~$ sudo mdadm -D /dev/md0
> /dev/md0:
>         Version : 0.90
>   Creation Time : Tue Mar  3 23:41:24 2009
>      Raid Level : raid6
>      Array Size : 5860559616 (5589.07 GiB 6001.21 GB)
>   Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
>    Raid Devices : 8
>   Total Devices : 8
> Preferred Minor : 0
>     Persistence : Superblock is persistent
> 
>   Intent Bitmap : Internal
> 
>     Update Time : Tue May 10 20:06:42 2011
>           State : active
>  Active Devices : 8
> Working Devices : 8
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric-6
>      Chunk Size : 64K
> 
>            UUID : 4891e7c1:5d7ec244:a9bd8edb:d35467d0 (local to host terrordome)
>          Events : 0.743956
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       33        0      active sync   /dev/sdc1
>        1       8       49        1      active sync   /dev/sdd1
>        2       8       97        2      active sync   /dev/sdg1
>        3       8      113        3      active sync   /dev/sdh1
>        4       8       17        4      active sync   /dev/sdb1
>        5       8       65        5      active sync   /dev/sde1
>        6       8      241        6      active sync   /dev/sdp1
>        7      65       17        7      active sync   /dev/sdr1
> dylan@terrordome:~$ lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:    Ubuntu 10.04.1 LTS
> Release:        10.04
> Codename:       lucid
> 
> 
> dylan@rapture:~$ sudo mdadm -D /dev/md0
> 
> /dev/md0:
>         Version : 0.90
>   Creation Time : Sat Jun  7 02:54:05 2008
>      Raid Level : raid6
>      Array Size : 2194342080 (2092.69 GiB 2247.01 GB)
>   Used Dev Size : 731447360 (697.56 GiB 749.00 GB)
>    Raid Devices : 5
>   Total Devices : 5
> Preferred Minor : 0
>     Persistence : Superblock is persistent
> 
>     Update Time : Tue May 10 20:19:13 2011
>           State : clean
>  Active Devices : 5
> Working Devices : 5
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>            UUID : 83b4a7df:1d05f5fd:e368bf24:bd0fce41
>          Events : 0.723556
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       18        0      active sync   /dev/sdb2
>        1       8       34        1      active sync   /dev/sdc2
>        2       8        2        2      active sync   /dev/sda2
>        3       8       66        3      active sync   /dev/sde2
>        4       8       82        4      active sync   /dev/sdf2
> 
> dylan@rapture:~$ lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:    Ubuntu 10.04.1 LTS
> Release:        10.04
> Codename:       lucid
> 
> On Tue, May 10, 2011 at 8:21 PM, NeilBrown <neilb@xxxxxxx> wrote:
> >
> > On Wed, 11 May 2011 09:39:27 +1000 Steven Haigh <netwiz@xxxxxxxxx> wrote:
> >
> > > On 11/05/2011 9:31 AM, NeilBrown wrote:
> > > > When it finishes you will have a perfectly functional RAID6 array with full
> > > > redundancy.  It might perform slightly differently to a standard layout -
> > > > I've never performed any measurements to see how differently.
> > > >
> > > > If you want to (after the recovery completes) you could convert to a regular
> > > > RAID6 with
> > > >    mdadm -G /dev/md0 --layout=normalise --backup-file=/some/file/on/a/different/device
> > > >
> > > > but you probably don't have to.
> > > >
> > >
> > > This makes me wonder. How can one tell if the layout is 'normal' or with
> > > Q blocks on a single device?
> > >
> > > I recently changed my array from RAID5->6. Mine created a backup file
> > > and took just under 40 hours for 4 x 1TB devices. I assume that this
> > > means that data was reorganised to the standard RAID6 style? The
> > > conversion was done at about 4-6MB/sec.
> >
> > Probably.
> >
> > What is the 'layout' reported by "mdadm -D"?
> > If it ends in -6, then it is a RAID5 layout with the Q blocks all on the
> > last disk.
> > If not, then it is already normalised.
> >
> > >
> > > Is there any effect from running --layout=normalise if the above happened?
> > >
> > Probably not.
> >
> > NeilBrown
> >

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

