RAID5 -> RAID6 conversion, please help

Dear all,

I tried to convert my existing 5-disk RAID5 array to a 6-disk RAID6 array.
This was my existing array:
----------------------------------------------------------------------------
/dev/md0:
  Version : 0.90
  Raid Level : raid5
  Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
  Raid Devices : 5
  Total Devices : 5
  Persistence : Superblock is persistent
  State : clean
  Active Devices : 5
  Working Devices : 5
  Layout : left-symmetric
  Chunk Size : 512K
  Events : 0.156

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       81        1      active sync   /dev/sdf1
       2       8       33        2      active sync   /dev/sdc1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
----------------------------------------------------------------------------
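
(Sanity check on the numbers: with 5 devices in RAID5, 4 of them carry data,
and 4 x 1465137152 KiB = 5860548608 KiB, which matches the reported Array
Size.)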

I did the conversion according to the howtos, so:
$ mdadm --add /dev/md0 /dev/sdd1
then:
$ mdadm --grow /dev/md0 --level=6 --raid-devices=6 \
        --backup-file=/mnt/mdadm-raid5-to-raid6.backup

Instead of starting the reshape process, mdadm responded with this:
mdadm: /dev/md0: changed level to 6 (or something like that; I don't remember
the exact words, but it was about changing the level)
mdadm: /dev/md0: Cannot get array details from sysfs
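
(In case it's relevant: the state the kernel actually has can be read straight
from sysfs. Assuming the standard md sysfs attributes, something like

$ cat /sys/block/md0/md/level
$ cat /sys/block/md0/md/raid_disks

should show which level and disk count the kernel now believes the array has.)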

And the array became this:
----------------------------------------------------------------------------
/dev/md0:
  Raid Level : raid6
  Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
  Raid Devices : 6
  Total Devices : 6
  Persistence : Superblock is persistent
  State : clean, degraded
  Active Devices : 5
  Working Devices : 6
  Failed Devices : 0
  Spare Devices : 1
  Events : 0.170

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       81        1      active sync   /dev/sdf1
       2       8       33        2      active sync   /dev/sdc1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
       5       0        0        5      removed
       6       8       49        -      spare   /dev/sdd1
----------------------------------------------------------------------------

At this point I realized that /dev/sdd had previously been a member of another
RAID array in another machine, and although I re-partitioned the disk, I
didn't remove the old superblock. Maybe that was the reason for the mdadm
error. Since the state of /dev/sdd1 was spare, I removed it:

$ mdadm --remove /dev/md0 /dev/sdd1

then cleared the remaining superblock:
$ mdadm --zero-superblock /dev/sdd1
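
To double-check that the old metadata was really gone, I believe mdadm's
--examine can be used here; assuming its usual behaviour,

$ mdadm --examine /dev/sdd1

should now report that no md superblock is detected, rather than print
details of the old array.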

then added it back to the array:
$ mdadm --add /dev/md0 /dev/sdd1

and started the grow process again:
$ mdadm --grow /dev/md0 --level=6 --raid-devices=6 \
        --backup-file=/mnt/mdadm-raid5-to-raid6.backup
mdadm: /dev/md0: no change requested

Mdadm claimed no change was requested; however, it started rebuilding the
array. It's currently rebuilding:
----------------------------------------------------------------------------
/dev/md0:
  Version : 0.90
  Raid Level : raid6
  Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
  Raid Devices : 6
  Total Devices : 6
  Persistence : Superblock is persistent
  State : clean, degraded, recovering
  Active Devices : 5
  Working Devices : 6
  Failed Devices : 0
  Spare Devices : 1
  Layout : left-symmetric-6
  Chunk Size : 512K
  Rebuild Status : 2% complete
  Events : 0.186

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       81        1      active sync   /dev/sdf1
       2       8       33        2      active sync   /dev/sdc1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
       6       8       49        5      spare rebuilding   /dev/sdd1
----------------------------------------------------------------------------
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd1[6] sde1[4] sdc1[2] sdf1[1] sdg1[3] sdb1[0]
      5860548608 blocks level 6, 512k chunk, algorithm 18 [6/5] [UUUUU_]
      [>....................]  recovery =  2.3% (34438272/1465137152) finish=1074.5min speed=22190K/sec

unused devices: <none>
----------------------------------------------------------------------------
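
One thing I notice: the layout is now left-symmetric-6, and mdstat reports
algorithm 18. If I read the mdadm man page correctly, that is the RAID5
left-symmetric layout with all the Q parity blocks placed on the one extra
disk, so the kernel may only be filling the new disk with Q parity (a plain
recovery) instead of relocating existing data (a full reshape). Assuming the
standard md sysfs attribute, the kind of operation in progress can be checked
with:

$ cat /sys/block/md0/md/sync_action

which I would expect to say "recover" rather than "reshape" here.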

Mdadm didn't create the backup file, and the process seems too fast to me for
a RAID5 -> RAID6 conversion.
Please help me understand what's happening now.

Cheers,
Peter



