On 22 April 2011 10:39, Mathias Burén <mathias.buren@xxxxxxxxx> wrote:
> On 13 April 2011 12:44, John Robinson <john.robinson@xxxxxxxxxxxxxxxx> wrote:
>> (Subject line amended by me :-)
>>
>> On 12/04/2011 17:56, Mathias Burén wrote:
>> [...]
>>>
>>> I'm approaching over 6.5TB of data, and with an array this large I'd
>>> like to migrate to RAID6 for a bit more safety. I'm just checking that
>>> I understand this correctly; is this how to do it?
>>>
>>> * Add a HDD to the array as a hot spare:
>>> mdadm --manage /dev/md0 --add /dev/sdh1
>>>
>>> * Migrate the array to RAID6:
>>> mdadm --grow /dev/md0 --raid-devices 7 --level 6
>>
>> You will need a --backup-file to do this, on another device. Since you
>> are keeping the same number of data discs before and after the reshape,
>> the backup file will be needed throughout the reshape, so the reshape
>> will take perhaps twice as long as a grow or shrink. If your backup file
>> is on the same disc(s) as md0 (e.g. on another partition, or on an array
>> made up of other partitions on the same disc(s)), it will take far
>> longer (gazillions of seeks), so I'd recommend a separate drive, or a
>> small SSD if you have one, for the backup file.
>>
>> Doing the above with --layout=preserve will spare you the reshape, so
>> you won't need the backup file, but there will still be an initial sync
>> of the Q parity, and the layout will be RAID4-like with all the Q parity
>> on one drive, so its performance may be RAID4-like too, i.e. small
>> writes never faster than the parity drive. Having said that, streamed
>> writes can still potentially go as fast as your 5 data discs, as with
>> your RAID5. In practice, I'd be surprised if it was faster than about
>> twice the speed of a single drive (the same as your current RAID5), and
>> as Neil Brown notes in his reply, RAID6 doesn't currently have the
>> read-modify-write optimisation for small writes, so small-write
>> performance is liable to be even poorer than your RAID5 in either
>> layout.
>>
>> You will never lose any redundancy in either of the above, but you won't
>> gain RAID6 double redundancy until the reshape (or the Q-drive sync with
>> --layout=preserve) has completed - just the same as if you were
>> replacing a dead drive in an existing RAID6.
>>
>> Hope the above helps!
>>
>> Cheers,
>>
>> John.
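As commands, the two routes John describes look roughly like this (a
sketch; the backup-file path is a placeholder, and the spare is assumed
to have already been added with --add):

    # Route 1: full reshape to the standard RAID6 layout. The backup file
    # must live on a device that is not part of md0, for the whole reshape.
    mdadm --grow /dev/md0 --raid-devices 7 --level 6 \
          --backup-file=/mnt/other-disk/md0-reshape-backup.bin

    # Route 2: skip the data reshape; only the new Q parity is synced,
    # leaving a RAID4-like layout with all the Q parity on the new drive.
    mdadm --grow /dev/md0 --raid-devices 7 --level 6 --layout=preserve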
> Hi,
>
> Thanks for the replies. Alright, here we go:
>
> $ mdadm --grow /dev/md0 --bitmap=none
> $ mdadm --manage /dev/md0 --add /dev/sde1
> $ mdadm --grow /dev/md0 --verbose --layout=preserve --raid-devices 7
> --level 6 --backup-file=/root/md-raid5-to-raid6-backupfile.bin
> mdadm: level of /dev/md0 changed to raid6
>
> $ cat /proc/mdstat
>                                                  Fri Apr 22 10:37:44 2011
>
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid6 sde1[7] sdg1[0] sdh1[6] sdf1[5] sdc1[3] sdd1[4] sdb1[1]
>       9751756800 blocks super 1.2 level 6, 64k chunk, algorithm 18
> [7/6] [UUUUUU_]
>       [>....................]  reshape =  0.0% (224768/1950351360)
> finish=8358.5min speed=3888K/sec
>
> unused devices: <none>
>
> And in dmesg:
>
> RAID conf printout:
>  --- level:6 rd:7 wd:6
>  disk 0, o:1, dev:sdg1
>  disk 1, o:1, dev:sdb1
>  disk 2, o:1, dev:sdd1
>  disk 3, o:1, dev:sdc1
>  disk 4, o:1, dev:sdf1
>  disk 5, o:1, dev:sdh1
> RAID conf printout:
>  --- level:6 rd:7 wd:6
>  disk 0, o:1, dev:sdg1
>  disk 1, o:1, dev:sdb1
>  disk 2, o:1, dev:sdd1
>  disk 3, o:1, dev:sdc1
>  disk 4, o:1, dev:sdf1
>  disk 5, o:1, dev:sdh1
>  disk 6, o:1, dev:sde1
> md: reshape of RAID array md0
> md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
> md: using maximum available idle IO bandwidth (but not more than
> 200000 KB/sec) for reshape.
> md: using 128k window, over a total of 1950351360 blocks.
>
> IIRC there's a way to speed up the migration by using a larger cache
> value somewhere, no?
>
> Thanks,
> Mathias

Increasing the stripe cache on the md device from 1027 to 32k or 16k didn't
make a difference; it's still around 3800KB/s reshape. Oh well, we'll see
if it's still alive in 5.5 days!

Cheers,
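The cache in question is the md stripe cache, and the reshape speed floor
and ceiling are also tunable. A minimal sketch of the usual knobs, run as
root (the values are illustrative, not recommendations):

    # Number of stripe cache entries for md0 (the raid5/6 default is 256).
    echo 16384 > /sys/block/md0/md/stripe_cache_size

    # Floor and ceiling for resync/reshape speed, in KB/s per disk.
    echo 50000  > /proc/sys/dev/raid/speed_limit_min
    echo 200000 > /proc/sys/dev/raid/speed_limit_max

As the follow-up above shows, though, a reshape that is bottlenecked on
seeks rather than on the cache or the bandwidth limits may not speed up
at all.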