Re: Raid5 resize "testing opportunity"

Hi Neil,

I'm currently running an active raid5 array of 12 x 300GB SATA devices.
During the last couple of months I have grown the array twice (from 4
to 8, and from 8 to 12). I was using a 2.6.16-rc1 kernel with the (at
that time) latest md patch.

I'm happy to say that both times the growing procedure completed
successfully!

This is how I did it:

At first I had 4 devices ( /dev/sd{a,b,c,d} ) running in an active raid5
array (chunk-size 256). When I bought 4 more I thought I'd try to grow
the existing array instead of running a second one. I assembled the
array with the original 4 drives and made sure that it started without
problems (checked /proc/mdstat with cat). After that I partitioned each
of the 4 new devices with cfdisk into one huge partition of type FD
(Linux raid autodetect) and added them as spares with the command:

# mdadm --add /dev/md0 /dev/sd{e,f,g,h}1
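
(As an extra check, not strictly necessary, mdadm's detail view can
also confirm that the new disks really came in as spares:

# mdadm --detail /dev/md0

The four new partitions should be listed there with a spare role.)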

After that I checked the /proc/mdstat to confirm that they had been
successfully added and then executed the grow command:

# mdadm -Gv /dev/md0 -n8

which started the whole growing procedure. After that I waited (it took
about 6 hours to go from 4 to 8 and almost 11 hours from 8 to 12).
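
(A side note for anyone facing a similar wait: the reshape progress can
be followed in /proc/mdstat, and if the rebuild speed seems held back by
the defaults, the md speed limits can be raised. Roughly like this, with
example numbers in KB/s, not the ones I used:

# watch -n 60 cat /proc/mdstat
# echo 10000 > /proc/sys/dev/raid/speed_limit_min
# echo 200000 > /proc/sys/dev/raid/speed_limit_max

Remember to put the limits back to something sane afterwards.)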


The following information might not belong on the raid list, but I
thought it might be useful to someone:
---------------------------------------------------------------------
The raid is encrypted with LUKS (aes-cbc-essiv:sha256) and has an ext3
filesystem formatted with '-T largefile', '-m 0' and '-R stride=64'.
After I had successfully grown the raid5 array I managed to resize the
LUKS mapping and the ext3 filesystem with the following commands:

(After decrypting the raid using standard luksOpen procedure)
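
For completeness, that step looks roughly like this, with the mapping
name cmd0 that is used below:

# cryptsetup luksOpen /dev/md0 cmd0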

# cryptsetup resize cmd0
(no, I didn't forget the <size> argument; when it is left out cryptsetup
resizes the mapping to the full size of the underlying device)

# resize2fs -p /dev/mapper/cmd0
seemed to do the trick with the ext3 filesystem.
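
(Two small footnotes: the stride=64 simply follows from the chunk size,
assuming the default 4KB ext3 block size (256KB / 4KB = 64), and the new
sizes can be sanity-checked afterwards with something like:

# cryptsetup status cmd0
# tune2fs -l /dev/mapper/cmd0 | grep -i 'block count'

which should both reflect the grown array.)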
---------------------------------------------------------------------

This is how I did it both times and I must say, even though it was
scary as hell growing a 2.1TB raid full of need-to-have data, it was
really interesting and boy am I glad it worked! =)

I just thought I'd contribute to the raid list with my grow story. It
can be nice to hear from those who succeed too and not only when people
have accidents. =)

Thanks for the great work on the growing code!

Best regards
Per Lindstrand, Sweden

Neil Brown wrote:
> On Thursday May 18, patrik@xxxxxxxxxxx wrote:
>> Hi Neil,
>>
>> The raid5 reshape seems to have gone smoothly (nice job!), though it
>> took 11 hours! Are there any pieces of info you would like about the array?
> 
> Excellent!
> 
> No, no other information would be useful.  
> This is the first real-life example that I know of of adding 2 devices
> at once.  That should be no more difficult, but it is good to know
> that it works in fact as well as in theory.
> 
> Thanks,
> NeilBrown
