Re: Raid5 to another raid level??

Thanks, that worked fine. I did have to shrink the partition down first,
but that's no biggie.
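
For the archives, the full sequence ended up looking roughly like this.
The resize2fs step assumes an ext4 filesystem directly on /dev/md0 (that
part is specific to my setup -- adjust for whatever filesystem you have):

  # shrink the filesystem first, since in my case the usable size
  # under 1.0 metadata came out slightly smaller than under 0.90
  umount /dev/md0
  e2fsck -f /dev/md0
  resize2fs /dev/md0 <new-size>   # a little under the new array size

  # stop the array and re-create it with identical geometry
  mdadm --stop /dev/md0
  mdadm --create /dev/md0 --chunk=512 --metadata=1.0 --assume-clean \
      --level=5 --raid-devices=4 /dev/sde /dev/sdc /dev/sdd /dev/sdb

  # with --assume-clean and a read-only mount nothing is written,
  # so check the data before trusting the new metadata
  mount -o ro /dev/md0 /mnt
  # ... inspect files, then remount read-write and grow the
  # filesystem back out with resize2fs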

On 12 October 2011 11:14, NeilBrown <neilb@xxxxxxx> wrote:
> On Wed, 12 Oct 2011 10:25:16 +0100 Michael Busby <michael.a.busby@xxxxxxxxx>
> wrote:
>
>> Thanks, can I just double-check the command with you?
>>
>> mdadm --create /dev/md0 --chunk=512 --metadata=1.0 --assume-clean
>> --level=5 --raid-devices=4 /dev/sde /dev/sdc /dev/sdd /dev/sdb
>>
>
> Correct.  Of course you have to
>   mdadm --stop /dev/md0
> first, but you knew that.
>
> NeilBrown
>
>
>>
>> On 12 October 2011 05:10, NeilBrown <neilb@xxxxxxx> wrote:
>> > On Mon, 10 Oct 2011 22:47:58 +0100 Michael Busby <michael.a.busby@xxxxxxxxx>
>> > wrote:
>> >
>> >> I have a quick question. I remember reading somewhere about not using
>> >> metadata version 0.9 with drives larger than 2TB.
>> >> > At the moment I have the following:
>> >> >
>> >> > root@BlueBolt:~# cat /proc/mdstat
>> >> > Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
>> >> > md0 : active raid5 sdd[2] sde[0] sdb[3] sdc[1]
>> >> >       5860543488 blocks level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
>> >> >       bitmap: 2/15 pages [8KB], 65536KB chunk
>> >> > unused devices: <none>
>> >> > root@BlueBolt:~# mdadm --detail /dev/md0
>> >> > /dev/md0:
>> >> >         Version : 0.90
>> >> >   Creation Time : Mon Jul  4 15:08:38 2011
>> >> >      Raid Level : raid5
>> >> >      Array Size : 5860543488 (5589.05 GiB 6001.20 GB)
>> >> >   Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
>> >> >    Raid Devices : 4
>> >> >   Total Devices : 4
>> >> > Preferred Minor : 0
>> >> >     Persistence : Superblock is persistent
>> >> >   Intent Bitmap : Internal
>> >> >     Update Time : Mon Oct 10 22:44:11 2011
>> >> >           State : active
>> >> >  Active Devices : 4
>> >> > Working Devices : 4
>> >> >  Failed Devices : 0
>> >> >   Spare Devices : 0
>> >> >          Layout : left-symmetric
>> >> >      Chunk Size : 512K
>> >> >            UUID : ddab6c38:dee3ead0:95ba4558:1c9a49ed (local to host BlueBolt)
>> >> >          Events : 0.2836102
>> >> >     Number   Major   Minor   RaidDevice State
>> >> >        0       8       64        0      active sync   /dev/sde
>> >> >        1       8       32        1      active sync   /dev/sdc
>> >> >        2       8       48        2      active sync   /dev/sdd
>> >> >        3       8       16        3      active sync   /dev/sdb
>> >> > which, as you can see, is using 0.90. I am looking at replacing all
>> >> > the 2TB drives with 3TB versions. Would I need to update the metadata
>> >> > version? If so, how can I go about this?
>> >
>> > With a really recent kernel (3.1) and a recent mdadm (also not yet
>> > released), 0.90 can go up to 4TB (it has 32 bits to count kilobytes
>> > with).
>> >
>> > Alternatively, you need to convert to 1.0 metadata.
>> >
>> > Currently the only way to do this is to 'create' the array again.
>> > Be sure to specify the same chunk size, the right metadata, the same
>> > level and number of disks, and the correct disks in the correct order.
>> > And use "--assume-clean".  Then check that your data is still consistent.
>> > With --assume-clean and a read-only mount, no data will actually be
>> > changed, only metadata.
>> >
>> > NeilBrown
>> >
>> >
>
>
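
P.S. For anyone else who finds this thread: the 4TB ceiling Neil mentions
falls out of the 0.90 superblock storing the component device size as a
32-bit count of 1K blocks, e.g.:

  # 2^32 blocks of 1 KiB each = 4 TiB per component device
  $ echo $(( 2**32 * 1024 / 1024**4 ))
  4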

