Re: RAID5 Shrinking array-size nearly killed the system

On 03/12/2011 01:28 PM, Rory Jaffe wrote:
> On Sat, Mar 12, 2011 at 5:58 PM, Phil Turmel <philip@xxxxxxxxxx> wrote:
> 
>> [CC restored]
>>
>> On 03/12/2011 12:37 PM, Rory Jaffe wrote:
>>> This is my plan now--did I get this right? -- thanks --
>>>
>>> shutdown -r now # go to live cd
>>> umount /dev/md0 #just to make sure
>>> e2fsck /dev/md0
>>> resize2fs /dev/md0 3800G #3.2T currently in use
>>> shutdown -r now # go back to main system
>>> mdadm --grow /dev/md0 --array-size 4000000000
>>> mdadm -G -n 4 -x 2 --backup-file=/path/to/file.bak /dev/md0
>>> resize2fs /dev/md0
>>
>> I would do everything in the LiveCD environment, and I would add an fsck
>> after the resize, and again at the end.
>>
>> In the LiveCD, there's a good chance the array will be assembled for you,
>> but possibly under a different device number (e.g. /dev/md127 instead of
>> /dev/md0).  That shouldn't cause any problems, but it does affect the
>> commands you'll type.  "cat /proc/mdstat" will give you a quick summary
>> of where you stand.
>>
>> I can't comment on the size figures you've chosen, as you haven't shared
>> the output of "mdadm -D /dev/md0" and "mdadm -E" for each of the component
>> devices.
>>
>> Also note that the backup file needed by mdadm cannot be *inside* the array
>> you are resizing.  You *must* have another storage device for it.  I use a
>> thumb drive with my LiveCD for this kind of task.
>>
>> Phil
>>
> Here's the data on array sizes
> sudo mdadm -D /dev/md/0_0
> /dev/md/0_0:
>         Version : 0.90
>   Creation Time : Thu Jan  6 06:13:08 2011
>      Raid Level : raid5
>      Array Size : 9762687680 (9310.42 GiB 9996.99 GB)
>   Used Dev Size : 1952537536 (1862.08 GiB 1999.40 GB)
>    Raid Devices : 6
>   Total Devices : 6
> Preferred Minor : 127
>     Persistence : Superblock is persistent
> 
>     Update Time : Sat Mar 12 17:56:34 2011
>           State : clean
>  Active Devices : 6
> Working Devices : 6
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>            UUID : 7e946e9d:b6a3395c:b57e8a13:68af0467
>          Events : 0.72
> 
>     Number   Major   Minor   RaidDevice State
>        0       8        2        0      active sync   /dev/sda2
>        1       8       50        1      active sync   /dev/sdd2
>        2       8       66        2      active sync   /dev/sde2
>        3       8       82        3      active sync   /dev/sdf2
>        4       8       98        4      active sync   /dev/sdg2
>        5       8      114        5      active sync   /dev/sdh2
> 

OK, so your new array size will be 5857612608 KiB (1952537536 KiB per member * 3 data members), i.e. about 5586 GiB (6.0 TB).
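
If you want to double-check that arithmetic from a shell (the sizes "mdadm -D" prints are in KiB):

echo $((1952537536 * 3))        # 3 data members in a 4-device RAID5 -> 5857612608 KiB
echo $((5857612608 / 1048576))  # -> 5586 GiB (rounded down)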

You can do an initial resize2fs to 5.4T to speed things up, since you don't really need to move data that currently sits between the 3.8T and 5.4T marks.  Then set the exact size with "mdadm --grow /dev/md0 --array-size=5857612608" before you fsck it.
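
Concretely, the first pass might look like the sketch below.  The device name is an assumption: the LiveCD may assemble the array as /dev/md127 rather than /dev/md0, so check /proc/mdstat and substitute whatever it shows.

cat /proc/mdstat                 # confirm the device name and that the array is clean
umount /dev/md0                  # in case the LiveCD auto-mounted it
e2fsck -f /dev/md0               # resize2fs insists on a fresh check first
resize2fs /dev/md0 5400G         # coarse shrink; leaves slack above the 3.8T of data
mdadm --grow /dev/md0 --array-size=5857612608   # clamp to the exact new size (KiB)
e2fsck -f /dev/md0               # verify the filesystem still fits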

If that passes, do the rest.  The final resize2fs should be very quick.
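
"The rest" would go roughly like this, with the backup file on a thumb drive as noted above (the /dev/sdX1 device, mount point, and filename are only placeholders):

mount /dev/sdX1 /mnt/usb         # any storage device *outside* the array
mdadm --grow /dev/md0 -n 4 --backup-file=/mnt/usb/md0-reshape.bak
cat /proc/mdstat                 # wait here until the reshape completes;
                                 # the two freed devices are left as spares
resize2fs /dev/md0               # grow the fs to fill the full 5586 GiB
e2fsck -f /dev/md0               # final check before rebooting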

Phil

