Re: Shrink mdadm RAID5 from 6 disks to 5?

Attaching my last two responses to the mailing list, as I forgot to
include it in the CCs.

On Sun, May 30, 2010 at 4:51 PM, Keith . <lukano@xxxxxxxxx> wrote:
> And for posterity, adding the relevant syslog entries.  The capacity
> change noted towards the end may be the root of the problem, given
> the steps I took to get where I am - as mentioned in my previous
> email.
>
> [ 3132.556397] md: raid5 personality registered for level 5
> [ 3132.556399] md: raid4 personality registered for level 4
> [ 3132.556557] raid5: device sdc1 operational as raid disk 0
> [ 3132.556560] raid5: device sdd1 operational as raid disk 5
> [ 3132.556563] raid5: device sde1 operational as raid disk 4
> [ 3132.556566] raid5: device sdf1 operational as raid disk 2
> [ 3132.556568] raid5: device sdb1 operational as raid disk 1
> [ 3132.557254] raid5: allocated 6386kB for md127
> [ 3132.557301] 0: w=1 pa=0 pr=6 m=1 a=2 r=6 op1=0 op2=0
> [ 3132.557305] 5: w=2 pa=0 pr=6 m=1 a=2 r=6 op1=0 op2=0
> [ 3132.557308] 4: w=3 pa=0 pr=6 m=1 a=2 r=6 op1=0 op2=0
> [ 3132.557311] 2: w=4 pa=0 pr=6 m=1 a=2 r=6 op1=0 op2=0
> [ 3132.557315] 1: w=5 pa=0 pr=6 m=1 a=2 r=6 op1=0 op2=0
> [ 3132.557318] raid5: raid level 5 set md127 active with 5 out of 6
> devices, algorithm 2
> [ 3132.557321] RAID5 conf printout:
> [ 3132.557323]  --- rd:6 wd:5
> [ 3132.557326]  disk 0, o:1, dev:sdc1
> [ 3132.557328]  disk 1, o:1, dev:sdb1
> [ 3132.557331]  disk 2, o:1, dev:sdf1
> [ 3132.557333]  disk 4, o:1, dev:sde1
> [ 3132.557336]  disk 5, o:1, dev:sdd1
> [ 3132.557377] md127: detected capacity change from 0 to 7501495992320
> [ 3132.557590]  md127: unknown partition table
> [187074.092745] REISERFS (device dm-0): found reiserfs format "3.6"
> with standard journal
> [187074.092796] REISERFS (device dm-0): using ordered data mode
> [187074.120739] REISERFS (device dm-0): journal params: device dm-0,
> size 8192, journal first block 18, max trans len 1024, max batch 900,
> max commit age 30, max trans age 30
> [187074.121360] REISERFS (device dm-0): checking transaction log (dm-0)
> [187074.322428] REISERFS (device dm-0): Using r5 hash to sort names
> [360791.852592] md127: detected capacity change from 7501495992320 to
> 6001010237440
> [360829.233219]  md127: unknown partition table
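>
> If I'm reading the numbers right, they line up with ~1.5tb members:
> 7501495992320 bytes / 5 data disks = ~1.5tb each (the 6-device RAID5,
> one disk's worth of parity), and 6001010237440 bytes / 4 data disks =
> ~1.5tb each again - so the second capacity change looks like an
> array-size shrink down to what the 5-device RAID5 will hold.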
>
>
> On Sun, May 30, 2010 at 4:46 PM, Keith . <lukano@xxxxxxxxx> wrote:
>> Sorry for the late followup on this.
>>
>> I am still struggling with the shrink; the following is not giving me
>> any usable results for the array-size suggestion from mdadm.
>>
>> mdadm -v --grow /dev/md127 --array-size= does not give me any error or
>> results, nor a syslog entry with the array-size value.
>>
>> I'm wondering if the complications I'm seeing are related to my
>> particular situation.  I'm going to reiterate it, as I don't think I
>> was clear originally:
>>
>> - I had a 6 disk RAID6 array (1.5tb disks).
>> - One of the drives failed, and due to my poor luck with 1.5s, I
>> decided to shrink it from a degraded RAID6 array (running 5 of 6
>> disks) to a stable, undegraded RAID5.
>> - I was able to convert it from RAID6 to RAID5, and am now left with a
>> degraded RAID5 array (6 disks, 5 active).
>> - The actual available space should not change, as in the end there
>> are 4x data drives + one parity drive for RAID5.
>>
>> Correct?  Am I complicating things somewhere, or forgetting some vital logic?
>>
>> And as requested, /proc/mdstat is as follows;
>>
>> Personalities : [raid6] [raid5] [raid4]
>> md127 : active raid5 sdc1[0] sdd1[5] sde1[4] sdf1[2] sdb1[1]
>> 5860361560 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUU_UU]
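>>
>> (If I'm reading that right, [6/5] [UUU_UU] means 6 slots with 5
>> active members - slot 3 is the failed/removed disk.)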
>>
>> Thanks again for your help Neil.
>>
>> - Keith
>>
>>
>> On Thu, May 20, 2010 at 4:32 PM, Neil Brown <neilb@xxxxxxx> wrote:
>>> On Thu, 20 May 2010 15:42:23 -0600
>>> "Keith ." <lukano@xxxxxxxxx> wrote:
>>>
>>>> Thanks for the quick reply Neil,
>>>>
>>>> I tried the suggested syntax, excluding the array-size in order to
>>>> have mdadm give me the value - but it did not.  I also tried grabbing
>>>> the current array size from df, but I got an error about being unable
>>>> to change array size in the same operation.
>>>>
>>>> So just to confirm I'm understanding you correctly: mdadm -G
>>>> /dev/md127 --raid-devices 5 --backup=/root/backup should generate
>>>> an error that provides the array-size value?
>>>
>>> Yes.
>>> Actually, you probably do need to change the size first:
>>>
>>>  mdadm --grow /dev/md127 --array-size=xxxxx
>>> then once you are sure you haven't lost your data, change the number of
>>> devices.
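>>>
>>> Roughly, something like this (the array-size value is just a
>>> placeholder - use the size mdadm reports):
>>>
>>>  mdadm --grow /dev/md127 --array-size=xxxxx
>>>  # check that your data is still intact before continuing
>>>  mdadm --grow /dev/md127 --raid-devices=5 --backup-file=/root/backup-file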
>>>
>>> It might help if you ran the commands with "-v" and reported all the
>>> messages generated, and any kernel log messages.
>>> And maybe
>>>  cat /proc/mdstat
>>>
>>> just to give us some more context.
>>> Maybe you have a write-intent-bitmap attached to the array.  You cannot
>>> reshape an array with one of those attached.
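>>> (/proc/mdstat shows a "bitmap:" line when one is present; it can
>>> usually be removed with something like
>>>
>>>  mdadm --grow /dev/md127 --bitmap=none
>>>
>>> and added back with --bitmap=internal after the reshape.)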
>>>
>>> NeilBrown
>>>
>>>
>>>>
>>>> On Thu, May 20, 2010 at 3:34 PM, Neil Brown <neilb@xxxxxxx> wrote:
>>>> > On Thu, 20 May 2010 07:45:23 -0600
>>>> > "Keith ." <lukano@xxxxxxxxx> wrote:
>>>> >
>>>> >> Does mdadm 3.1.x not support shrinking the number of disks in a RAID5 array?
>>>> >>
>>>> >> I have successfully converted my 6x1.5tb RAID6 array to a 6x1.5tb
>>>> >> RAID5 array.  The issue I now face is that one of those 6 drives
>>>> >> has failed / been removed, and as a result the new RAID5 array is
>>>> >> down its parity drive.
>>>> >>
>>>> >> I knew I was short a disk when I started the process, but I was under
>>>> >> the assumption that mdadm now supported shrinking of RAID5/6 arrays.
>>>> >> Am I mistaken, or can anyone throw some suggestions at me so I can
>>>> >> give it a try?
>>>> >
>>>> > It should work.
>>>> >
>>>> >  mdadm -G /dev/mdX --raid-devices 5 --array-size=XXXXX --backup=/root/backup-file
>>>> >
>>>> > If you don't give the 'array-size' value mdadm will tell you what it has to
>>>> > be.  You need to be sure that all your data is already within that space.
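>>>> >
>>>> > (If a filesystem on the array extends beyond that size, it would
>>>> > need to be shrunk first - e.g. with resize_reiserfs or resize2fs,
>>>> > and lvreduce if LVM sits in between - before reducing the array.)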
>>>> >
>>>> > NeilBrown
>>>> >
>>>
>>>
>>
>

