Re: Converting from Raid 5 to 6

Using -f seems to have worked; I'm just running e2fsck now.

When running a command like mdadm --assemble --force --verbose
/dev/md0 /dev/sd[abcde], how important is the drive order?
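A sketch of that forced assemble (the device names /dev/sd[abcde] are assumed from the message, not verified). mdadm identifies each member's slot from its superblock rather than from its position on the command line, so the listing order does not matter for --assemble:

```shell
# Forced assembly sketch; /dev/sd[abcde] are assumed device names.
# mdadm reads each member's superblock to work out which slot it
# belongs in, so the order of devices here is irrelevant.
mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcde]
```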


Sent from my iPad

> On 2 Dec 2013, at 05:51 am, NeilBrown <neilb@xxxxxxx> wrote:
>
> On Sat, 30 Nov 2013 22:13:58 +0000 Michael Busby <michael.a.busby@xxxxxxxxx>
> wrote:
>
>> Sorry to bring up an old thread. Last night I had a power cut, and this
>> morning when the power came back I tried to boot the server, but the
>> RAID will not assemble. Using a live CD I have found that one of the
>> disks is reported as "possibly out of date". Is there any way to force
>> this disk back in? The bigger problem I have is that my external caddy
>> has died, so I was running a degraded RAID 6, but now it is only
>> starting with 4 out of 6 devices. Is there any way to get this back?
>
> It's really hard to know what is possible without precise details.
> Output of "mdadm -E" for each member device is always a good idea.
> If you are having trouble assembling, then the output of the assemble
> command with -vv added never goes astray.
> Have you tried adding "-f" to the assemble command?  It often helps and
> is unlikely to hurt.
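Neil's diagnostic steps can be sketched as follows (the device names are placeholders; substitute the actual member devices):

```shell
# Examine the superblock of each member device (paths are examples)
for d in /dev/sd[a-f]; do
    mdadm -E "$d"
done

# Retry assembly with --force, and -vv for verbose diagnostics
mdadm --assemble --force -vv /dev/md0 /dev/sd[a-f]
```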
>
>>
>> I have thought about recreating the array using the --assume-clean
>> option, but I'm not sure if that's a good idea.
>
> Not a good idea except as a very last resort.
>
> NeilBrown
>
>
>>
>> any help will be much appreciated
>>
>>
>>
>>> On 24 October 2011 21:47, Michael Busby <michael.a.busby@xxxxxxxxx> wrote:
>>>
>>> I was sure I added the device before, but when I rebooted the system it
>>> seems to have lost the extra drive, and I had already restarted the grow
>>> command without checking the disk was there, so it was more than likely
>>> a mistake by me.
>>>
>>>
>>>
>>>> On 24 October 2011 21:39, NeilBrown <neilb@xxxxxxx> wrote:
>>>> On Mon, 24 Oct 2011 21:19:22 +0100 Michael Busby <michael.a.busby@xxxxxxxxx>
>>>> wrote:
>>>>
>>>>> OK, thanks. I have one small issue: when I added the extra disk it was
>>>>> marked as a spare. Is this normal?
>>>>>
>>>>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
>>>>> [raid4] [raid10]
>>>>> md0 : active raid6 sde[0] sdg[6](S) sda[4] sdb[3] sdd[2] sdc[1]
>>>>>      7814055936 blocks super 1.0 level 6, 512k chunk, algorithm 18 [6/5] [UUUUU_]
>>>>>      [>....................]  reshape =  3.0% (59244544/1953513984) finish=11122.8min speed=2837K/sec
>>>>
>>>> It looks like the extra drive was added after you started the grow.
>>>>
>>>> So it is still a spare.
>>>> Once the grow finishes you will have a singly-degraded RAID6.
>>>> Then it will immediately start recovering the missing device to the spare.
>>>>
>>>> Did you add the extra drive after starting the grow - or before??
>>>>
>>>> NeilBrown
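The reshape, and the recovery onto the spare that follows it, can be watched from userspace. A minimal sketch (the array name /dev/md0 is taken from this thread):

```shell
# Watch reshape/recovery progress as reported by the md driver
cat /proc/mdstat

# Array-level view: state, reshape status, and rebuild progress
mdadm --detail /dev/md0
```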
>>>
>>>
>>>>
>>>>>
>>>>>
>>>>>
>>>>>> On 24 October 2011 21:14, NeilBrown <neilb@xxxxxxx> wrote:
>>>>>> On Mon, 24 Oct 2011 17:03:46 +0100 Michael Busby <michael.a.busby@xxxxxxxxx>
>>>>>> wrote:
>>>>>>
>>>>>>> Should the speed be very slow during this process? It's a lot
>>>>>>> slower than a normal grow.
>>>>>>
>>>>>> Yes.
>>>>>> The array is being reshaped in-place.  i.e. data is being read from part of
>>>>>> the array, rearranged, and written back to the same part of the array.
>>>>>> As you can imagine, this is risky - a crash will leave an inconsistent state.
>>>>>> Hence the backup file.  Everything in the array is first written to the
>>>>>> backup file, then back to the array.  So it is slow.
>>>>>>
>>>>>> A "normal" grow is writing to somewhere where there is no valid data, so it
>>>>>> doesn't need the backup.
>>>>>>
>>>>>> I do have a plan to make this faster.... but I have lots of plans and little
>>>>>> time.
>>>>>>
>>>>>> NeilBrown
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> reshape =  1.2% (25006080/1953513984) finish=12481.8min speed=2574K/sec
>>>>>>>
>>>>>>>> On 24 October 2011 15:11, Mathias Burén <mathias.buren@xxxxxxxxx> wrote:
>>>>>>>>> On 24 October 2011 14:11, Michael Busby <michael.a.busby@xxxxxxxxx> wrote:
>>>>>>>>> At the moment I have a RAID5 setup with 5 disks. I am looking to add a
>>>>>>>>> 6th disk and change from RAID 5 to RAID 6.
>>>>>>>>>
>>>>>>>>> Having looked at Neil's site I have found the following command, and
>>>>>>>>> just want to double-check that this is still the recommended way of
>>>>>>>>> converting:
>>>>>>>>>
>>>>>>>>> mdadm --grow /dev/md0 --level=6 --raid-disks=6 --backup-file=/home/md.backup
>>>>>>>>>
>>>>>>>>> Also, would I need to add the extra disk before or after running the command?
>>>>>>>>>
>>>>>>>>> cheers
>>>>>>>>> --
>>>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>>>>>>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>>>>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I grew my 6-disk RAID5 to a 7-disk RAID6. First, add the drive. Then
>>>>>>>> partition it as required. Then add the drive to the array (I think
>>>>>>>> it'll become a spare?). Then you can grow it.
>>>>>>>>
>>>>>>>> Make sure you're using the latest mdadm tools available.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Mathias
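The sequence Mathias describes (add the disk as a spare first, then grow) can be sketched as follows, using the device names from this thread as assumptions:

```shell
# Partition the new disk if needed, then add it to the array as a spare
# (/dev/sdg is the device name used later in this thread)
mdadm /dev/md0 --add /dev/sdg

# Convert RAID5 -> RAID6 across 6 devices; the backup file protects
# the in-place part of the reshape against a crash
mdadm --grow /dev/md0 --level=6 --raid-disks=6 \
      --backup-file=/home/md.backup
```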
>



