Re: MD devnode still present after 'remove' udev event, and mdadm reports 'does not appear to be active'

Thanks, Neil.
To end this long email thread: which is "more important", the update time
or the event count? Or are they perhaps updated simultaneously?

Thanks,
  Alex.



On Thu, Oct 20, 2011 at 1:56 AM, NeilBrown <neilb@xxxxxxx> wrote:
> On Wed, 19 Oct 2011 14:01:16 +0200 Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
> wrote:
>
>> Thanks, Neil.
>> I experimented with the --force switch, and I saw that it makes it
>> possible to start the array even in cases where I am sure the data will
>> be corrupted, for example by selecting stale drives (ones that had
>> previously been replaced, etc.).
>> Can I have some indication that it is "relatively safe" to start the
>> array with --force?
>> For example, in the case of "dirty degraded", perhaps it might be
>> relatively safe.
>>
>> What should I look at? The output of --examine? Or something else?
>
> Yes, look at the output of --examine.  Look particularly at the update
> time and event counts, but also at the RAID level, etc., and the role in
> the array played by each device.
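>
> For example, something like this (device names are only placeholders)
> will show the two fields worth comparing first:
>
>   # Print the superblock of each candidate member and compare the
>   # "Update Time" and "Events" lines across the devices.
>   for dev in /dev/sda1 /dev/sdb1 /dev/sdc1; do
>       echo "== $dev =="
>       mdadm --examine "$dev" | grep -E 'Update Time|Events'
>   done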
>
> Then choose the set of devices that you think are most likely to have
> current data and give them to "mdadm --assemble --force".
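>
> For instance, if two devices look current but a third is clearly stale,
> assemble from the two current ones only (names below are just
> placeholders):
>
>   mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1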
>
> Obviously if one device hasn't been updated for months, that is probably a
> bad choice, while if one device is only a few minutes behind the others, then
> that is probably a good choice.
>
> Normally there isn't much choice to be made, and the answer will be obvious.
> But if you let devices fail and leave them lying around, or don't replace
> them, then that can cause problems.
>
> If you need to use --force, there might be some corruption.  Or there
> might be none.  Or there could be a lot.  But mdadm has no way of knowing.
> Usually mdadm will do the best that is possible, but it cannot know how
> good that is.
>
> NeilBrown
>
>
>
>>
>> Thanks,
>>   Alex.
>>
>>
>> On Wed, Oct 12, 2011 at 5:45 AM, NeilBrown <neilb@xxxxxxx> wrote:
>> > On Tue, 11 Oct 2011 15:11:47 +0200 Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
>> > wrote:
>> >
>> >> Hello Neil,
>> >> can you please confirm for me something?
>> >> If the array is FAILED (i.e. when your enough() function returns 0) -
>> >> for example, after a simultaneous failure of all drives - then the
>> >> only option to try to recover such an array is to do:
>> >> mdadm --stop
>> >> and then attempt
>> >> mdadm --assemble
>> >>
>> >> correct?
>> >
>> > Yes, though you will probably want a --force as well.
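>> >
>> > Roughly (array and device names are only placeholders):
>> >
>> >   mdadm --stop /dev/md0
>> >   mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1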
>> >
>> >>
>> I did not see any other option to recover such an array.  Incremental
>> assemble doesn't work in that case; it simply adds the drives back as
>> spares.
>> >
>> > In recent versions of mdadm it shouldn't add them as spares.  It should
>> > say that it cannot add them and give up.
>> >
>> > NeilBrown
>> >
>> >
>> >
>
>

