Re: raid1 becoming raid0 when device is removed before reboot

On 2018-09-03 08:48, Guoqing Jiang wrote:
> On 08/31/2018 05:18 PM, Guoqing Jiang wrote:
>> On 08/30/2018 10:32 AM, Niklas Hambüchen wrote:
>>> Is it expected that raid1 turns into raid0 in this way when during a reboot an expected device is not present (e.g. because it is unplugged or was replaced)?
>>> If yes, what is the idea behind that, and why doesn't it go into the normal degraded mode instead?
>>> Is it possible to achieve that? I had hoped that I would be able to continue booting into a degraded system if a disk fails during a reboot (and then be notified of the degradation by mdadm as usual), but this isn't the case if an array comes back as raid0 and inactive after reboot.
>>> Finally, if these topics are already explained somewhere, where can I read more about it?
>>
>> Maybe we need to call do_md_run when assembling an array, need to investigate it.
> 
> It doesn't work, actually the array can be activated by "echo active > /sys/block/md0/md/array_state".

Thank you, this echo worked!
I just confirmed it on another machine.

It immediately brings the array back from the incorrect "Raid Level : raid0" to the correct "raid1".

I also noticed that `mdadm --run /dev/md0` has the same effect.

But `mdadm --run --readonly /dev/md0` didn't; it says "/dev/md0 does not appear to be active".
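
For reference, here is a minimal sketch of the recovery steps (assuming the array is /dev/md0 and it came up inactive after boot; adjust the device name for your setup):

    # Check whether the array was assembled but left inactive
    cat /proc/mdstat

    # Either of these brought it back to an active, degraded raid1 for me:
    echo active > /sys/block/md0/md/array_state
    # ...or equivalently:
    mdadm --run /dev/md0

    # Verify the level and state afterwards
    mdadm --detail /dev/md0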

So the remaining question is:

Why does the device appear as raid0 at all?

I would expect it to come back from a reboot as a degraded raid1, because that's what it is (and mdadm seems to think so too as soon as you activate it).
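
In case it helps with reproducing this, the mismatch is easy to see by comparing a member superblock with what the inactive array reports; a sketch, assuming /dev/md0 assembled from /dev/sda1 (the member device name here is just an example):

    # The member superblock still records the real level (raid1):
    mdadm --examine /dev/sda1 | grep -i 'raid level'

    # ...while the assembled-but-inactive array reports raid0:
    mdadm --detail /dev/md0 | grep -i 'raid level'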


