Re: raid1 becoming raid0 when device is removed before reboot

On 12/5/18 3:35 PM, Guoqing Jiang wrote:


On 11/15/18 5:40 AM, Niklas Hambüchen wrote:
On 2018-09-03 08:48, Guoqing Jiang wrote:
On 08/31/2018 05:18 PM, Guoqing Jiang wrote:
On 08/30/2018 10:32 AM, Niklas Hambüchen wrote:
Is it expected that raid1 turns into raid0 in this way when an expected device is not present during a reboot (e.g. because it was unplugged or replaced)? If so, what is the idea behind that, and why doesn't the array go into the normal degraded mode instead? Is it possible to achieve that?

I had hoped that I would be able to continue booting into a degraded system if a disk fails during a reboot (and then be notified of the degradation by mdadm as usual), but this isn't the case if an array comes back as raid0 and inactive after a reboot.

Finally, if these topics are already explained somewhere, where can I read more about them?
Maybe we need to call do_md_run when assembling an array; I need to investigate it.
That doesn't work; however, the array can be activated with "echo active > /sys/block/md0/md/array_state".
Thank you, this echo worked!
I just confirmed it on another machine.

It immediately brings the array back from the wrong "Raid Level : raid0" into the correct "raid1".

I also noticed that `mdadm --run /dev/md0` has the same effect.

But `mdadm --run --readonly /dev/md0` didn't; it says "/dev/md0 does not appear to be active".

So the remaining question is:

Why does the device appear as raid0 at all?

Quoting from my previous reply:

"To my knowledge, the raid level showing as raid0 may be caused by the code below in set_array_info.

        memset(&inf, 0, sizeof(inf));
        inf.major_version = info->array.major_version;
        inf.minor_version = info->array.minor_version;
        rv = md_set_array_info(mdfd, &inf);

And mdadm only calls two ioctls (SET_ARRAY_INFO and ADD_NEW_DISK) to the array during
the reboot stage. "

I would expect it to come back from reboot as a degraded raid1, because that's what it is (and mdadm seems to think so too as soon as you activate it).

I am not sure whether the patch below works as expected, but please try it.

diff --git a/util.c b/util.c
index c26cf5f3f78b..e17d647892f4 100644
--- a/util.c
+++ b/util.c
@@ -1919,6 +1919,7 @@ int set_array_info(int mdfd, struct supertype *st, struct mdinfo *info)
         * and older kernels
         */
        mdu_array_info_t inf;
+       mdu_array_info_t old_inf;
        int rv;

        if (st->ss->external)
@@ -1927,6 +1928,13 @@ int set_array_info(int mdfd, struct supertype *st, struct mdinfo *info)
        memset(&inf, 0, sizeof(inf));
        inf.major_version = info->array.major_version;
        inf.minor_version = info->array.minor_version;
+
+       rv = ioctl(mdfd, GET_ARRAY_INFO, &old_inf);
+       if (rv)
+               return rv;
+       /* use the correct level not 0 based on GET_ARRAY_INFO ioctl */
+       inf.level = old_inf.level;
+
        rv = md_set_array_info(mdfd, &inf);

Hmm, perhaps we need to read the level from the superblock, since this happens during the reboot stage.

Thanks,
Guoqing


