Possibly wrong exit status for mdadm --misc --test

Hello,

After a reboot, a RAID1 array with one failed drive is reported as degraded (the failed drive shows up as removed):

> root@rico ~ # mdadm --detail /dev/md127
> /dev/md127:
>            Version : 1.2
>      Creation Time : Thu Feb 21 13:28:21 2019
>         Raid Level : raid1
>         Array Size : 57638912 (54.97 GiB 59.02 GB)
>      Used Dev Size : 57638912 (54.97 GiB 59.02 GB)
>       Raid Devices : 2
>      Total Devices : 1
>        Persistence : Superblock is persistent
>
>        Update Time : Mon Jul 15 07:25:12 2024
>              State : clean, degraded
>     Active Devices : 1
>    Working Devices : 1
>     Failed Devices : 0
>      Spare Devices : 0
>
> Consistency Policy : resync
>
>               Name : sabretooth:root-raid1
>               UUID : 1f1f3113:0b87a325:b9ad1414:0fe55600
>             Events : 323644
>
>     Number   Major   Minor   RaidDevice State
>        -       0        0        0      removed
>        2       8        2        1      active sync   /dev/sda2
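
For what it's worth, the md driver's sysfs interface agrees that the array is degraded. A minimal check, assuming the 'degraded' attribute the kernel exposes under /sys/block/<dev>/md (a count of missing devices):

    # Sketch only: relies on the kernel's sysfs 'degraded' attribute,
    # not on mdadm's exit status.
    deg=$(cat /sys/block/md127/md/degraded)
    [ "$deg" -gt 0 ] && echo "md127 is degraded ($deg device(s) missing)"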


However, testing this state with mdadm --misc --test returns exit status 0. As I read the mdadm(8) man page, --test sets the exit status to reflect the array's health, with 1 meaning the array has at least one failed device, so I would have expected 1 here:


> root@rico ~ # mdadm --misc --test /dev/md127
> root@rico ~ # echo $?
> 0
> root@rico ~ #
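
My guess is that the exit status is derived from the failed-device count, which is 0 here because the missing drive shows up as removed rather than faulty. Until that is clarified, a script can fall back on the State line that --detail already prints; a small workaround sketch:

    # Workaround sketch: treat the array as unhealthy whenever --detail
    # reports a degraded state, regardless of the --test exit status.
    if mdadm --detail /dev/md127 | grep -q 'State :.*degraded'; then
        echo "md127 is degraded"
    fi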

