On Mon, 15 Jul 2024 07:59:36 +0300
Justinas Naruševičius <contact@xxxxxxxxxx> wrote:

> Hello,
>
> After a reboot, a raid1 array with one failed drive is reported as
> degraded (the failed drive is reported as removed):
>
> > root@rico ~ # mdadm --detail /dev/md127
> > /dev/md127:
> >            Version : 1.2
> >      Creation Time : Thu Feb 21 13:28:21 2019
> >         Raid Level : raid1
> >         Array Size : 57638912 (54.97 GiB 59.02 GB)
> >      Used Dev Size : 57638912 (54.97 GiB 59.02 GB)
> >       Raid Devices : 2
> >      Total Devices : 1
> >        Persistence : Superblock is persistent
> >
> >        Update Time : Mon Jul 15 07:25:12 2024
> >              State : clean, degraded
> >     Active Devices : 1
> >    Working Devices : 1
> >     Failed Devices : 0
> >      Spare Devices : 0
> >
> > Consistency Policy : resync
> >
> >               Name : sabretooth:root-raid1
> >               UUID : 1f1f3113:0b87a325:b9ad1414:0fe55600
> >             Events : 323644
> >
> >     Number   Major   Minor   RaidDevice State
> >        -       0        0        0      removed
> >        2       8        2        1      active sync   /dev/sda2
>
> However, testing that state with mdadm --misc --test returns 0:
>
> > root@rico ~ # mdadm --misc --test /dev/md127
> > root@rico ~ # echo $?
> > 0
> > root@rico ~ #
>
> From the man page:
>
> > if the --test option is given, then the exit status will be:
> >   0  The array is functioning normally.
> >   1  The array has at least one failed device.
> >   2  The array has multiple failed devices such that it is unusable.
> >   4  There was an error while trying to get information about the
> >      device.
>
> From the --help output:
>
> > root@rico ~ # mdadm --misc --help | grep test
> >   --test  -t  : exit status 0 if ok, 1 if degrade, 2 if dead, 4 if
> >                 missing
>
> I would expect the exit code to be 1.
>
> Can anyone confirm this is expected behaviour?
>
> > root@rico ~ # mdadm -V
> > mdadm - v4.3 - 2024-02-15
> > root@rico ~ #
>
> --
> Regards,
> Justinas Naruševičius

Hello,

This is old functionality, but from what I can see it only makes sense
when combined with a Manage command such as mdadm --remove. The --test
option is not meant to be used on its own, which is why it does not work
for you.

Thanks,
Mariusz
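
For scripting, one possible workaround is sketched below. It is only a
minimal example, assuming the goal is simply to detect the degraded state
shown in the report above: it checks the mdadm --detail output directly
instead of relying on the --test exit code. The device path /dev/md127 is
reused from the report and would need to be adjusted for other arrays.

    #!/bin/sh
    # Minimal sketch: treat the array as degraded when the "State :" line
    # of `mdadm --detail` mentions "degraded" (as in the report above).
    if mdadm --detail /dev/md127 | grep -q 'degraded'; then
        echo "/dev/md127 is degraded" >&2
        exit 1
    fi
    exit 0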