On 2019/9/2 15:34, NeilBrown wrote:
> On Mon, Sep 02 2019, Yufen Yu wrote:
>> When the number of active disks in a raid1 array is less than one,
>> we need to fail the run.
>
> Seems reasonable, but how can this happen?
> As we never fail the last device in a RAID1, there should always
> appear to be one that is working.
>
> Have you had a situation where this is actually needed?

There is a situation we found in the following patch:
https://marc.info/?l=linux-raid&m=156740736305042&w=2

Though we can fix that situation, I am not sure whether other
situations could also cause the number of active disks to drop
below one.

Thanks,
Yufen

> Thanks,
> NeilBrown
>> Signed-off-by: Yufen Yu <yuyufen@xxxxxxxxxx>
>> ---
>>  drivers/md/raid1.c | 13 ++++++++++++-
>>  1 file changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
>> index 34e26834ad28..2a554464d6a4 100644
>> --- a/drivers/md/raid1.c
>> +++ b/drivers/md/raid1.c
>> @@ -3127,6 +3127,13 @@ static int raid1_run(struct mddev *mddev)
>>  		    !test_bit(In_sync, &conf->mirrors[i].rdev->flags) ||
>>  		    test_bit(Faulty, &conf->mirrors[i].rdev->flags))
>>  			mddev->degraded++;
>> +	/*
>> +	 * RAID1 needs at least one active disk
>> +	 */
>> +	if (conf->raid_disks - mddev->degraded < 1) {
>> +		ret = -EINVAL;
>> +		goto abort;
>> +	}
>>
>>  	if (conf->raid_disks - mddev->degraded == 1)
>>  		mddev->recovery_cp = MaxSector;
>> @@ -3160,8 +3167,12 @@ static int raid1_run(struct mddev *mddev)
>>
>>  	ret = md_integrity_register(mddev);
>>  	if (ret) {
>>  		md_unregister_thread(&mddev->thread);
>> -		raid1_free(mddev, conf);
>> +		goto abort;
>>  	}
>> +	return 0;
>> +
>> +abort:
>> +	raid1_free(mddev, conf);
>>  	return ret;
>>  }
>> --
>> 2.17.2