Last working drive in RAID1

Hi,

It is interesting to note that RAID1 never marks the last working drive as Faulty, no matter what. The responsible code appears to be the error() handler in drivers/md/raid1.c, which refuses to fail a device when it is the only one still In_sync:

static void error(struct mddev *mddev, struct md_rdev *rdev)
{
        ...
        /*
         * If it is not operational, then we have already marked it as dead
         * else if it is the last working disks, ignore the error, let the
         * next level up know.
         * else mark the drive as failed
         */
        if (test_bit(In_sync, &rdev->flags)
            && (conf->raid_disks - mddev->degraded) == 1) {
                /*
                 * Don't fail the drive, act as though we were just a
                 * normal single drive.
                 * However don't try a recovery from this drive as
                 * it is very likely to fail.
                 */
                conf->recovery_disabled = mddev->recovery_disabled;
                return;
        }
        ...
}

The end result is that even if all the drives are physically gone, one drive still remains in the array forever, and mdadm continues to report the array as degraded instead of failed. RAID10 has similar behavior; a rough sketch of its version of the check follows.
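
For comparison, this is a sketch from memory of the analogous guard in drivers/md/raid10.c (the exact function name, locking and comments vary between kernel versions). The enough() helper answers whether the array could still serve all of its data without this device:

static void error(struct mddev *mddev, struct md_rdev *rdev)
{
        struct r10conf *conf = mddev->private;
        ...
        if (test_bit(In_sync, &rdev->flags)
            && !enough(conf, rdev->raid_disk)) {
                /*
                 * Don't fail the drive, just return an IO error,
                 * same as the RAID1 case above.
                 */
                return;
        }
        ...
}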

Is there any reason we absolutely don't want to fail the last drive of a RAID1 array?

Thanks
Eric