Re: mdadm --fail doesn't mark device as failed?

On Wed, 2012-11-21 at 18:47 +0100, Sebastian Riemer wrote:
> > OK.  So if I understand correctly, mdadm --fail has no effect that
> > persists past a reboot, and doesn't write anything to disk that would
> > prevent the use of the failed RAID component.(*)  But if I write to
> > sysfs, the failure will persist across reboots.
> >
> > This behavior is quite surprising to me.  Is there some reason for
> > this design?
> 
> Yes, sometimes hardware has only a short issue and operates as
> expected afterwards. Therefore, there is an error threshold. It could
> be very annoying to zero the superblock and to resync everything only
> because there was a short controller issue or something similar.
> Without this you also couldn't remove and re-add devices for testing.
BTW, the part that was surprising was not that the device could be
re-added, but that it was re-added automatically on reboot.
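
For anyone following along, the two approaches being compared look
roughly like this (the array /dev/md0 and member /dev/sdb1 are just
placeholder names):

  # Mark a member as failed through the mdadm front end:
  mdadm /dev/md0 --fail /dev/sdb1

  # The sysfs route mentioned above, writing to the per-device state
  # file under the array's md directory:
  echo faulty > /sys/block/md0/md/dev-sdb1/state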

At the moment I have quite a few partitions running around with the same
md UUID but slightly different information on them.
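
If it helps anyone cleaning up after the same situation, this is
roughly how I'd check which superblock is current and wipe the stale
copies (partition names are placeholders; double-check --examine
output before zeroing anything):

  # Compare the UUID and event count recorded on each candidate member:
  mdadm --examine /dev/sdb1
  mdadm --examine /dev/sdc1

  # Once the stale copies are identified, clear their superblocks so
  # they can't be auto-assembled into the array again:
  mdadm --zero-superblock /dev/sdc1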
Ross


