Re: safe segmenting of conflicting changes

On 4/26/2010 2:05 PM, Doug Ledford wrote:
> So, the point of raid is to be as reliable as possible, if the disk that
> was once gone is now back, we want to use it if possible.

No, we don't.  I explicitly removed that disk from the array because I
have no wish for it to be there any more.  Maybe I plan on using it in
another array, or maybe I plan on shredding its contents.  Whatever I'm
planning for that disk, it does not involve it being used in a raid
array any more.
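
For reference, the kind of removal I'm talking about looks roughly like
this (device names are only examples, not taken from the machine in
question):

  mdadm /dev/md0 --fail /dev/sdc1        # mark the member faulty
  mdadm /dev/md0 --remove /dev/sdc1      # take it out of the array
  mdadm --zero-superblock /dev/sdc1      # wipe the md metadata so nothing
                                         #  tries to assemble it back in

After --zero-superblock there is no metadata left for md to find, which
is exactly the point: that disk is finished with this array.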

> The problem is the cause of this thread, and it's a bug that should be
> fixed, it should not cause us to require things to have an explicit
> --add --force to use a previously failed drive.  This is a case of

Then when the drive fails it should only be marked as failed, not also
removed.  If I manually remove it, then it should stay removed until I
decide to do something else with it.

> The md raid stack makes no distinction between explicit removal or a
> device that disappeared because of a glitch in a USB cable or some such.
>  In both cases the drive is failed and removed.  So the fact that you

Then that's the problem.  If it fails, it should be marked as failed.
If it is removed, it should be marked as removed.  They are two
different actions that should have different results.  Why on earth the
two flags always seem to be used together is beyond me.
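
At the command line they already are two separate steps, which is what
makes the conflation inside md so odd.  A rough illustration (again,
device names are only examples):

  mdadm /dev/md0 --fail /dev/sdc1        # only marks the member faulty;
                                         #  /proc/mdstat shows it as (F)
  mdadm /dev/md0 --remove /dev/sdc1      # a second, deliberate action
                                         #  that empties the slot

So md clearly knows "faulty" as a state distinct from "gone"; what it
doesn't remember, once the slot is empty, is whether the device left
because it errored out or because I told it to leave.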
