Re: Last working drive in RAID1

On Thu, 05 Mar 2015 15:00:18 -0500,
Phil Turmel <philip@xxxxxxxxxx> wrote:

> It has to stay there to give errors to the upper layers that are still
> hooked to it, until they are administratively "unhooked", i.e.
> unmounted or disassociated with mdadm --remove.
> 
> Or, quite possibly, the device is plugged back in, at which point the
> device name is there for it (as long as you use the same port, of
> course), in which case the filesystem may very well resume
> successfully.

From reading this it makes sense that the md device stays there, just
like the physical device nodes do (to give errors, and to allow recovery).
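Concretely, the administrative "unhooking" Phil describes would be
something like this (just a sketch; /dev/md0, /dev/sdb1 and /mnt/array
are placeholder names):

    # detach the upper layers and the failed member
    umount /mnt/array
    mdadm /dev/md0 --remove /dev/sdb1

    # or, if the disk was plugged back into the same port,
    # try putting it back into the array instead
    mdadm /dev/md0 --re-add /dev/sdb1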

However, as I understood this thread, md does not seem to properly
inform upper layers or the user (not even through its own --monitor?).
To me, marking the last disk within an array as failed (*within* the
array) just seems to make more sense, so that /proc/mdstat actually
informs about the md error state (and the md device returns errors on
access).
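
For comparison, this is roughly what I would expect to be able to rely
on (the mail address below is only a placeholder):

    # inspect the array state by hand
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # let the monitor report events (Fail, DegradedArray, ...) by mail
    mdadm --monitor --scan --daemonise --mail root@localhost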

Regards,
Chris