Re: Last working drive in RAID1


 



On Fri, 6 Mar 2015 08:52:22 +1100,
NeilBrown <neilb@xxxxxxx> wrote:

> > Or, quite possibly, the device is plugged back in, at which point
> > the device name is there for it (as long as you use the same port,
> > of course).  In which case the filesystem may very well resume
> > successfully.
> 
> I was with you right up to this last point.
> When a device is unplugged and then plugged back in, it will always
> get a new name.

Right, that matches what I see when "rotating" disks or connecting
them to a docking station. I have also seen the last disk of an
external storage RAID not going away (not being failed) when unplugged.

The times I had devices really break, though, I don't think they
triggered an unplug event. They still appeared fully connected but had
motor-start problems, bus errors, or the like. On some occasions the
failure was only intermittent, and the device node continued to work
(within the controller timeout, or with no permanent error remaining
after a reset).
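For what it's worth, that "still connected but broken" condition is
often visible in the kernel's own view of the device: SCSI/SATA disks
expose a state attribute under sysfs. A minimal sketch of checking it
(the classification into healthy/suspect is my own illustration, not
anything md or udev does):

```shell
#!/bin/sh
# Classify the SCSI device state string as read from
# /sys/block/<dev>/device/state. "running" is the normal value;
# "offline" and "blocked" indicate trouble without an unplug event.
# The healthy/suspect labels are an illustrative policy, not kernel output.
classify_state() {
    case "$1" in
        running)                 echo "healthy" ;;
        offline|blocked|quiesce) echo "suspect" ;;
        *)                       echo "unknown" ;;
    esac
}

# Walk every sd* device that exposes a state file and report it.
for f in /sys/block/sd*/device/state; do
    [ -e "$f" ] || continue
    printf '%s: %s\n' "$f" "$(classify_state "$(cat "$f")")"
done
```

A drive that has gone "offline" this way keeps its device node, which
is exactly the case where no remove/unplug event ever fires.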


So if the last working drive could be marked as failed when it
actually fails, that would also provide the proper information for
system failover on replicated hosts, in the many cases where a
controller, bus, or drive fails without an unplug event. Those are
cases the udev-rule-only idea does not seem to cover.
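To make the failover point concrete: md already exposes the array's
condition under /sys/block/<md>/md/ via the real attributes
"degraded" (count of missing devices) and "array_state". A failover
agent could poll those instead of waiting for udev events. The policy
below (and the function name) is purely a sketch of my own, assuming
the last-drive failure were actually reflected in array_state:

```shell
#!/bin/sh
# Hypothetical failover decision based on md sysfs attributes:
#   $1 = contents of /sys/block/mdX/md/degraded (a count)
#   $2 = contents of /sys/block/mdX/md/array_state
# A RAID1 with one live mirror still serves data, so "degraded" alone
# must not trigger failover; only a non-serving array_state should.
should_failover() {
    degraded=$1
    state=$2
    case "$state" in
        clean|active|active-idle|write-pending)
            if [ "$degraded" -gt 0 ]; then
                echo "degraded-but-serving"
            else
                echo "healthy"
            fi ;;
        *)  # inactive, readonly, suspended, clear, ...
            echo "failover" ;;
    esac
}
```

Usage on a live host would be something like
`should_failover "$(cat /sys/block/md0/md/degraded)" "$(cat /sys/block/md0/md/array_state)"`,
run periodically from the replication manager.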

Regards,
Chris
--



