Re: Failure propagation of concatenated raids ?


 



> it
> *might* make sense to look at ceph or some other distributed
> filesystem.

I was trying to avoid that, mainly because it doesn't seem to be as
well supported as a more straightforward RAID+LVM2 setup. But I might
be willing to reconsider my position in light of such data losses.

> no filesystem I know handles that without either going
> readonly, or totally locking up.

Which, to be fair, is exactly what I'm looking for. I'd rather see the
filesystem lock itself up until a human brings the failed RAID back
online. But my recent experience and experiments show that the
filesystems don't actually lock themselves up, and don't go read-only
for quite some time, so heavy data corruption happens in the meantime.
I'd be much happier if the filesystem locked itself up instead of
slowly destroying itself.
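
For what it's worth, ext4 does let you choose what happens once the
filesystem itself notices an error, via the errors= mount option (or
tune2fs -e to set the default in the superblock). That only kicks in
when ext4 detects corrupted metadata, so it doesn't close the window
you describe where the lower layer has failed but the filesystem
hasn't noticed yet. A minimal sketch, assuming a hypothetical LV at
/dev/vg0/data:

    # go read-only as soon as ext4 detects an error
    mount -o errors=remount-ro /dev/vg0/data /mnt/data

    # or set the persistent default in the superblock,
    # or escalate detected errors to a kernel panic
    tune2fs -e remount-ro /dev/vg0/data
    tune2fs -e panic /dev/vg0/data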


