SMART monitoring

What would be the best approach to integrate SMART monitoring with Ceph for the predictive-failure case?

Assuming you agree with SMART's diagnosis of an impending failure, would it be better to automatically start migrating data off the OSD (reduce its weight to 0?), or to just prompt the user to replace the disk (which requires no monitoring on Ceph's part)? The former would ensure that redundancy is maintained at all times without any user interaction.
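The first option above could be sketched roughly as follows. This is a hypothetical illustration, not an existing Ceph feature: the helper names are made up, and the only assumptions are the standard `smartctl -H` health line and the standard `ceph osd crush reweight` CLI.

```python
def smart_predicts_failure(smartctl_output: str) -> bool:
    """Return True if `smartctl -H` output reports an impending failure.

    smartctl prints a line like:
      SMART overall-health self-assessment test result: PASSED
    (or FAILED! when the drive predicts its own failure).
    """
    for line in smartctl_output.splitlines():
        if "overall-health self-assessment test result" in line:
            return "PASSED" not in line
    return False


def drain_osd_command(osd_id: int) -> list[str]:
    """Build the CLI command that sets the OSD's CRUSH weight to 0,
    so Ceph migrates its data elsewhere while the disk is still
    (mostly) readable."""
    return ["ceph", "osd", "crush", "reweight", f"osd.{osd_id}", "0"]
```

A monitoring daemon would periodically run `smartctl -H` on each OSD's device, feed the output to `smart_predicts_failure`, and on a positive result execute the drain command via `subprocess` and alert the operator.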

And what about the bad-sector case? Assuming you are using something like btrfs with redundant copies of metadata, and assuming that is enough to keep the metadata consistent, what should be done when a small number of filesystem errors occur? Can Ceph handle getting an I/O error on one of the files inside the OSD and just read from the replica, or should the entire OSD be failed so that Ceph rebalances the data itself?
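The two alternatives in that question could be expressed as a simple policy: tolerate a handful of isolated I/O errors by serving the affected objects from a replica, but fail the whole OSD once errors pass a threshold. This is a hypothetical sketch of the decision only; the threshold, names, and actions are illustrative assumptions, not Ceph's actual behavior.

```python
from enum import Enum


class Action(Enum):
    # Serve the object from another replica and repair the local copy.
    READ_FROM_REPLICA = "read from replica and repair"
    # Mark the OSD out and let Ceph rebalance all of its data.
    FAIL_OSD = "fail the OSD"


def on_io_error(error_count: int, max_isolated_errors: int = 5) -> Action:
    """Decide what to do after the OSD's error counter reaches error_count.

    max_isolated_errors is an arbitrary illustrative threshold: a few
    bad sectors are treated as isolated, anything more suggests the
    disk is dying and the OSD should be failed outright.
    """
    if error_count <= max_isolated_errors:
        return Action.READ_FROM_REPLICA
    return Action.FAIL_OSD
```

The appeal of the threshold form is that it degrades gracefully: a single bad sector costs one remote read and a repair, while a cascade of errors converges on the same outcome as simply failing the OSD.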

Thanks

James
--