Hi,
How does Ceph detect and manage disk failures? What happens if data is written to a bad sector?
Is there any chance that the bad sector gets "distributed" across the cluster through replication?
Is Ceph able to remove the OSD bound to the failed disk automatically?
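For the last question, here is my current understanding, written as a toy model: the monitors distinguish "down" (the OSD daemon stops answering heartbeats) from "out" (the OSD is excluded from data placement), and a down OSD is automatically marked out after a grace period (the `mon_osd_down_out_interval` option, 600 s by default), which then triggers re-replication of its placement groups. This is just my own sketch, not Ceph source code; does it match what actually happens?

```python
# Toy model (NOT Ceph code) of the down -> out transition as I
# understand it from the docs: a monitor-like loop marks an OSD
# "out" once it has been "down" longer than a grace period
# (mon_osd_down_out_interval, default 600 s). Once "out", CRUSH no
# longer maps data to it and its PGs get re-replicated elsewhere.

DOWN_OUT_INTERVAL = 600  # seconds; Ceph's default mon_osd_down_out_interval


class OSD:
    def __init__(self, osd_id):
        self.osd_id = osd_id
        self.up = True           # daemon answering heartbeats
        self.in_cluster = True   # still assigned data by CRUSH
        self.down_since = None   # timestamp when it went down


def mark_down(osd, now):
    """Record that the OSD daemon stopped responding at time `now`."""
    osd.up = False
    osd.down_since = now


def tick(osds, now):
    """Mark long-down OSDs out; return the ids newly marked out."""
    newly_out = []
    for osd in osds:
        if (not osd.up and osd.in_cluster
                and now - osd.down_since >= DOWN_OUT_INTERVAL):
            osd.in_cluster = False  # data now re-replicated elsewhere
            newly_out.append(osd.osd_id)
    return newly_out
```

So in this model the failed OSD is never "removed" from the map automatically, only marked out; actually deleting it (`ceph osd rm` etc.) would still be a manual step. Is that right?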
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com