How do you handle failing/slow disks?

Hi all,

this is not the first time we have had this kind of problem, usually with HP RAID controllers:

1. One disk starts failing and drags the whole controller into a degraded state, where its performance drops dramatically
2. Some OSDs are reported as down by their peer OSDs and get marked down
3. At the same time, other OSDs on the same node are not detected as failed and keep participating in the cluster. I think this is because the OSD is not aware of the backend disk problems and still answers heartbeats/health checks
4. Because of this, requests to PGs located on the problematic node become "slow" and later "stuck"
5. The cluster struggles and client operations are not served, so the cluster ends up in a kind of "locked" state
6. We have to mark the affected OSDs down manually (or stop the problematic daemons) before the cluster starts to recover and process requests again (see the sketch after this list)
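For reference, the manual workaround in step 6 boils down to something like the following (just a minimal sketch, assuming a systemd-based deployment and that the suspect OSD ids, hypothetical here, have already been identified on the bad node):

import subprocess

SUSPECT_OSDS = [12, 13]  # hypothetical ids of OSDs on the failing node

for osd_id in SUSPECT_OSDS:
    # Tell the monitors the OSD is down so peering can route around it.
    subprocess.run(["ceph", "osd", "down", str(osd_id)], check=True)
    # Stop the daemon so it cannot immediately report itself up again.
    subprocess.run(["systemctl", "stop", "ceph-osd@%d" % osd_id], check=True)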

Is there any mechanism in Ceph that monitors OSDs with slow requests and marks them down after some threshold?
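In other words, something along the lines of this watchdog is what I am after (only a sketch of the idea: the latency threshold, the poll interval, and the exact JSON layout of "ceph osd perf" are my assumptions, and the JSON key names differ slightly between Ceph releases):

import json
import subprocess
import time

LATENCY_THRESHOLD_MS = 5000  # hypothetical: treat sustained >5s commit latency as failing
POLL_INTERVAL_S = 30

def osd_perf():
    out = subprocess.check_output(["ceph", "osd", "perf", "--format", "json"])
    data = json.loads(out)
    # Newer releases nest the list under "osdstats"; older ones do not.
    return data.get("osdstats", data).get("osd_perf_infos", [])

while True:
    for info in osd_perf():
        if info["perf_stats"]["commit_latency_ms"] > LATENCY_THRESHOLD_MS:
            # Mark the OSD down so peering can proceed; note it may mark
            # itself back up unless the daemon is also stopped.
            subprocess.run(["ceph", "osd", "down", str(info["id"])], check=True)
    time.sleep(POLL_INTERVAL_S)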

Thanks,
Arvydas
