Ceph v15.2.15 (Octopus, stable) - OSD_SCRUB_ERRORS: 6 scrub errors

Hello,

My 3-node Ceph cluster is reporting HEALTH_ERR with the following errors:

 * OSD_SCRUB_ERRORS: 6 scrub errors
 * PG_DAMAGED: Possible data damage: 6 pgs inconsistent
 * CEPHADM_FAILED_DAEMON: 3 failed cephadm daemon(s)
 * MON_CLOCK_SKEW: clock skew detected on mon.ceph3
 * MON_DOWN: 1/3 mons down, quorum ceph2,ceph3
 * OSD_NEARFULL: 4 nearfull osd(s)
 * PG_NOT_DEEP_SCRUBBED: 2 pgs not deep-scrubbed in time
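
In case it helps, this is what I was planning to run to inspect the inconsistencies, based on the Ceph docs on repairing PG inconsistencies; please tell me if this is the wrong approach. The pool name and PG ID below are just placeholders for my actual ones:

    # Show which PGs are flagged inconsistent
    ceph health detail

    # List the inconsistent PGs in a given pool ("mypool" is a placeholder)
    rados list-inconsistent-pg mypool

    # Inspect the inconsistent objects in one PG ("2.1f" is a placeholder)
    rados list-inconsistent-obj 2.1f --format=json-pretty

    # If the damage looks repairable, ask the primary OSD to repair the PG
    ceph pg repair 2.1f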

All 18 OSDs are up though, and I don't see any hard-drive-related errors in the dmesg logs on any of the servers.
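
For the drive checks, this is roughly what I looked at on each node, plus a SMART health check (the device path is a placeholder):

    # Scan the kernel log for disk/controller errors
    dmesg -T | grep -iE 'error|fail|ata'

    # SMART health summary for one drive (smartmontools; /dev/sda is a placeholder)
    smartctl -H /dev/sda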

The cluster's status page is currently showing: Scrubbing: Active

Is the problem recoverable?


Thanks,

Mike

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


