Cluster health error status

Hello team,

I am running a Ceph cluster with 3 monitors and 4 OSD nodes running 3 OSDs each. I deployed the cluster with Ansible on Ubuntu 20.04, and the Ceph version is Octopus. Yesterday the server that hosts the OSD nodes restarted because of a power issue, and since it came back up one of the monitors is out of quorum and some PGs are marked as damaged. Note that the monitors (3 of them) run on the same nodes as the OSDs. Please help me solve this issue. The full health detail output is pasted after my signature, and I have sketched below the first checks I had in mind.
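
For the monitor that is out of quorum, I believe the first checks are roughly the following (the systemd unit name ceph-mon@ceph-mon4 is my guess from how the cluster was deployed; please correct me if this is the wrong approach). I have not changed anything yet:

ceph mon stat                             # confirm which monitor is missing from quorum
ceph quorum_status --format json-pretty   # quorum details as seen by the surviving mons
# on the host that runs mon.ceph-mon4 (assumed unit name):
systemctl status ceph-mon@ceph-mon4       # is the mon daemon running at all?
journalctl -u ceph-mon@ceph-mon4 -n 100   # recent log lines since the power loss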

Best regards.

Michel


root@ceph-mon1:~# ceph health detail
HEALTH_ERR 1/3 mons down, quorum ceph-mon1,ceph-mon3; 14/47195 objects unfound (0.030%); Possible data damage: 13 pgs recovery_unfound; Degraded data redundancy: 42/141585 objects degraded (0.030%), 13 pgs degraded; 2 slow ops, oldest one blocked for 322 sec, daemons [osd.0,osd.7] have slow ops.
[WRN] MON_DOWN: 1/3 mons down, quorum ceph-mon1,ceph-mon3
    mon.ceph-mon4 (rank 2) addr [v2:10.10.29.154:3300/0,v1:10.10.29.154:6789/0] is down (out of quorum)
[WRN] OBJECT_UNFOUND: 14/47195 objects unfound (0.030%)
    pg 5.77 has 1 unfound objects
    pg 5.6d has 2 unfound objects
    pg 5.6a has 1 unfound objects
    pg 5.65 has 1 unfound objects
    pg 5.4a has 1 unfound objects
    pg 5.30 has 1 unfound objects
    pg 5.28 has 1 unfound objects
    pg 5.25 has 1 unfound objects
    pg 5.19 has 1 unfound objects
    pg 5.1a has 1 unfound objects
    pg 5.1 has 1 unfound objects
    pg 5.b has 1 unfound objects
    pg 5.8 has 1 unfound objects
[ERR] PG_DAMAGED: Possible data damage: 13 pgs recovery_unfound
    pg 5.1 is active+recovery_unfound+degraded+remapped, acting [5,8,7], 1 unfound
    pg 5.8 is active+recovery_unfound+degraded+remapped, acting [6,11,8], 1 unfound
    pg 5.b is active+recovery_unfound+degraded+remapped, acting [7,0,5], 1 unfound
    pg 5.19 is active+recovery_unfound+degraded+remapped, acting [0,5,7], 1 unfound
    pg 5.1a is active+recovery_unfound+degraded, acting [10,11,8], 1 unfound
    pg 5.25 is active+recovery_unfound+degraded+remapped, acting [0,10,11], 1 unfound
    pg 5.28 is active+recovery_unfound+degraded+remapped, acting [6,11,8], 1 unfound
    pg 5.30 is active+recovery_unfound+degraded+remapped, acting [7,5,0], 1 unfound
    pg 5.4a is active+recovery_unfound+degraded, acting [0,11,7], 1 unfound
    pg 5.65 is active+recovery_unfound+degraded+remapped, acting [0,10,11], 1 unfound
    pg 5.6a is active+recovery_unfound+degraded, acting [0,11,7], 1 unfound
    pg 5.6d is active+recovery_unfound+degraded+remapped, acting [7,2,0], 2 unfound
    pg 5.77 is active+recovery_unfound+degraded+remapped, acting [5,6,8], 1 unfound
[WRN] PG_DEGRADED: Degraded data redundancy: 42/141585 objects degraded (0.030%), 13 pgs degraded
    pg 5.1 is active+recovery_unfound+degraded+remapped, acting [5,8,7], 1 unfound
    pg 5.8 is active+recovery_unfound+degraded+remapped, acting [6,11,8], 1 unfound
    pg 5.b is active+recovery_unfound+degraded+remapped, acting [7,0,5], 1 unfound
    pg 5.19 is active+recovery_unfound+degraded+remapped, acting [0,5,7], 1 unfound
    pg 5.1a is active+recovery_unfound+degraded, acting [10,11,8], 1 unfound
    pg 5.25 is active+recovery_unfound+degraded+remapped, acting [0,10,11], 1 unfound
    pg 5.28 is active+recovery_unfound+degraded+remapped, acting [6,11,8], 1 unfound
    pg 5.30 is active+recovery_unfound+degraded+remapped, acting [7,5,0], 1 unfound
    pg 5.4a is active+recovery_unfound+degraded, acting [0,11,7], 1 unfound
    pg 5.65 is active+recovery_unfound+degraded+remapped, acting [0,10,11], 1 unfound
    pg 5.6a is active+recovery_unfound+degraded, acting [0,11,7], 1 unfound
    pg 5.6d is active+recovery_unfound+degraded+remapped, acting [7,2,0], 2 unfound
    pg 5.77 is active+recovery_unfound+degraded+remapped, acting [5,6,8], 1 unfound
[WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 322 sec, daemons [osd.0,osd.7] have slow ops.
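
For the unfound objects, I understand from the documentation that a damaged PG can be inspected like this (using pg 5.77 from the list above as an example). I have not run any mark_unfound_lost command yet, because I do not want to discard data without advice:

ceph pg 5.77 list_unfound   # which objects are missing and which OSDs might still hold them
ceph pg 5.77 query          # full peering/recovery state for this PG
ceph osd tree               # check whether any OSD is still down/out after the power loss
ceph -s                     # overall recovery progress

# Last resort only, if the objects really cannot be recovered from any OSD:
# ceph pg 5.77 mark_unfound_lost revert
# ceph pg 5.77 mark_unfound_lost delete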