Re: Cluster under stress - flapping OSDs?

Hi,

On 10/12/20 12:05 PM, Kristof Coucke wrote:
> Diving into the different logging and searching for answers, I came across
> the following:
>
> PG_DEGRADED Degraded data redundancy: 2101057/10339536570 objects degraded
> (0.020%), 3 pgs degraded, 3 pgs undersized
>      pg 1.4b is stuck undersized for 63114.227655, current state
> active+undersized+degraded+remapped+backfilling, last acting
> [62,20,33,25,97,2,159,2147483647,88]
>      pg 1.115 is stuck undersized for 67017.759147, current state
> active+undersized+degraded+remapped+backfilling, last acting
> [2147483647,6,28,48,171,160,51,7,84]
>      pg 1.1ec is stuck undersized for 67017.772311, current state
> active+undersized+degraded+remapped+backfilling, last acting
> [65,82,2147483647,161,6,36,105,106,48]
>
> Note the PG# 2147483647... That doesn't seem correct.
> Any ideas?


That value is not a PG number; the bracketed list is the acting set, i.e. the OSDs currently serving that PG. 2147483647 is 0x7fffffff (CRUSH_ITEM_NONE, the largest signed 32-bit integer), the placeholder Ceph prints when no OSD could be mapped to that slot of the acting set.
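
For illustration, a minimal sketch in Python (not Ceph source; the constant name and value come from Ceph's crush.h, the rest is illustrative) of how that placeholder shows up in an acting set:

# Minimal sketch: spot the "no OSD" placeholder in an acting set as
# reported by `ceph health detail`.
CRUSH_ITEM_NONE = 0x7FFFFFFF  # prints as 2147483647

acting = [62, 20, 33, 25, 97, 2, 159, 2147483647, 88]  # pg 1.4b above

for slot, osd in enumerate(acting):
    if osd == CRUSH_ITEM_NONE:
        print(f"slot {slot}: no OSD mapped -> PG is undersized")
    else:
        print(f"slot {slot}: osd.{osd}")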


Do you have enough hosts to satisfy the CRUSH rule? Each of those PGs lists nine shards in its acting set, so with a host failure domain the rule needs at least nine hosts to fill every slot; see the sketch below.
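
A quick way to check (a rough sketch on my side, parsing the output of the real `ceph osd tree --format json` command; run it on an admin node):

# Rough sketch: compare the number of CRUSH hosts against the width
# of the acting set. Assumes the `ceph` CLI is available.
import json
import subprocess

tree = json.loads(subprocess.check_output(
    ["ceph", "osd", "tree", "--format", "json"]))
hosts = [n["name"] for n in tree["nodes"] if n["type"] == "host"]

ACTING_WIDTH = 9  # each PG above shows nine slots in its acting set
print(f"{len(hosts)} CRUSH hosts for {ACTING_WIDTH} slots per PG")
if len(hosts) < ACTING_WIDTH:
    print("Too few hosts: CRUSH leaves some slots at CRUSH_ITEM_NONE.")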


Regards,

Burkhard
