On 11/18/21 11:05, David Tinker wrote:
Hi guys, I am busy removing an OSD from my rook-ceph cluster. I did 'ceph osd out osd.7' and the rebalancing process started. Now it has stalled with one PG on "active+undersized+degraded". I have done this before and it has worked fine.

# ceph health detail
HEALTH_WARN Degraded data redundancy: 15/94659 objects degraded (0.016%), 1 pg degraded, 1 pg undersized
[WRN] PG_DEGRADED: Degraded data redundancy: 15/94659 objects degraded (0.016%), 1 pg degraded, 1 pg undersized
    pg 3.1f is stuck undersized for 2h, current state active+undersized+degraded, last acting [0,2]
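(A few standard Ceph CLI checks that can help narrow this down; this is only a sketch, run from wherever the ceph client is available, e.g. the rook-ceph toolbox pod, and which of them matters depends on the CRUSH layout:)

    # confirm osd.7 is marked out and see how much room the remaining OSDs have
    ceph osd tree
    ceph osd df

    # show which OSDs PG 3.1f maps to (up set) versus which are currently acting
    ceph pg map 3.1f

    # compare the pool's replica size against the OSDs/hosts still available
    ceph osd pool ls detail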
Can you run:

ceph pg 3.1f query

Gr. Stefan
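(The query output is fairly large; a minimal way to trim it for the list, assuming jq is available in the toolbox pod and using the up, acting and recovery_state fields normally present in Ceph's pg query JSON:)

    # show only the up/acting sets and the recovery state machine,
    # which usually says why the PG cannot find another OSD to backfill to
    ceph pg 3.1f query | jq '{state: .state, up: .up, acting: .acting, recovery_state: .recovery_state}'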