Hi,

Running Luminous 12.2.2, I have noticed strange behavior lately. When, for example, I set "ceph osd out X", "degraded" objects still show up toward the end of the rebalancing, but in the "pgs:" section of ceph -s no degraded PGs are still recovering, only remapped ones, and no degraded PGs can be found in "ceph pg dump":

  health: HEALTH_WARN
          355767/30286841 objects misplaced (1.175%)
          Degraded data redundancy: 28/30286841 objects degraded (0.000%), 96 pgs unclean

  services:
    ...
    osd: 38 osds: 38 up, 37 in; 96 remapped pgs

  data:
    pools:   19 pools, 4176 pgs
    objects: 9859k objects, 39358 GB
    usage:   114 TB used, 120 TB / 234 TB avail
    pgs:     28/30286841 objects degraded (0.000%)
             355767/30286841 objects misplaced (1.175%)
             4080 active+clean
               81 active+remapped+backfilling
               15 active+remapped+backfill_wait

Where do those 28 degraded objects come from? In such cases the degraded objects usually disappear once backfilling is done, but normally degraded objects should be repaired before remapped ones, by priority.

Ugis
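P.S. For reference, this is roughly how I look for the PGs that carry the degraded count (just a sketch against the Luminous CLI; exact state filters may differ between releases):

  # list PGs currently flagged degraded
  ceph pg ls degraded

  # or grep the brief PG dump for the degraded state
  ceph pg dump pgs_brief 2>/dev/null | grep degraded

  # health detail also names degraded PGs when there are any
  ceph health detail | grep -i degraded

None of these turn up a degraded PG even while ceph -s still reports the 28 degraded objects.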