Hi Everyone,

I've got a strange one. After doing a reweight of some OSDs the other night, our cluster is showing one PG stuck unclean:

    2017-01-25 09:48:41 : 1 pgs stuck unclean | recovery 140/71532872 objects degraded (0.000%) | recovery 2553/71532872 objects misplaced (0.004%)

When I query the PG, it shows that one of the OSDs is not up:

    "state": "active+remapped",
    "snap_trimq": "[]",
    "epoch": 231928,
    "up": [
        155
    ],
    "acting": [
        155,
        105
    ],
    "actingbackfill": [
        "105",
        "155"
    ],

I've tried restarting the OSDs, ceph pg repair, ceph pg 4.559 list_missing, and ceph pg 4.559 mark_unfound_lost revert. Nothing works.

I've also tried setting osd.105 out, waiting for backfill to evacuate the OSD, and then stopping the OSD process to see whether that would recreate the second copy of the data, but no luck.

It would seem that the primary copy of the data on osd.155 is fine, but the second copy on osd.105 isn't there. Any ideas how I can force a rebuild of the second copy? Or any other ideas on how to resolve this?

We're running Hammer: ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90).

Regards,
Richard
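P.S. For reference, this is roughly the command sequence I ran (pg 4.559 and osd.105 as above; the exact service command for stopping the OSD will depend on your init system):

    # Confirm which PG is stuck and how it is currently mapped
    ceph health detail
    ceph pg 4.559 query

    # Repair / unfound-object attempts (none of these changed the state)
    ceph pg repair 4.559
    ceph pg 4.559 list_missing
    ceph pg 4.559 mark_unfound_lost revert

    # Evacuate osd.105 and stop it, hoping the PG would re-peer and
    # backfill a fresh second copy
    ceph osd out 105
    # ...wait for backfill to complete, then on the OSD's host
    # (Upstart shown; sysvinit would be /etc/init.d/ceph stop osd.105):
    sudo stop ceph-osd id=105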