> On my test cluster, some PGs are stuck unclean forever (pool 24, size=2).
>
> The directory /var/lib/ceph/osd/ceph-X/current/24.126_head/ is empty on all OSDs.
>
> Any idea what is wrong? And how can I recover from that state?

The interesting thing is that all OSDs are up, and those PGs do not list any unfound objects, so I can't use 'ceph pg X.Y mark_unfound_lost revert'. How can I tell the cluster to continue? Currently all ops are blocked.
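For reference, this is roughly how I have been inspecting the stuck PGs so far (standard ceph CLI; 24.126 is just one of the affected PGs from the pool above):

  # list all PGs currently stuck in the unclean state
  ceph pg dump_stuck unclean

  # cluster-wide health summary, including blocked requests
  ceph health detail

  # detailed peering/recovery state for one affected PG
  ceph pg 24.126 query

The query output shows the PG peered with all OSDs up, but it never transitions to active+clean.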