I was wondering if someone could help me recover a PG. A few days ago I had a bunch of disks die in a small home-lab cluster. I removed the disks from their hosts and rm'ed the OSDs. Now I have a PG stuck down that will not peer, and its acting OSDs (including the primary) are among the OSDs I rm'ed earlier. I believe one would normally mark the OSD lost to try to get the PG to recover, but since I already rm'ed the OSD I can no longer mark it lost.
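For reference, the cleanup I ran on each dead OSD was roughly the following (osd.24 shown as an example; this is from memory, so the exact order may differ slightly):

    # take the OSD out and remove it from the cluster (repeated per dead OSD)
    ceph osd out 24
    systemctl stop ceph-osd@24       # on the OSD's host
    ceph osd crush remove osd.24
    ceph auth del osd.24
    ceph osd rm 24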
ceph health detail gives:

pg 33.34e is stuck inactive since forever, current state stale+down+remapped+peering, last acting [24,10]
pg 33.34e is stuck unclean since forever, current state stale+down+remapped+peering, last acting [24,10]
pg 33.34e is stale+down+remapped+peering, acting [24,10]
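If osd.24 still existed, my understanding is the usual next step would be to mark it lost so peering can proceed, something along these lines, but with the OSD already removed from the map I don't see how to apply it:

    # what I understand the normal recovery step to be (not possible here, osd.24 is gone)
    ceph osd lost 24 --yes-i-really-mean-it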