Hi,

> I didn't notice before, but since you only have 1 copy of the data,
> no other osds will tell the new osd.0 that it should have those pgs.
> That's why they're stale. You can force osd.0 to notice the pgs it needs
> by running:
>
>   ceph pg force_create_pg <pgid>
>
> for each pg that's mapped to osd.0 in 'ceph pg dump'.
>
> Once those are all created, and recovery has noticed that those pgs are
> all out of date, the mark_unfound_lost command should work.

Thanks, it worked. It didn't go through the "unfound" step, though. The
pgs that were empty came back healthy right away; the pgs that had
objects in them went unclean/inactive for a few minutes and then came
back as just empty pgs, and the cluster was HEALTHY again.

Radosgw, however, doesn't seem to handle the situation too well: the
bucket that had data in this pg still appears in a 'list', but you
can't delete, list, or stat it.

Cheers,

Sylvain
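
For reference, a minimal shell sketch of the loop described above, i.e.
forcing creation of every pg mapped to osd.0. The way the acting set is
matched here (the "[0]" pattern and the column layout of 'ceph pg dump')
is an assumption and varies between releases, so check your own dump
output before running it:

  # keep only lines whose first field looks like a pgid (e.g. "2.1a")
  ceph pg dump 2>/dev/null | awk '$1 ~ /^[0-9]+\./' | \
  while read pgid rest; do
      # assumed: with a single copy, the acting set prints as "[0]"
      # when the pg is mapped to osd.0
      if echo "$rest" | grep -q '\[0\]'; then
          ceph pg force_create_pg "$pgid"
      fi
  done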