Hi,

I'm doing a few tests on Ceph (radosgw, more precisely). One of the scenarios I'm testing is:

 - A radosgw bucket stored in a RADOS pool with size=1 (so no replication)
 - Complete/irrecoverable failure of an OSD (osd.0)

Obviously, in that situation some of the placement groups will be completely lost and there will be no way to get the data back, and I'm OK with that. But my current issue is that after rebuilding a new osd.0 from scratch, the PGs that were previously on it and nowhere else are "stuck stale", and I can't figure out how to tell the cluster that it's OK to lose that data and return to HEALTHY.

I tried running 'ceph osd lost 0' after I shut the OSD down and before starting it up again from scratch, but that didn't change anything.

So how can I make the cluster HEALTHY again?

Cheers,

Sylvain
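
P.S. For reference, here is a rough sketch of the sequence I'm describing. The pool name and PG counts are placeholders, and exact command flags may vary between Ceph releases:

    # Pool with no replication (placeholder name and PG count)
    ceph osd pool create rgw-test 64 64
    ceph osd pool set rgw-test size 1

    # After osd.0 fails irrecoverably and is taken out of the cluster:
    ceph osd out 0
    ceph osd lost 0        # some releases also require --yes-i-really-mean-it

    # After rebuilding osd.0 from scratch, check what is still stuck:
    ceph pg dump_stuck stale
    ceph health detail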