I can't bring the OSDs back. I thought that Ceph replicates data over hosts,
not only over OSDs, so I stopped two OSDs on one host and deleted the
data/OSDs. After that I saw the mistake...
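(For reference: whether replicas land on different hosts or merely on different OSDs is controlled by the pool's CRUSH rule. A minimal sketch of a host-separating rule; the rule name and root are illustrative, the chooseleaf step is the part that matters:

    rule data {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host   # place each replica on a distinct host
            step emit
    }

With "type osd" in that step instead, both replicas of a PG can sit on the same host, which is what makes deleting both OSDs on one host fatal.)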
On 19.12.2012 22:05, Samuel Just wrote:
Note, however, that it will render the objects previously stored there
permanently lost. Better would be bringing the osds in question back
up.
-Sam
On Wed, Dec 19, 2012 at 1:05 PM, Samuel Just <sam.just@xxxxxxxxxxx> wrote:
ceph pg force_create_pg <pgid> should cause it to be re-created empty.
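A concrete invocation, taking pg 2.80 from the output quoted below as an example:

    ceph pg force_create_pg 2.80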
-Sam
On Wed, Dec 19, 2012 at 12:43 PM, norbi <norbi@xxxxxxxxxx> wrote:
Hi List,
how can I delete non-existent PGs?
The OSDs where the PGs were stored have crashed, and now I see this:
pg 2.80 is stuck stale for 38971.810705, current state stale+active+clean, last acting [2,0]
pg 0.82 is stuck stale for 38971.810712, current state stale+active+clean, last acting [2,0]
pg 1.81 is stuck stale for 38971.810712, current state stale+active+clean, last acting [2,0]
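(As an aside: assuming the ceph CLI of this era supports it, stale PGs can also be listed in one go, which is handy for feeding them to force_create_pg:

    ceph pg dump_stuck stale
)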
and the osd.log shows me this:
2012-12-19 21:38:36.771016 7f2e5c518700 7 osd.2 3090 hit non-existent pg 0.82
2012-12-19 21:38:36.771024 7f2e5c518700 7 osd.2 3090 we are valid target for op, waiting
2012-12-19 21:38:36.771026 7f2e5c518700 15 osd.2 3090 require_same_or_newer_map 3086 (i am 3090) 0x17eec00
How can I delete these PGs?
norbert
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html