> It is all I get for this osd in the logs when I try to start it.

>>> osd 10, 37, 72 are startable

>> With those started, I'd repeat the original sequence and get a fresh pg
>> query to confirm that it still wants just osd.6.

> You mean the procedure with the loop, taking down the OSDs that the broken
> PGs are pointing to?
>
> pg 1.60 is down+remapped+peering, acting [66,40]
> pg 1.165 is down+peering, acting [67,88,48]
>
> for pg 1.60 <--> take osd.66 down, then check pg query in a loop?

>> Use ceph-objectstore-tool to export the pg from osd.6, stop some other
>> random osd (not one of these ones), import the pg into that osd, and start
>> it again. Once it is up, 'ceph osd lost 6'. The pg *should* peer at that
>> point. Repeat the same basic process with the other pg.

> I have already done 'ceph osd lost 6', do I need to do it once again?

/dev/sdb1       3,7T   34M  3,7T   1% /var/lib/ceph/osd/ceph-6

This disk has no data; the data was migrated off while this osd was still
able to come up.

--
Regards,
Łukasz Chrustek
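
For reference, the export/import sequence quoted above (>>) would look
roughly like the following. This is only a sketch: it assumes a FileStore
OSD with data under /var/lib/ceph/osd/ceph-<id> and a systemd-managed
cluster; pg 1.165 and osd.42 are placeholders here (osd.42 stands for any
healthy osd that is not in the acting sets listed above), and paths may
differ on your setup.

    # osd.6 is already down (it won't start); stop the target osd too
    systemctl stop ceph-osd@42

    # export the pg from osd.6's data directory
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-6 \
        --journal-path /var/lib/ceph/osd/ceph-6/journal \
        --pgid 1.165 --op export --file /tmp/pg1.165.export

    # import it into the target osd (the pgid comes from the export file)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-42 \
        --journal-path /var/lib/ceph/osd/ceph-42/journal \
        --op import --file /tmp/pg1.165.export

    # bring the target back up and mark the dead osd lost
    systemctl start ceph-osd@42
    ceph osd lost 6 --yes-i-really-mean-it

    # watch whether the pg peers now
    ceph pg 1.165 query

The same basic sequence would then be repeated for pg 1.60 with a
different target osd.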