>> And now it is very weird.... I made osd.37 up, and loop
>> while true; do ceph tell 1.165 query; done

> Here I need to explain more - all I did was start ceph-osd id=37 on
> the storage node; in ceph osd tree this osd is marked as out:
>
> -17  21.49995     host stor8
>  22   1.59999         osd.22    up  1.00000  1.00000
>  23   1.59999         osd.23    up  1.00000  1.00000
>  36   2.09999         osd.36    up  1.00000  1.00000
>  37   2.09999         osd.37    up        0  1.00000
>  38   2.50000         osd.38    up  1.00000  1.00000
>  39   2.50000         osd.39    up  1.00000  1.00000
>  40   2.50000         osd.40    up        0  1.00000
>  41   2.50000         osd.41  down        0  1.00000
>  42   2.50000         osd.42    up  1.00000  1.00000
>  43   1.59999         osd.43    up  1.00000  1.00000
>
> After starting this osd, ceph tell 1.165 query worked for only one
> call of the command.

>> catch this:
>> https://pastebin.com/zKu06fJn

here is the output for pg 1.60: https://pastebin.com/Xuk5iFXr

>> Can you tell what is wrong now?

>>>> > use ceph-objectstore-tool to export the pg from osd.6, stop some other
>>>> > random osd (not one of these ones), import the pg into that osd, and start
>>>> > it again. once it is up, 'ceph osd lost 6'. the pg *should* peer at that
>>>> > point. repeat the same basic process with the other pg.
>>>>
>>>> I have already done 'ceph osd lost 6', do I need to do it once again?

>>> Hmm, not sure; if the OSD is empty then there is no harm in doing it again.
>>> Try that first since it might resolve it. If not, do the query loop
>>> above.
>>> s

--
Regards,
 Łukasz Chrustek
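
For reference, here is a minimal sketch of the export/import procedure quoted
above, assuming the stray copy of pg 1.165 lives on osd.6 and osd.12 is the
randomly chosen target (the target OSD id, export file name and FileStore
data/journal paths are placeholders, not taken from the thread):

    # the source OSD (osd.6) must be stopped before running ceph-objectstore-tool
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-6 \
        --journal-path /var/lib/ceph/osd/ceph-6/journal \
        --pgid 1.165 --op export --file /tmp/pg1.165.export

    # stop the chosen target OSD (osd.12), then import the exported pg into it
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --op import --file /tmp/pg1.165.export

    # start the target OSD again, mark the old copy as lost, then re-check peering
    start ceph-osd id=12
    ceph osd lost 6 --yes-i-really-mean-it
    ceph tell 1.165 query

Both OSD daemons need to be stopped while ceph-objectstore-tool touches their
data paths; once the import finishes and osd.12 is back up, the pg query should
show whether peering completes.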