> The only way I know to recover is to create a new filesystem in the cluster :-)
> But it's bad for the data :-)
>
> When I get problems with one OSD, it seems as if they are crashing one by one,
> and I don't know how to get them up again without deleting all the data.

Which version of Ceph are you running? We hit a similar problem before, and I would like to know whether the recent fixes for OSD recovery (>= v0.25.1) have already resolved it.

Thanks,
Henry
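P.S. If you're not sure which version you're on, the ceph admin tool should report it (assuming the standard CLI from your build is installed), for example:

    $ ceph -v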