Is it possible (or meaningful) to revive old OSDs?

Hello,

I have a ten-node cluster with about 150 OSDs. One node went down a while back, several months ago, and its OSDs have been marked down and out ever since.

I am now in a position to return the node to the cluster, with all of its OS and OSD disks intact. But when I boot the now-working node, the OSDs do not start.

Essentially, each OSD seems to complain with "fail[ing] to load OSD map for [various epoch]s, got 0 bytes".

I'm guessing the OSDs' on-disk maps are so old that they can no longer rejoin the cluster?
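If it helps diagnose, my understanding is that the check is to compare the oldest OSD map the monitors still retain against the newest map each revived OSD has on disk; roughly something like this (osd.42 is just a placeholder id, and I may have the field names slightly wrong):

```shell
# On a monitor host: the range of OSD map epochs the cluster still keeps.
# If the cluster's oldest retained map is newer than the revived OSD's
# newest on-disk map, the OSD cannot catch up and fails to start.
ceph report | grep -E 'osdmap_(first|last)_committed'

# On the revived node, via the admin socket of a (briefly) started OSD,
# the daemon reports the map epochs it has locally (oldest_map/newest_map):
ceph daemon osd.42 status
```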

My questions are whether it is possible (or worth it) to squeeze these OSDs back into the cluster, or whether I should just replace them. And if replacing is the way to go, what is the best method: manually remove [1] and recreate, replace [2], or purge in the dashboard?

[1] https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#removing-osds-manual
[2] https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#replacing-an-osd
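For reference, my reading of the manual path [1] for OSDs that are already down and out is roughly the following (osd.42, the host name, and the device path are placeholders; please correct me if I've misread the docs):

```shell
# The OSDs are already out, so rebalancing finished long ago.
# Remove each dead OSD from the CRUSH map, delete its auth key,
# and remove it from the OSD map:
ceph osd crush remove osd.42
ceph auth del osd.42
ceph osd rm 42

# Or, equivalently, the single-command form:
# ceph osd purge 42 --yes-i-really-mean-it

# Then wipe the old disk so the orchestrator can redeploy onto it:
# ceph orch device zap <host> /dev/sdX --force
```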

Many thanks!

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


