cephadm node failure (re-use OSDs instead of reprovisioning)

Hi list,

In case of an OS disk failure on a cephadm-managed storage node, is there a way to redeploy Ceph on the (reinstalled) node while leaving the data (OSDs) intact?

So instead of removing the storage node, letting the cluster recover, redeploying the storage node, and letting the cluster recover again, I would like to skip both recovery steps (when all that is broken is an OS disk) and accept only a little bit of recovery for the time the OSDs were down.
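
Ideally something like the following would work (a hypothetical sketch -- the host name and address are made up, and I don't know whether cephadm has an equivalent of the activate step):

  # after reinstalling the OS and installing cephadm on the node
  ceph orch host add osd-node1 10.0.0.11   # re-add the host to the orchestrator
  ceph cephadm osd activate osd-node1      # re-activate the existing OSDs -- does this exist?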

With a package-based install this is achieved pretty easily: reinstall the OS and Ceph, make sure the Ceph keyrings are in the right place with the right owner (ceph), run ceph-volume lvm activate --all, and you are back in business.
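
For reference, the package-based flow looks roughly like this (a minimal sketch; the package manager command and the node I copy the config/keyrings from are just examples):

  # reinstall the OS, then the Ceph packages (distro-specific)
  apt-get install ceph

  # restore ceph.conf and the keyrings, e.g. from another node,
  # and make sure the ceph user owns them
  scp mon-node:/etc/ceph/ceph.conf /etc/ceph/
  scp mon-node:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/
  chown -R ceph:ceph /etc/ceph /var/lib/ceph

  # scan the intact OSD volumes, mount their tmpfs dirs and start the OSD services
  ceph-volume lvm activate --all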

Thanks,

Stefan



