[ Ceph Failover ] Using the Ceph OSD disks from the failed node.

Hello Everyone,
We have a three-node Ceph setup. In this setup, we want to test a
complete node failover and reuse the old OSD disks from the failed
node. We are following this Red Hat document:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/operations_guide/index#handling-a-node-failure
(see the section "Replacing the node, reinstalling the operating
system, and using the Ceph OSD disks from the failed node")
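
Our rough understanding of that section is the following (the commands
below are our own interpretation, not copied from the document, and we
run them on the reinstalled node after restoring /etc/ceph/ceph.conf):

  # List the LVM volumes that still carry OSD metadata from before the
  # reinstall; the OSD id and fsid are kept in LVM tags on the volumes.
  ceph-volume lvm list

  # Re-activate all OSDs found on those volumes, which recreates the
  # systemd units and starts the OSD daemons again.
  ceph-volume lvm activate --all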

However, these steps are not clear to us, and we have not been able to
bring the Ceph OSDs from the old node back into the cluster.

If anyone knows steps to try, please suggest them.
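
For completeness, after reinstalling the OS we redeploy the node with
ceph-ansible roughly as follows (the inventory file name and host name
are from our lab; a containerized setup would use site-container.yml):

  # Re-run the playbook limited to the replaced node only, so the
  # other two nodes are left untouched.
  ansible-playbook -i hosts site.yml --limit cephnode3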

Ceph Version: Octopus 15.2.7
Ceph-Ansible: 5.0.x
OS: CentOS 8.3

-- 
~ Lokendra
skype: lokendrarathour
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


