Re: [ Ceph Failover ] Using the Ceph OSD disks from the failed node.

Good evening,

On 7/21/21 10:44 AM, Lokendra Rathour wrote:
Hello Everyone,

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/operations_guide/index#handling-a-node-failure
Refer to the section "Replacing the node, reinstalling the operating system, and using the Ceph OSD disks from the failed node."

But somehow these steps are not clear, and we are not able to retrieve the Ceph OSDs from the old node.

Since I had reinstalled all my nodes, it was really just a matter of running `ceph-volume lvm activate --all`; it picked up all the OSDs it could identify and started them again.

That is the BlueStore-specific command, though.
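
For what it's worth, the rough sequence on a reinstalled node is sketched below. This is only a minimal sketch, assuming ceph-ansible has already reinstalled the packages and placed ceph.conf and the bootstrap-osd keyring on the node; the OSD id shown is illustrative:

    # List the OSD logical volumes ceph-volume can find on the old disks
    ceph-volume lvm list

    # Activate every detected BlueStore OSD: mounts the tmpfs metadata
    # and starts the corresponding ceph-osd systemd units
    ceph-volume lvm activate --all

    # Verify the OSDs came up and rejoined the cluster
    systemctl status ceph-osd@3    # OSD id 3 is just an example
    ceph osd tree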

Ceph Version: Octopus 15.2.7
Ceph-Ansible: 5.0.x
OS: Centos 8.3

Please don't bomb that many mailing lists.

Best regards
Thore
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


