Re: Unable to add OSD after removing completely

Thank you for your prompt response, Anthony.

I have fixed the problem.

As I had already removed all the OSDs from my third node, this time I removed the ceph-node3 host from the Ceph cluster entirely and then re-added it as a new node. I proceeded as follows:

ceph osd crush remove ceph-node3
ceph orch host drain ceph-node3
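Before removing the host, it is worth confirming the drain actually finished. A minimal check, using standard cephadm orchestrator commands (not part of the original procedure):

```shell
# List daemons still scheduled on the host; the drain is complete
# when this returns no daemons for ceph-node3.
ceph orch ps ceph-node3

# Optionally, watch OSDs that are still being evacuated/removed.
ceph orch osd rm status
```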

Once the node had been drained of all its services:

ceph orch host rm ceph-node3 --offline --force
ceph orch apply osd --all-available-devices --unmanaged=false

Then I logged into ceph-node3, disabled and removed all Ceph-related services, removed all Ceph-related Docker images, and deleted the Ceph-related directories on the filesystem, notably /etc/ceph, /var/lib/ceph/, and /var/log/ceph. I then went back to ceph-node1, where the cephadm orchestrator runs, and added ceph-node3 back into the cluster:
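The manual cleanup on ceph-node3 can be sketched roughly as below. This is an illustrative reconstruction rather than the exact commands used; it assumes a Docker-based cephadm deployment on a systemd host, and `<cluster-fsid>` is a placeholder for the fsid reported by `ceph fsid` on a surviving node:

```shell
# On ceph-node3: stop and disable all Ceph systemd units for this host.
systemctl stop ceph.target
systemctl disable ceph.target

# cephadm can wipe the node's daemons and data in one step
# (substitute the real cluster fsid for the placeholder).
cephadm rm-cluster --force --fsid <cluster-fsid>

# Remove any remaining Ceph container images (Docker runtime assumed).
docker images --format '{{.Repository}} {{.ID}}' | \
  awk '/ceph/ {print $2}' | xargs -r docker rmi -f

# Remove leftover Ceph directories, as noted above.
rm -rf /etc/ceph /var/lib/ceph /var/log/ceph
```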

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node3
ceph orch host add ceph-node3 10.10.10.13

Thus the node was re-added to the cluster; this time all the non-RAID hard drives were automatically added as OSDs, and the cluster is returning to a normal state. The degraded PGs are currently recovering.
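To watch the recovery and confirm the new OSDs are in place, the usual status commands apply (nothing here is specific to this setup):

```shell
# Overall cluster health, including recovery/backfill progress.
ceph -s

# Confirm the re-added host's OSDs are up and in the CRUSH tree.
ceph osd tree

# Detailed view of any remaining degraded PGs.
ceph health detail
```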

Thank you

> Anthony D'Atri wrote:
> You probably have the H330 HBA, rebadged LSI.  You can set the “mode” or “personality”
> using storcli / perccli.  You might need to remove the VDs from them too.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





