Unable to add OSD after removing completely




I have a Ceph cluster deployed with the Cephadm orchestrator. It consists of three Dell PowerEdge R730XD servers, and the hard drives used as OSDs were configured as RAID 0. The configuration summary is as follows:
ceph-node1 (mgr, mon)
  Public network:
  Cluster network:
ceph-node2 (mgr, mon)
  Public network:
  Cluster network:
ceph-node3 (mon)
  Public network:
  Cluster network:

Recently I removed all OSDs from node3 with the following set of commands:
  sudo ceph osd out osd.3
  sudo systemctl stop ceph@osd.3.service
  sudo ceph osd rm osd.3
  sudo ceph osd crush rm osd.3
  sudo ceph auth del osd.3

After this, I reconfigured all of the OSD hard drives as non-RAID from the server settings and tried to add them as OSDs again. First I used the following command to add them automatically:
  ceph orch apply osd --all-available-devices --unmanaged=false
But this generated the following error in my Ceph GUI console:
  CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.all-available-devices
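(The dashboard alert itself does not show the underlying failure; assuming a recent cephadm release, the health detail and the orchestrator's service status output usually do — the "events" section of the YAML output often carries the real error:)
  sudo ceph health detail
  sudo ceph orch ls osd --format yaml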
I am also unable to add the hard drives manually with the following command:
  sudo ceph orch daemon add osd ceph-node3:/dev/sdb
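(One possible cause, offered as a guess: drives that previously carried RAID or LVM metadata are often reported as unavailable by the orchestrator until they are wiped. Listing the devices and zapping them may help; the host and device path below are taken from the command above and may need adjusting:)
  sudo ceph orch device ls ceph-node3 --refresh
  sudo ceph orch device zap ceph-node3 /dev/sdb --force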

Can anyone please help me with this issue?

I really appreciate any help you can provide.
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
