You probably have the H330 HBA, a rebadged LSI controller. You can set the "mode" or "personality" using storcli / perccli. You might need to remove the VDs from them too.

> On Feb 12, 2024, at 7:53 PM, salam@xxxxxxxxxxxxxx wrote:
>
> Hello,
>
> I have a Ceph cluster created by the Cephadm orchestrator. It consists of 3 Dell PowerEdge R730XD servers. The hard drives used as OSDs in this cluster were configured as RAID 0. The configuration summary is as follows:
>
> ceph-node1 (mgr, mon)
> Public network: 172.16.7.11/22
> Cluster network: 10.10.10.11/24, 10.10.10.14/24
> ceph-node2 (mgr, mon)
> Public network: 172.16.7.12/22
> Cluster network: 10.10.10.12/24, 10.10.10.15/24
> ceph-node3 (mon)
> Public network: 172.16.7.13/22
> Cluster network: 10.10.10.13/24, 10.10.10.16/24
>
> Recently I removed all OSDs from node3 with the following set of commands:
>
> sudo ceph osd out osd.3
> sudo systemctl stop ceph@osd.3.service
> sudo ceph osd rm osd.3
> sudo ceph osd crush rm osd.3
> sudo ceph auth del osd.3
>
> After this, I configured all OSD hard drives as non-RAID in the server settings and tried to add the drives as OSDs again. First I used the following command to add them automatically:
>
> ceph orch apply osd --all-available-devices --unmanaged=false
>
> But this generated the following error in my Ceph GUI console:
>
> CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.all-available-devices
>
> I am also unable to add the hard drives manually with the following command:
>
> sudo ceph orch daemon add osd ceph-node3:/dev/sdb
>
> Can anyone please help me with this issue?
>
> I really appreciate any help you can provide.
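
Roughly something like this with perccli (an untested sketch; it assumes perccli64 is installed on the node, that the controller enumerates as /c0, and that the VD/enclosure/slot numbers shown are examples -- some H330 firmware has no "personality" setting and you flag the individual drives as JBOD instead):

    # show the controller and its current mode
    perccli64 /c0 show
    # list and delete any leftover RAID 0 virtual drives (VD number is an example)
    perccli64 /c0/vall show
    perccli64 /c0/v0 del force
    # switch the whole controller to HBA mode, if the firmware supports it
    perccli64 /c0 set personality=HBA
    # ...or enable JBOD and expose drives individually (enclosure/slot are examples)
    perccli64 /c0 set jbod=on
    perccli64 /c0/e32/s2 set jbod

Changing the personality typically needs a reboot before the OS sees the raw disks.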
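
On the Ceph side, drives that used to carry RAID 0 VDs / old OSDs usually still have LVM or partition signatures on them, so cephadm reports them as not available and the osd spec fails to apply. Something along these lines should show and clear that (hostname and device path are the ones from your mail; zap is destructive, so double-check the device names first):

    # see why the spec failed and whether the drives show as Available
    ceph health detail
    ceph orch device ls
    # wipe the old signatures so cephadm will consider the drive
    ceph orch device zap ceph-node3 /dev/sdb --force
    # then re-add it manually, or let the all-available-devices spec pick it up
    ceph orch daemon add osd ceph-node3:/dev/sdb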