If I'm understanding your question correctly and you're asking how to actually remove the SAN osds from ceph, then it doesn't matter what is using the storage (i.e. openstack, cephfs, krbd, etc.) as the steps are the same.
I'm going to assume that you've already added the new storage/osds to the cluster, weighted the SAN osds to 0.0, and that the backfilling has finished. If that is true, then the used space on the SAN osds should be basically empty, while the new osds on the local disks should hold a fair amount of data.

If that is the case, then for every SAN osd you just run the following commands, replacing OSD_ID with the osd's id:

# On the server with the osd being removed
sudo stop ceph-osd id=OSD_ID

# From any node with an admin keyring
ceph osd down OSD_ID
ceph osd out OSD_ID
ceph osd crush remove osd.OSD_ID
ceph auth del osd.OSD_ID
ceph osd rm OSD_ID

Try those commands on a single test osd first. If you had set the weight of that osd to 0.0 previously and the backfilling had finished, then what you should see is that your cluster has 1 less osd than it used to, and no pgs should be backfilling.

HOWEVER, if my assumptions above are incorrect, please provide the output of the following commands and try to clarify your question:

ceph status
ceph osd tree
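In case the reweighting is the part you still need, here's a rough sketch of what I mean by weighting the SAN osds to 0.0 and waiting for the backfill. The osd ids 10, 11 and 12 are just placeholders for whatever your SAN osds actually are:

# Set the crush weight of each SAN osd to 0.0 so its data migrates to the new osds
ceph osd crush reweight osd.10 0.0
ceph osd crush reweight osd.11 0.0
ceph osd crush reweight osd.12 0.0

# Wait for the backfilling to finish; no pgs should be listed as
# backfilling or recovering before you start removing osds
ceph status
ceph osd df    # the SAN osds should show essentially no data used

(If your ceph release doesn't have ceph osd df, checking df on the osd hosts themselves works too.)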
I hope this helps.

> Hello David,
>
> Can you help me with steps/Procedure to uninstall Ceph storage from openstack environment?
>
>
> Regards
> Gaurav Goyal