Re: Fwd: Ceph Storage Migration from SAN storage to Local Disks

Hi Gaurav,

There are several ways to do this, depending on how you deployed your Ceph cluster. The easiest is to use ceph-ansible, which ships a ready-made purge-cluster playbook to wipe Ceph off the nodes:

https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml

You will need an Ansible inventory listing your Ceph hosts.
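For reference, a minimal sketch of what that might look like (the host names and inventory path below are placeholders; adjust the groups to match your deployment):

# hypothetical inventory file, e.g. ./hosts
[mons]
ceph-mon1
ceph-mon2
ceph-mon3

[osds]
ceph-osd1
ceph-osd2

# run the purge playbook from the ceph-ansible checkout against that inventory
ansible-playbook -i ./hosts purge-cluster.yml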

Otherwise, if you want to purge manually with ceph-deploy, you can follow: http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
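In that case the sequence is roughly the following, run from your admin node (node1/node2/node3 are placeholders for your Ceph hosts):

# wipe the Ceph data and configuration from each node
ceph-deploy purgedata node1 node2 node3
# remove the authentication keys from the local admin directory
ceph-deploy forgetkeys
# and, to remove the Ceph packages as well
ceph-deploy purge node1 node2 node3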


Thanks
Bharath

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Gaurav Goyal <er.gauravgoyal@xxxxxxxxx>
Date: Thursday, August 4, 2016 at 8:19 AM
To: David Turner <david.turner@xxxxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

Could you please suggest a procedure for this uninstallation?


Regards
Gaurav Goyal

On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal <er.gauravgoyal@xxxxxxxxx> wrote:

Thanks for your prompt response!

The situation is a bit different now. The customer wants us to remove the Ceph storage configuration from scratch, let the OpenStack system run without Ceph for now, and later install Ceph with local disks.

So I need to know a procedure to uninstall Ceph and remove its configuration from OpenStack.

Regards
Gaurav Goyal
On 03-Aug-2016 4:59 pm, "David Turner" <david.turner@xxxxxxxxxxxxxxxx> wrote:
If I'm understanding your question correctly and you're asking how to actually remove the SAN OSDs from Ceph, then it doesn't matter what is using the storage (i.e. OpenStack, CephFS, krbd, etc.); the steps are the same.

I'm going to assume that you've already added the new storage/OSDs to the cluster, weighted the SAN OSDs to 0.0, and that the backfilling has finished.  If so, the used space on the SAN OSDs should be basically empty, while the new OSDs on the local disks should hold a fair amount of data.  If that is the case, then for every SAN OSD, run the following commands, replacing OSD_ID with the OSD's id:
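(If you still need to do that draining step, a minimal sketch would be something like the following for each SAN OSD, again substituting OSD_ID; wait until all PGs are active+clean again before removing anything.)

# drain a SAN osd by weighting it to 0 in the crush map
ceph osd crush reweight osd.OSD_ID 0.0

# watch the backfill progress and per-osd utilisation
ceph -s
ceph osd df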

# On the server with the osd being removed
sudo stop ceph-osd id=OSD_ID
ceph osd down OSD_ID
ceph osd out OSD_ID
ceph osd crush remove osd.OSD_ID
ceph auth del osd.OSD_ID
ceph osd rm OSD_ID

Test those commands on one OSD first. If you had already set that OSD's weight to 0.0 and the backfilling had finished, you should see that your cluster has one less OSD than it used to and that no PGs are backfilling.

HOWEVER, if my assumptions above are incorrect, please provide the output of the following commands and try to clarify your question.

ceph status
ceph osd tree

I hope this helps.

> Hello David,
>
> Can you help me with steps/Procedure to uninstall Ceph storage from openstack environment?
>
>
> Regards
> Gaurav Goyal
________________________________

David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943

________________________________
If you are not the intended recipient of this message or received it erroneously, please notify the sender and delete it, together with any attachments, and be advised that any dissemination or copying of this message is prohibited.

________________________________


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
