best way to delete all OSDs and start over

I am wondering what the best way is to delete a cluster, remove all
the OSDs, and basically start over. I plan to create a few Ceph test
clusters to determine what works best for our use case. There is no
real data being stored, so I don't care about data loss.

I have CephFS set up on top of two pools: data and metadata. Presumably
I can remove this easily with 'ceph fs rm'.
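
For reference, this is what I was planning to run. I'm assuming the
filesystem is literally named 'cephfs' and that it has to be failed
before it can be removed:

  # Stop the MDS ranks and mark the filesystem down
  ceph fs fail cephfs
  # Remove the filesystem definition (this does not delete the pools)
  ceph fs rm cephfs --yes-i-really-mean-it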

1. Do I need to delete the OSD pools? (My rough plan for this is
sketched after this list.)
2. How do I remove the OSDs from the cluster without Ceph doing its
usual thing and rebalancing data onto the remaining OSDs? (Also
sketched below.)
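
For question 1, my rough plan is below. I'm assuming the pools are
literally named 'data' and 'metadata', and that pool deletion has to
be enabled on the monitors first:

  # Pool deletion is disabled by default
  ceph config set mon mon_allow_pool_delete true
  # The pool name must be given twice as a safety check
  ceph osd pool delete data data --yes-i-really-really-mean-it
  ceph osd pool delete metadata metadata --yes-i-really-really-mean-it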
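
For question 2, my understanding (which may well be wrong) is that
setting the recovery-related cluster flags before touching the OSDs
stops Ceph from trying to move data around:

  # Keep the cluster from rebalancing/backfilling/recovering
  ceph osd set norebalance
  ceph osd set nobackfill
  ceph osd set norecover
  # Keep stopped OSDs from being marked out automatically
  ceph osd set noout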

I read the "Removing OSDs (Manual)" documentation page,
https://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual,
but I want to remove ALL OSDs from the cluster. Is that still the right
set of steps/commands?
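
In case it helps anyone check my reasoning, this is the per-OSD
sequence I pieced together from that page, with <id> standing for
each OSD's numeric id, plus the loop I was considering using the
newer 'ceph osd purge' shortcut (which I believe combines the last
three manual steps):

  # Per-OSD manual removal, as I read the documentation:
  ceph osd out <id>
  systemctl stop ceph-osd@<id>    # run on the host carrying the OSD
  ceph osd crush remove osd.<id>
  ceph auth del osd.<id>
  ceph osd rm <id>

  # Or, on Luminous and later, loop the one-step purge over all OSDs:
  for id in $(ceph osd ls); do
      ceph osd purge "$id" --yes-i-really-mean-it
  done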

Thanks for any insight for a Ceph newbie.

PS - If it matters, the servers running ceph-mon and ceph-mgr are on
separate machines from the servers running ceph-osd.

Sincerely,
Shawn Kwang
-- 
Associate Scientist
Center for Gravitation, Cosmology, and Astrophysics
University of Wisconsin-Milwaukee
office: +1 414 229 4960


