existing ceph cluster - clean start

Hello,

I am planning to make some changes to our ceph cluster and would like to ask the community about the best route to take.

Our existing cluster is made up of 3 osd servers (two of which are also mon servers), with 3 mon servers in total. The cluster currently runs Ubuntu 14.04.x LTS. Because of historical testing, troubleshooting and the way the cluster was originally set up, the servers are not configured uniformly (software-wise), and I would like to standardise as many things as possible. I am slowly migrating to Saltstack for infrastructure management and would like to manage my ceph cluster with Salt as well.

My initial thought is to start with a clean Ubuntu 16.04 server install, connect it to the salt server and manage all software installs through Salt. This should ensure that all servers end up pretty much identical in terms of software.
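
Something along these lines is what I have in mind for the Salt side (just a rough sketch; the 'ceph-*' minion target and the state names are made up):

    # rough idea of the salt side (target and state names are made up)
    salt 'ceph-*' state.sls ceph.packages   # install the same ceph release on every node
    salt 'ceph-*' state.sls ceph.config     # push an identical /etc/ceph/ceph.conf everywhere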

My question is: what is the best way to migrate the existing cluster without downtime? Should I wipe the OS on one of the osd servers (wiping only the OS disk, not the osd/journal disks), reinstall the OS with Salt and point ceph at the existing osds? After that, do the same with the second osd server and finally with the third.

Is ceph smart enough to figure out that the osds belong to an existing cluster and to join the reinstalled osd server back into the cluster? If so, I assume this would be the fastest way to achieve it. If not, what is the best route to take?
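
For reference, this is roughly what I imagine the re-activation would look like after a reinstall (assuming ceph-disk is still the activation mechanism and that /etc/ceph/ceph.conf and the keyrings have been restored first):

    # assumption: ceph-disk still handles activation and ceph.conf plus the
    # keyrings are already back in place before running these
    ceph-disk list            # check the data/journal partitions are still recognised
    ceph-disk activate-all    # mount the existing osd partitions and start the daemons
    ceph osd tree             # confirm the osds have rejoined under the correct host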

Many thanks

Andrei
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
