Re: how to swap osds between servers

On 03.09.2018 17:42, Andrei Mikhailovsky wrote:
Hello everyone,

I am in the process of adding an additional osd server to my small ceph cluster as well as migrating from filestore to bluestore. Here is my setup at the moment:

Ceph - 12.2.5 , running on Ubuntu 16.04 with latest updates
3 x osd servers with 10x3TB SAS drives, 2 x Intel S3710 200GB ssd and 64GB ram in each server. The same servers are also mon servers.

I am adding the following to the cluster:
1 x osd+mon server with 64GB of ram, 2xIntel S3710 200GB ssds.
Adding 4 x 6TB disks and 2x 3TB disks.

Thus, the new setup will have the following configuration:
4 x osd servers with 8x3TB SAS drives and 1x6TB SAS drive, 2 x Intel S3710 200GB ssd and 64GB ram in each server. This will make sure that all servers have the same amount/capacity drives. There will be 3 mon servers in total.

As a result, I will have to remove 2 x 3TB drives from each of the existing three osd servers, place them into the new osd server, and add a 6TB drive into each existing osd server.

Those 6 x 3TB drives taken from the existing osd servers will still have data stored on them when they are moved to the new server. What is the best way to do this? I would like to minimise data migration, as it creates havoc on cluster performance. What is the best workflow to achieve this hardware upgrade? If I add the new osd host to the cluster and physically move an osd disk from one server to another, will it be recognised and accepted by the cluster?

Data will migrate no matter how you change the crush map. Since you want to migrate to bluestore, this is also unavoidable.

If it is critical data and you want to minimize impact, I prefer to do it the slow and steady way: add a new bluestore drive to the new host with weight 0 and gradually increase its weight, while gradually lowering the weight of the filestore drive being removed.
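A rough sketch of that gradual reweight, using `ceph osd crush reweight`. The OSD IDs and weights here are hypothetical placeholders, not from the original post; pick increments that suit your cluster and wait for HEALTH_OK between steps.

```shell
# Hypothetical IDs: osd.30 = new bluestore drive, osd.7 = filestore drive
# being retired. Weights are in crush units (roughly TB).

# Start the new bluestore OSD with zero crush weight so it takes no data yet
ceph osd crush reweight osd.30 0

# Shift weight in small steps; let the cluster settle between each pair
ceph osd crush reweight osd.30 0.5
ceph osd crush reweight osd.7  2.2
# ...repeat until osd.30 reaches the drive's full weight (e.g. 2.7 for 3TB)
# and osd.7 reaches 0, at which point osd.7 can be removed safely
```

Stepping both weights together keeps the amount of misplaced data small at any one time, which is what limits the client-visible impact.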

A worse option, if you do not have a drive to spare for that, is to gradually drain a drive, remove it from the cluster, move it over, zap it, recreate it as bluestore, and gradually fill it again. This takes longer, and if you have space issues it can get complicated.
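The drain/remove/recreate cycle above roughly maps to the following commands. The OSD ID and device path are placeholders; wait for all PGs to be active+clean after the drain before removing anything.

```shell
# Hypothetical osd.7 on device /dev/sdX.

# Drain: mark the OSD out so its PGs migrate to the remaining drives
ceph osd out osd.7

# Once the cluster is healthy again, stop the daemon and remove the OSD
systemctl stop ceph-osd@7
ceph osd crush remove osd.7
ceph auth del osd.7
ceph osd rm osd.7

# Physically move the drive, then wipe it and recreate it as bluestore
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm create --bluestore --data /dev/sdX

# The new OSD starts at its full crush weight by default; to refill it
# gradually instead, create it, reweight to 0, then step the weight up
```

This is the same end state as the reweight approach, but the data for that drive moves twice (off it, then back onto it), which is why it takes longer.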

An even worse option is to move the osd drive over (with its journal and data) and have the cluster shuffle all the data around; that is a big impact. And afterwards you are still running filestore, so you still need to migrate to bluestore anyway.

kind regards
Ronny Aasen

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com