Updating CRUSH location on all nodes of a cluster

Hello everyone,

We have a Ceph cluster (running 14.2.2) which already holds dozens of TB of data and... we did not set the location of the OSD hosts. The hosts are spread across 2 datacenters. We would like to update the locations of all the hosts so that not all replicas end up in a single DC.
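For context, the end state we are after presumably needs datacenter buckets in the CRUSH tree plus a rule that spreads replicas across them. Our understanding is that a common pattern looks something like this (rule name, id, and bucket names are just examples):

```
# Example rule: pick 2 datacenters, then up to 2 hosts in each;
# with size=3 this places 2 replicas in one DC and 1 in the other.
rule replicated_across_dcs {
    id 1
    type replicated
    min_size 2
    max_size 4
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}
```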

We are wondering how we should go about this.

1. Changing the locations of all hosts at once

We are worried that this will generate too much IO and network activity all at once (and, AFAIK, there is no way to pause or throttle it). Maybe this is not actually an issue?
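For reference, here is roughly what we think the all-at-once change would look like (bucket and host names are made up). Setting the `norebalance` flag while editing the map, and lowering the backfill settings before unsetting it, might at least gate the resulting data movement:

```shell
# Create datacenter buckets under the root (names are examples)
ceph osd crush add-bucket dc1 datacenter
ceph osd crush add-bucket dc2 datacenter
ceph osd crush move dc1 root=default
ceph osd crush move dc2 root=default

# Hold off rebalancing while the map is being edited
ceph osd set norebalance

# Move every host into its datacenter bucket
ceph osd crush move host1 datacenter=dc1
ceph osd crush move host2 datacenter=dc2
# ... repeat for the remaining hosts ...

# Throttle recovery before letting the rebalance start
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

ceph osd unset norebalance
```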

2. Changing the locations of a couple hosts to reduce data movement

We are afraid that if we set 2 hosts to DC1, 2 hosts to DC2, and leave the rest as-is, Ceph will behave as if there were 3 DCs and will try to fill those 4 hosts with as many replicas as possible until they are full.
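Whichever option we pick, we would want to see which PGs end up (or already are) with all their replicas in one DC. That check seems easy to script from the acting sets in `ceph pg dump`; a minimal sketch, where all OSD ids, PG ids, and the OSD-to-DC mapping are made-up examples:

```python
# Sketch: count how many replicas of each PG land in each datacenter,
# given an osd -> datacenter mapping and acting sets from `ceph pg dump`.
from collections import Counter

# Hypothetical: which datacenter each OSD's host sits in
osd_to_dc = {0: "dc1", 1: "dc1", 2: "dc2", 3: "dc2", 4: "dc1", 5: "dc2"}

# Hypothetical acting sets, e.g. parsed from `ceph pg dump` JSON
pg_acting = {
    "2.0": [0, 1, 4],   # all three replicas in dc1 -- the case to avoid
    "2.1": [0, 2, 5],
}

def replicas_per_dc(acting):
    """Map a PG's acting set to a count of replicas per datacenter."""
    return Counter(osd_to_dc[osd] for osd in acting)

for pgid, acting in pg_acting.items():
    per_dc = replicas_per_dc(acting)
    if len(per_dc) < 2:
        print(f"PG {pgid} has all replicas in {next(iter(per_dc))}")
```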

3. Moving PGs ahead of the change?

Maybe we could move PGs so that each PG has a replica on an OSD in each DC *before* updating the CRUSH map, so that the update does not actually have to move any data? (This would let us do the work at our own pace.)
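If this option is viable, the upmap machinery seems like the natural tool for it; a sketch (pool/PG/OSD ids below are made up, and this requires clients to be at least luminous):

```shell
# upmap requires all clients to speak luminous or newer
ceph osd set-require-min-compat-client luminous

# Remap one replica of PG 2.1a from osd.3 to osd.7 (example ids),
# e.g. to put it on a host in the other datacenter
ceph osd pg-upmap-items 2.1a 3 7

# Later, once the CRUSH locations are set, drop the explicit mapping
ceph osd rm-pg-upmap-items 2.1a
```

One caveat we are unsure about: upmap entries that violate the CRUSH rule after the map change may be cleaned up automatically, so this might only buy us pacing rather than avoiding the movement entirely.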

4. Something else?

Thank you for your time and your help. :)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
