Re: Updating crush location on all nodes of a cluster

Hey Martin,

Alright then, we'll just go with updating every OSD's location at once; we just wanted to be sure this was not a problem. :)

On Tue, Oct 22, 2019 at 1:21 PM Martin Verges <martin.verges@xxxxxxxx> wrote:
Hello Alexandre,

Maybe take a look at https://www.youtube.com/watch?v=V33f7ipw9d4, where you can see how easily Ceph CRUSH can be managed.
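
For reference, a rough sketch of the kind of commands involved (the bucket names DC1/DC2 and host names host1/host2 are just placeholders for your own):

    ceph osd crush add-bucket DC1 datacenter      # create a bucket per datacenter
    ceph osd crush add-bucket DC2 datacenter
    ceph osd crush move DC1 root=default          # hang them under the default root
    ceph osd crush move DC2 root=default
    ceph osd crush move host1 datacenter=DC1      # move each host into its datacenter
    ceph osd crush move host2 datacenter=DC2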

1. Changing the locations of all hosts at once
We are worried that this will generate too much IO and network activity (and there is no way to pause / throttle this AFAIK). Maybe this is not actually an issue?

Just configure the cluster so that recovery runs slowly before changing the CRUSH map. Typical options that might help you are "osd_recovery_sleep_hdd|hybrid|ssd" and "osd_max_backfills".
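
For example, something along these lines (the values are just placeholders, tune them to your cluster):

    ceph config set osd osd_recovery_sleep_hdd 0.2      # add a pause between recovery ops on HDD OSDs
    ceph config set osd osd_recovery_sleep_ssd 0.05     # same for SSD OSDs
    ceph config set osd osd_recovery_sleep_hybrid 0.1   # same for HDD data + SSD DB/journal OSDs
    ceph config set osd osd_max_backfills 1             # limit concurrent backfills per OSD

If you really need to pause the data movement completely, you can also set the norebalance/nobackfill flags with "ceph osd set ..." and unset them again later.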

2. Changing the locations of a couple hosts to reduce data movement
We are afraid that if we set 2 hosts to DC1, 2 hosts to DC2 and leave the rest as-is, Ceph will behave as if there are 3 DCs and will try to fill those 4 hosts with as many replicas as possible until they are full.
 
If you leave any data unsorted, you will never know which copies of your data become unavailable when something fails. In fact, with such a setup you will have a service impact if one data center fails.
Do you use an EC configuration suitable for a 2-DC setup, or do you use replication and want to tolerate 2 missing copies at the same time?
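
As an illustration only, assuming you go with 4 replicas (2 per DC): a replicated CRUSH rule along these lines would place two copies in each datacenter (rule name and id are arbitrary; add it to the decompiled CRUSH map with crushtool and inject it back with "ceph osd setcrushmap"):

    rule replicated_2dc {
        id 1
        type replicated
        min_size 1
        max_size 4
        step take default
        step choose firstn 2 type datacenter    # pick both datacenters
        step chooseleaf firstn 2 type host      # then 2 different hosts in each
        step emit
    }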

3. Try and move PGs ahead of the change?
Maybe we could move PGs so that each PG has a replica on an OSD in each DC *before* updating the CRUSH map, so that the update does not actually have to move any data (which would allow us to do this at the desired pace)?

Maybe PG upmap is something you can use for this, but your cluster hardware and configuration should always be able to handle a rebalance workload like this without impacting your clients. See 1.
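
A minimal sketch of what that could look like (the PG and OSD ids here are made up; pick a target OSD in the other DC for each PG you want to pre-place):

    ceph osd set-require-min-compat-client luminous    # upmap needs luminous or newer clients
    ceph osd pg-upmap-items 3.1a 12 45                  # remap the copy of PG 3.1a from osd.12 to osd.45

Keep in mind that the monitors may clean up upmap entries that contradict the CRUSH rule, so a common approach is the other way around: apply the CRUSH change first, use upmap to map the affected PGs back to where they currently sit, and then remove those entries at your own pace.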

4. Something else?
Thank you for your time and your help. :)

You are welcome, as is every Ceph user! ;)

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Tue, Oct 22, 2019 at 11:37 AM Alexandre Berthaud <alexandre.berthaud@xxxxxxxxxxxxxxxx> wrote:
Hello everyone,

We have a Ceph cluster (running 14.2.2) which already holds dozens of TB of data and... we did not set the location of the OSD hosts. The hosts are located in 2 datacenters. We would like to update the locations of all the hosts so that not all replicas end up in a single DC.

We are wondering how we should go about this.

1. Changing the locations of all hosts at once

We are worried that this will generate too much IO and network activity (and there is no way to pause / throttle this AFAIK). Maybe this is not actually an issue?

2. Changing the locations of a couple hosts to reduce data movement

We are afraid that if we set 2 hosts to DC1, 2 hosts to DC2 and leave the rest as-is, Ceph will behave as if there are 3 DCs and will try to fill those 4 hosts with as many replicas as possible until they are full.

3. Try and move PGs ahead of the change?

Maybe we could move PGs so that each PG has a replica on an OSD in each DC *before* updating the CRUSH map, so that the update does not actually have to move any data (which would allow us to do this at the desired pace)?

4. Something else?

Thank you for your time and your help. :)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
