Re: Moving OSD node from root bucket to defined 'rack' bucket


 



I would still always recommend having at least n+1 failure domains in any production cluster, where n is your replica size.


On Tue, Jul 18, 2017, 11:20 PM David Turner <drakonstein@xxxxxxxxx> wrote:

You do not need to empty the host before moving it in the crush map.  It will just cause data movement because you are removing an item under root and changing the crush weight of the rack.  There is no way I am aware of to really ease into this data movement other than to face it head-on and use osd_max_backfills to control disk I/O in your cluster.
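For reference, a rough sketch of the commands involved, assuming a reasonably recent Ceph release; the host and rack names here are placeholders, not taken from the thread:

```shell
# Throttle backfill before the move (1 is the most conservative value);
# injectargs applies at runtime to all OSDs. Raise it again afterwards.
ceph tell 'osd.*' injectargs '--osd-max-backfills 1'

# Move a host bucket directly under a rack bucket in the live crush map.
# "node01" and "rack1" are placeholder names. This triggers backfill as
# PGs remap to satisfy the new topology.
ceph osd crush move node01 rack=rack1

# Watch the resulting data movement and per-bucket weights.
ceph -s
ceph osd df tree
```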

Are you changing your failure domain to rack from host after this is done? Changing that in the crush map will cause everything to peer at once and then instigate a lot of data movement. You can do both moving the hosts into their racks and changing the failure domain in the same update, to only move data once. You would do that by downloading the crush map, modifying it, and then uploading it back into the cluster. It would be smart to test this on a test cluster. You could even do it on a 3 node cluster by changing each node to its own rack and setting the failure domain to rack.
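The download/modify/upload workflow described above can be sketched like this (filenames are arbitrary, and the rule edit mentioned in the comments is illustrative):

```shell
# Grab the current crush map and decompile it to editable text.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt by hand: move the host entries under their rack
# buckets, and change the replicated rule's
#   step chooseleaf firstn 0 type host
# to "... type rack" so both changes land in a single update.

# Recompile and inject the new map in one step.
crushtool -c crushmap.txt -o crushmap.new
crushtool -i crushmap.new --test --show-statistics   # optional sanity check
ceph osd setcrushmap -i crushmap.new
```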


On Tue, Jul 18, 2017, 7:06 PM Mike Cave <mcave@xxxxxxx> wrote:

Greetings,

 

I’m trying to figure out the best way to move our hosts from the root=default bucket into their rack buckets.

 

Our crush map has the notion of three racks which will hold all of our osd nodes.

 

As we have added new nodes, we have assigned them to their correct rack location in the map. However, when the cluster was first conceived, the majority of the nodes were left in the default bucket.

 

Now I would like to move them into their correct rack buckets.

 

I have a feeling I know the answer to this question, but I thought I’d ask and hopefully be pleasantly surprised.

 

Can I move a host from the root bucket into the correct rack without draining it and then refilling it, or do I need to reweight the host to 0, move it to the correct bucket, and then reweight it back to its correct value?

 

Any insights here will be appreciated.

 

Thank you for your time,

Mike Cave

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
