Re: Using Ceph Ansible to Add Nodes to Cluster at Weight 0

Hello Mike,

There is no problem adding 100 OSDs at the same time if your cluster is configured correctly.
Just add the OSDs and let the cluster rebalance slowly (as fast as your hardware supports without service interruption).
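For context, a minimal sketch of throttling backfill and recovery before adding the OSDs so the rebalance cannot saturate the disks; the values below are illustrative assumptions, not recommendations:

  # Tighten recovery throttles cluster-wide (Mimic's centralized config store).
  # Values are placeholders; tune for your hardware.
  ceph config set osd osd_max_backfills 1          # concurrent backfills per OSD
  ceph config set osd osd_recovery_max_active 1    # concurrent recovery ops per OSD
  ceph config set osd osd_recovery_sleep_hdd 0.1   # pause (s) between recovery ops on HDDs

  # Watch progress while the cluster rebalances:
  ceph -s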

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Thu, 30 May 2019 at 02:00, Mike Cave <mcave@xxxxxxx> wrote:

Good afternoon,

 

I’m about to expand my cluster from 380 to 480 OSDs (5 nodes with 20 disks per node) and am trying to determine the best way to go about this task.

 

I deployed the cluster with ceph ansible and everything worked well. So I’d like to add the new nodes with ceph ansible as well.

 

The issue is that adding that many OSDs at once will likely cause massive data movement and disrupt the cluster if they come in fully weighted.

 

I was hoping to use ceph ansible to set the initial weight to zero, and then gently bring each OSD up to its correct weight.
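A sketch of one way to get that behaviour, assuming the osd_crush_initial_weight option (present in Mimic): with it set, new OSDs register in CRUSH at weight 0 and take no PGs until explicitly reweighted.

  # In ceph.conf (ceph-ansible can carry this via ceph_conf_overrides):
  #   [osd]
  #   osd_crush_initial_weight = 0
  #
  # Or set it in the config store before running the playbook:
  ceph config set osd osd_crush_initial_weight 0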

 

I will be doing this with a total of 380 OSDs over the next while. My plan is to bring in groups of six nodes (I have six racks and the CRUSH map is rack-redundant) until the additions are complete.
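A hypothetical ramp-up loop for one such batch (the OSD ids, target weight, and step sizes below are all placeholders):

  # Raise the CRUSH weight of the new OSDs in steps, letting backfill
  # settle between steps. Ids 380-479 and the 9.09530 target weight
  # (roughly a 10 TB disk) are placeholders.
  TARGET=9.09530
  for step in 0.25 0.50 0.75 1.00; do
      for id in $(seq 380 479); do
          ceph osd crush reweight osd.$id $(echo "$TARGET * $step" | bc -l)
      done
      # wait for backfill to finish before the next increment
      while ceph -s | grep -q backfill; do sleep 60; done
  done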

 

In dev I tried bringing in a node while the cluster had the 'norebalance' flag set, and there was still significant movement, with some stuck PGs and other oddities, until I reweighted and then unset 'norebalance'.
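For reference, a sketch of the flags involved; as far as I know, norebalance only pauses rebalancing of misplaced PGs, while peering and recovery of degraded PGs proceed, which would explain movement when OSDs come in at full weight:

  ceph osd set norebalance
  ceph osd set nobackfill      # optionally pause backfill as well
  # ... add nodes and reweight ...
  ceph osd unset nobackfill
  ceph osd unset norebalance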

 

I’d like as little friction for the cluster as possible, as it is in heavy use right now.

 

I’m running mimic (13.2.5) on CentOS.

 

Any suggestions on best practices for this?

 

Thank you for reading and for any help you might be able to provide. I’m happy to provide any details you might want.

 

Cheers,

Mike

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
