Re: Adding additional disks to the production cluster without performance impacts on the existing


 



Hi MJ,

Here are the links to the script and config file. Modify the config file as you wish; its values can be changed while the script is running. The script can be run from any monitor or data node. We tested it and it works in our cluster, but please test it in your lab before using it in production.

Script Name: osd_crush_reweight.py
Config File Name: rebalance_config.ini

Script: https://jpst.it/1gwrk

Config File: https://jpst.it/1gwsh
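The core idea the script implements (see the paste links above for the real thing) can be sketched roughly as below. This is a minimal illustration, not the actual osd_crush_reweight.py: the `ceph osd crush reweight` and `ceph health` commands are real CLI calls, but the step size, settle interval, and helper names are assumptions of mine.

```python
import subprocess
import time

def reweight_steps(current, target, step=0.1):
    """Return the intermediate CRUSH weights from current up to target."""
    steps = []
    w = current
    while w + step < target:
        w = round(w + step, 4)
        steps.append(w)
    steps.append(round(target, 4))
    return steps

def cluster_healthy():
    """True when `ceph health` reports HEALTH_OK."""
    out = subprocess.check_output(["ceph", "health"]).decode()
    return out.startswith("HEALTH_OK")

def ramp_up(osd_id, target_weight, step=0.1, settle=60):
    """Raise osd.<id>'s CRUSH weight in small increments,
    waiting for backfill/recovery to finish between steps."""
    for w in reweight_steps(0.0, target_weight, step):
        subprocess.check_call(
            ["ceph", "osd", "crush", "reweight", f"osd.{osd_id}", str(w)])
        while not cluster_healthy():
            time.sleep(settle)
```

Ramping in small increments like this keeps the amount of data moving at any one time small, so client I/O on the existing OSDs is not swamped by backfill traffic.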

--Pardhiv Karri


On Fri, Jun 8, 2018 at 12:20 AM, mj <lists@xxxxxxxxxxxxx> wrote:
Hi Pardhiv,

On 06/08/2018 05:07 AM, Pardhiv Karri wrote:
We recently added a lot of nodes to our Ceph clusters. To mitigate a lot of problems (we are using the tree algorithm), we first added an empty node to the CRUSH map, then added OSDs with zero weight, made sure the Ceph health was OK, and then started ramping up each OSD. I created a script to do it dynamically, which will check CPU of the new host with OSDs that

Would you mind sharing this script..?

MJ

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Pardhiv Karri
"Rise and Rise again until LAMBS become LIONS" 


