Greetings,
You need to set the following option in the [osd] section of your ceph.conf on the new node, before the new OSDs are created:
[osd]
osd_crush_initial_weight = 0
This will ensure your new OSDs come up with a CRUSH weight of 0, preventing the automatic rebalance you are seeing. What 'noin' controls (the in/out state, and the 0-1 reweight column) is separate from the CRUSH weight: by default a new OSD is added to the CRUSH map with a weight derived from its disk size, and it is that change to the CRUSH map that triggers data movement even though the OSD is not 'in'.
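Once the new OSDs are up you can bring them in gradually with 'ceph osd crush reweight', and lower the old ones the same way. A rough sketch of the idea (the OSD IDs and target weights below are just examples, adjust them to your cluster):

    # confirm the new OSDs show a CRUSH weight of 0
    ceph osd tree

    # ramp a new OSD up in steps, letting the cluster settle between steps
    ceph osd crush reweight osd.12 0.5
    ceph osd crush reweight osd.12 1.0
    # final value is typically the disk size in TiB (what Ceph would have set automatically)
    ceph osd crush reweight osd.12 1.82

    # and lower an old OSD the same way
    ceph osd crush reweight osd.3 1.0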
Good luck,
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Marco Gaiarin <gaio@xxxxxxxxx>
Sent: Thursday, November 22, 2018 3:22 AM
To: ceph-users@xxxxxxxx
Subject: New OSD with weight 0, rebalance still happen...

Ceph still surprises me: just when I'm sure I've fully understood it, something 'strange' (to my knowledge) happens.

I need to move a server out of my Ceph Hammer cluster (3 nodes, 4 OSDs per node), and for some reasons I cannot simply move the disks. So I've added a new node, and yesterday I set up the 4 new OSDs.

My plan was to add the 4 OSDs with weight 0, then slowly lower the weight of the old OSDs and increase the weight of the new ones. Beforehand I ran:

    ceph osd set noin

and then added the OSDs, and (as expected) the new OSDs started with weight 0.

But despite the weight being zero, a rebalance happened, and the percentage of data rebalanced is roughly 'weighted' to the size of the new disk (e.g. I had about 18TB of space, I added a 2TB disk and roughly 10% of the data started to rebalance).

Why? Thanks.

--
dott. Marco Gaiarin                         GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia''         http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797