No issue at all, this is the advice I was looking for :-) Seems that 'norebalance' will do the trick. Thanks!

/Z

On Tue, Nov 2, 2021 at 11:24 AM Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:

> What's the issue with adding all OSDs with noout and norebalance set, and once all of them are up, unsetting norebalance?
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---------------------------------------------------
> Agoda Services Co., Ltd.
> e: istvan.szabo@xxxxxxxxx
> ---------------------------------------------------
>
> -----Original Message-----
> From: Etienne Menguy <etienne.menguy@xxxxxxxx>
> Sent: Tuesday, November 2, 2021 3:17 PM
> To: Zakhar Kirpichenko <zakhar@xxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxx>
> Subject: Re: Best way to add multiple nodes to a cluster?
>
> Hi,
>
> I see two ways:
>
> Add your OSDs with weight 0 and slowly increase their weight, or add the OSDs one by one. It's easy but "stupid", as some PGs will move several times.
>
> Check https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/OKCWC5KNQF2FD3V4WI2IGMQBGOYY2LL2/
>
> You can 'link' the current PGs to their OSDs with upmap, add the new OSDs, then slowly remove the upmap entries so the PGs move to their final place. You must be able to use upmap (it requires at least Luminous for both clients and the cluster), and it will move each PG only once. It's a bit more complex than just changing OSD weights.
>
> You can also change some settings to adjust the rebalance speed.
>
> -
> Etienne Menguy
> etienne.menguy@xxxxxxxx
>
> > On 2 Nov 2021, at 07:20, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> >
> > Hi!
> >
> > I have a 3-node 16.2.6 cluster with 33 OSDs and plan to add another 3 nodes of the same configuration to it. What is the best way to add the new nodes and OSDs so that I can avoid a massive rebalance and performance hit until all new nodes and OSDs are in place and operational?
> >
> > I would very much appreciate any advice.
> >
> > Best regards,
> > Zakhar
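
For reference, a rough command-line sketch of each suggestion above. These are untested outlines, not a procedure: hostnames, device paths, OSD ids, and PG ids are placeholders, and the deploy command assumes a cephadm-managed Pacific (16.x) cluster like the one described.

The flag-based approach Istvan suggests simply pauses data movement until every new OSD is up, so the rebalance happens in a single pass:

    # noout keeps OSDs from being marked out if they flap while you work;
    # norebalance pauses data migration:
    ceph osd set noout
    ceph osd set norebalance

    # Add the new OSDs with your usual tooling, e.g. with cephadm
    # ("node4" and "/dev/sdb" are placeholders):
    ceph orch daemon add osd node4:/dev/sdb

    # Once every new OSD shows as "up" in `ceph osd tree`, let the
    # rebalance run:
    ceph osd unset norebalance
    ceph osd unset noout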
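Etienne's first option adds the OSDs with zero CRUSH weight and raises it gradually. A sketch, where osd.33 and the weights are examples and the final weight is normally the device capacity in TiB:

    # Make new OSDs start with CRUSH weight 0 so no PGs map to them:
    ceph config set osd osd_crush_initial_weight 0

    # ...deploy the new OSDs...

    # Raise each OSD's weight in small steps, letting the cluster
    # settle (HEALTH_OK) between steps:
    ceph osd crush reweight osd.33 0.5
    ceph osd crush reweight osd.33 1.0
    # ...continue until the weight matches the device capacity in TiB.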
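Etienne's second option uses upmap: after the new OSDs are added, each remapped PG is pinned back to the OSDs it currently occupies, and the pins are then removed a few at a time so every PG moves exactly once. Tools such as CERN's upmap-remapped.py script automate the pinning; by hand it looks roughly like this (PG 2.7, osd.35, and osd.4 are made-up ids):

    # upmap needs Luminous-or-newer clients cluster-wide:
    ceph osd set-require-min-compat-client luminous

    # Pin PG 2.7 back to osd.4, which CRUSH now wants to replace
    # with the new osd.35 (pairs are <from-osd> <to-osd>):
    ceph osd pg-upmap-items 2.7 35 4

    # Later, drop the pins in small batches; each removal moves
    # that one PG to its final location:
    ceph osd rm-pg-upmap-items 2.7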
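As for the rebalance speed settings Etienne mentions, backfill throughput is mostly governed by osd_max_backfills and the recovery sleep options. The values below are illustrative:

    # Allow more concurrent backfills per OSD (default is 1):
    ceph config set osd osd_max_backfills 2

    # Remove the throttle between recovery ops on HDDs (default 0.1s):
    ceph config set osd osd_recovery_sleep_hdd 0

    # Revert to defaults once the cluster has settled:
    ceph config rm osd osd_max_backfills
    ceph config rm osd osd_recovery_sleep_hdd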