Hello,

On Tue, 3 Jan 2017 16:52:16 -0800 Gregory Farnum wrote:
> On Wed, Dec 21, 2016 at 2:33 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
> >
> >> On 21 December 2016 at 2:39, Christian Balzer <chibi@xxxxxxx> wrote:
> >>
> >> Hello,
> >>
> >> I just (manually) added 1 OSD each to my 2 cache-tier nodes.
> >> The plan was/is to actually do the data migration on the least busy day
> >> in Japan, New Year's (the actual holiday is January 2nd this year).
> >>
> >> So I was going to have everything up and in, but at weight 0 initially.
> >>
> >> Alas, at the "ceph osd crush add osd.x0 0 host=ceph-0x" steps Ceph happily
> >> started to juggle a few PGs (about 7 total) around, despite, of course, no
> >> weight in the cluster changing at all.
> >> No harm done (this is the fast and not-too-busy cache tier, after all), but
> >> very much unexpected.
> >>
> >> So which part of the CRUSH algorithm goes around and pulls weights out of
> >> thin air?
> >>
> >
> > It didn't, but the CRUSH topology changed. A CRUSH dev may have a better and
> > more detailed explanation, but although the item has a weight of 0, it is
> > still an item to straw(2).
> >
> > When drawing straws it never gets selected, because of the weight of 0, but
> > it is still there.
> >
> > The same goes when you set the weight of the OSD to 0 and remove it from
> > CRUSH a few days later: that means you rebalance twice.
> >
> > In your case it would be best to add the items to CRUSH with the right
> > weight when you want them to start participating.
>
> I couldn't swear to it, but in general this sounds like one of the
> straw/straw2/whatever-number-we're-on things where the math wasn't
> quite right. I think it behaves properly now if you're running the
> newest everything of CRUSH on Kraken (or probably even Jewel?).
> -Greg
>
Good to know, even though I'm just pondering Jewel on my alpha-test cluster.
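To make Wido's point concrete, here is a toy sketch of a straw2-style draw (not Ceph's actual implementation; the hashing, OSD names, and PG count are made up for illustration). A weight-0 item still takes part in every draw, but its straw is -inf, so it can never win; with straw2's independent per-item straws, adding it should move nothing, which is the corrected behavior Greg alludes to:

```python
import hashlib
import math

def straw2_select(pg_id, items):
    """Pick one item for a PG with a simplified straw2-style draw.

    items: dict of name -> weight. Each item draws an independent
    straw; the longest straw wins. A weight-0 item draws -inf and
    can never win, but it still participates in the draw.
    """
    best, best_straw = None, float("-inf")
    for name, weight in items.items():
        # deterministic pseudo-random value in (0, 1) per (pg, item)
        h = hashlib.sha256(f"{pg_id}:{name}".encode()).digest()
        u = (int.from_bytes(h[:8], "big") + 1) / (2**64 + 1)
        # straw2-style straw: log(u) is negative, so a larger weight
        # pulls the straw toward 0 (i.e., makes it "longer")
        straw = math.log(u) / weight if weight > 0 else float("-inf")
        if straw > best_straw:
            best, best_straw = name, straw
    return best

items = {"osd.0": 1.0, "osd.1": 1.0}
before = [straw2_select(pg, items) for pg in range(1000)]

items["osd.2"] = 0.0  # new OSD added at weight 0, as in the thread
after = [straw2_select(pg, items) for pg in range(1000)]

# With independent straws, the weight-0 item changes no placements
# and is never chosen itself.
assert before == after
assert "osd.2" not in after
```

Because each item's straw depends only on its own weight and hash, adding or removing an item only moves PGs to or from that item; the older straw algorithm did not fully have this property, which is consistent with the handful of PGs moving here.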
^o^
Christian

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com