> On 21 December 2016 at 2:39, Christian Balzer <chibi@xxxxxxx> wrote:
>
>
> Hello,
>
> I just (manually) added 1 OSD each to my 2 cache-tier nodes.
> The plan was/is to actually do the data migration on the least busy day
> in Japan, New Year's (the actual holiday is January 2nd this year).
>
> So I was going to have everything up and in, but at weight 0 initially.
>
> Alas, at the "ceph osd crush add osd.x0 0 host=ceph-0x" steps Ceph happily
> started to juggle a few PGs (about 7 total) around, despite of course no
> weight in the cluster changing at all.
> No harm done (this is the fast and not too busy cache tier, after all), but
> very much unexpected.
>
> So which part of the CRUSH algorithm goes around and pulls weights out of
> thin air?
>

It didn't, but the CRUSH topology changed.

A CRUSH dev might have a better and more detailed explanation, but although the
item has a weight of 0, it is still an item for straw(2). When drawing straws it
never gets selected because of its weight of 0, but it is still there.

The same goes for setting the weight of an OSD to 0 and then removing it from
CRUSH a few days later: that means you rebalance twice.

In your case it would be best to add the items to CRUSH with the right weight
at the moment you want them to start participating.

Wido

> Christian
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
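
For what it's worth, a minimal sketch of the difference Wido describes (the OSD
id, weight and host name below are placeholders, not taken from Christian's
cluster):

  # Two-step approach: the weight-0 add already changes the CRUSH topology
  # and moves a handful of PGs; the later reweight then triggers the real
  # data migration, so PGs move on two separate occasions.
  ceph osd crush add osd.20 0 host=ceph-01
  ceph osd crush reweight osd.20 1.81920

  # One-step approach: add the OSD with its final weight at the chosen time,
  # so there is only a single rebalance.
  ceph osd crush add osd.20 1.81920 host=ceph-01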