Hello,

I just (manually) added 1 OSD each to my 2 cache-tier nodes.

The plan was/is to do the actual data migration on the least busy day in Japan, New Year's (the actual holiday is January 2nd this year). So I was going to have everything up and in, but at weight 0 initially.

Alas, at the "ceph osd crush add osd.x0 0 host=ceph-0x" steps, Ceph happily started to juggle a few PGs (about 7 in total) around, even though, of course, no weight in the cluster changed at all.

No harm done (this is the fast and not too busy cache tier, after all), but it was very much unexpected.

So which part of the CRUSH algorithm goes around and pulls weights out of thin air?

Christian

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
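
For reference, a minimal sketch of the add-at-weight-0 workflow described in the post. The OSD IDs, hostnames, and target weight below are hypothetical (the post only shows the placeholders osd.x0 and ceph-0x); the later reweight step is an assumption about how the planned migration would be triggered, using standard Ceph CLI commands.

    # Hypothetical OSD IDs and hostnames, standing in for the post's osd.x0 / ceph-0x.
    # Add the new OSDs to the CRUSH map with an initial weight of 0,
    # so they are up and in but should receive no data yet:
    ceph osd crush add osd.20 0 host=ceph-01
    ceph osd crush add osd.21 0 host=ceph-02

    # Verify their placement and weights in the CRUSH hierarchy:
    ceph osd tree

    # Watch for any unexpected PG movement (as observed in the post):
    ceph -w

    # Later, on the quiet day, raise the weights to trigger the actual
    # data migration (target weight is hypothetical, e.g. 1.0 for a 1 TB drive):
    ceph osd crush reweight osd.20 1.0
    ceph osd crush reweight osd.21 1.0

The intent of the weight-0 step is that CRUSH should assign no PGs to a zero-weight item, which is why the roughly 7 PGs moving at the add step was unexpected.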