Hi Dan,

I'm using Emperor (0.72). Though I would think CRUSH maps haven't changed that much between versions?

> That sounds bizarre to me, and I can't reproduce it. I added an osd (which
> was previously not in the crush map) to a fake host=test:
>
> ceph osd crush create-or-move osd.52 1.0 rack=RJ45 host=test

I have a flatter failure domain with only servers/drives. It looks like you would have at least rack/server/drive. Would that make the difference? (A flat variant of your repro is sketched below.)

> As far as I've experienced, an entry in the crush map with a _crush_ weight
> of zero is equivalent to that entry not being in the map. (In fact, I use
> this to drain OSDs ... I just ceph osd crush reweight osd.X 0, then
> sometime later I crush rm the osd, without incurring any secondary data
> movement).

Is the crush weight the second column of ceph osd tree? I'll have to pay attention to that next time I drain a node.

Thanks for investigating!
Chad.
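
P.S. To make the hierarchy question above concrete: the flat equivalent of Dan's repro, assuming a map with only host and osd levels, would be something like

    ceph osd crush create-or-move osd.52 1.0 host=test

i.e. the same command minus the rack bucket; whether dropping that level changes the behavior is exactly the open question.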
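
And a minimal sketch of the drain procedure Dan describes (osd.12 stands in for a real id here):

    # crush weight 0 => CRUSH stops placing data on osd.12, and it drains
    ceph osd crush reweight osd.12 0
    # ... wait for backfill/recovery to finish (watch ceph -s) ...
    # removing the now-zero-weight entry causes no further data movement
    ceph osd crush rm osd.12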
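
As for the columns: in ceph osd tree output along these lines (ids, weights, and hostnames made up),

    # id    weight  type name       up/down reweight
    -1      2       root default
    -2      2               host ceph01
    0       1                       osd.0   up      1
    1       1                       osd.1   up      1

the second column ("weight") is the crush weight, while the last column ("reweight") is the separate 0-1 osd reweight.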