Hi, I’m slightly confused about one thing we are observing at the moment. We’re testing the shutdown/removal of OSD servers and noticed twice as much backfilling as expected. This is what we did:

1. service ceph stop on some OSD servers.
2. ceph osd out for the above OSDs (to avoid waiting for the down-to-out timeout). At this point backfilling begins, and it finishes successfully after some time.
3. ceph osd rm for all of the above OSDs (this leaves the OSDs in the CRUSH map, marked DNE).
4. ceph osd crush rm for each of the above OSDs.

Step 4 triggers another rebalance, even though those OSDs no longer hold any data and all PGs were previously healthy.

Is this expected? Is there a way to avoid the second rebalance?

Best Regards,
Dan van der Ster
CERN IT

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com