Hi Kevin,

I don't know about those flags, but if you want to shrink your cluster you can simply set the weight of the OSDs to be removed to 0, like so:

    ceph osd reweight osd.X 0

You can either do it gradually if you are concerned about client I/O (probably not, since you speak of a test / semi-prod cluster) or all at once. This should take care of all the data movement.

Once the cluster is back to HEALTH_OK, you can then proceed with the standard OSD removal procedure:
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual

You should be able to delete all the OSDs in a short period of time, since the data movement has already been taken care of by the reweight. I've appended a rough command sketch below Kevin's quoted message.

I hope that helps.

Cheers,
Maxime

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Kevin Olbrich <ko at sv01.de>
Date: Wednesday 8 March 2017 14:39
To: "ceph-users at lists.ceph.com" <ceph-users at lists.ceph.com>
Subject: Shrinking lab cluster to free hardware for a new deployment

Hi!

Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each). We want to shut down the cluster, but it holds some semi-productive VMs we might or might not need in the future. To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we use size 2 and min_size 1).

Should I set the OSDs out one by one, or all at once with the norefill and norecovery flags set? If the latter is the case, which other flags should also be set?

Thanks!

Kind regards,
Kevin Olbrich.
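
P.S. Here is a rough sketch of the full sequence for one OSD. It assumes osd.5 is one of the OSDs being removed and a systemd-based deployment; substitute your own OSD ids and hostnames, and repeat per OSD (or drain several in parallel if you don't care about client I/O):

    # Drain the OSD by setting its reweight to 0.
    # To do it gradually instead, step the weight down (e.g. 0.5, then 0).
    ceph osd reweight 5 0

    # Watch recovery and wait until the cluster is back to HEALTH_OK.
    ceph -w
    ceph health detail

    # Once data movement is finished, remove the OSD per the docs linked above:
    ceph osd out 5
    # Stop the daemon on the host that carries osd.5, e.g.:
    #   systemctl stop ceph-osd@5
    ceph osd crush remove osd.5
    ceph auth del osd.5
    ceph osd rm 5

Because the reweight already emptied the OSD, the crush remove / rm steps should trigger little to no additional data movement.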