There's unfortunately a difference between an OSD with weight 0 and removing that item (OSD) from the CRUSH bucket :( Marking an OSD out only sets its override reweight to 0; its CRUSH weight still counts towards the host bucket. Removing the item changes the bucket's weight and composition, so CRUSH computes new placements and the cluster rebalances again.

If you want to remove the whole cluster completely anyway: either keep the OSDs as down+out in the CRUSH map, i.e. just skip the removal steps at the end, or just purge each OSD without setting it out first.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, 3 Dec 2018 at 16:43, <sinan@xxxxxxxx> wrote:
>
> Hi,
>
> Currently I am decommissioning an old cluster.
>
> For example, I want to remove OSD server X with all its OSDs.
>
> I am following these steps for each OSD on server X:
> - ceph osd out <osd>
> - Wait for the rebalance to finish (all PGs active+clean)
> - On the OSD host: service ceph stop osd.<osd>
>
> Once the steps above are done, the following steps should be performed:
> - ceph osd crush remove osd.<osd>
> - ceph auth del osd.<osd>
> - ceph osd rm <osd>
>
> What I don't get is: when I run 'ceph osd out <osd>' the cluster
> rebalances, but when I then run 'ceph osd crush remove osd.<osd>' it
> starts to rebalance again. Why does this happen? The cluster should
> already be balanced after the OSD was marked out; I didn't expect
> another rebalance from removing the OSD from the CRUSH map.
>
> Thanks!
>
> Sinan Polat
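
A minimal sketch of the purge shortcut Paul mentions above, assuming a Luminous or later cluster, a systemd deployment, and that extra data movement does not matter because the whole cluster is going away; the OSD id 12 is a placeholder:

    # Stop the daemon without marking the OSD out first.
    systemctl stop ceph-osd@12

    # purge removes the OSD from the CRUSH map, deletes its auth key and
    # removes the OSD id in one step, i.e. it replaces the separate
    # 'ceph osd crush remove', 'ceph auth del' and 'ceph osd rm' calls.
    ceph osd purge 12 --yes-i-really-mean-it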
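
If the cluster has to stay in service while hosts are drained one by one, a commonly suggested variant (not from this thread, so treat it as an assumption) is to zero the CRUSH weight before removal, so the host bucket's weight already reflects the removal and the later 'ceph osd crush remove' should not trigger a second rebalance; again, OSD id 12 is a placeholder:

    # Drain the OSD by setting its CRUSH weight to 0. Unlike 'ceph osd
    # out', this lowers the weight of the containing host bucket.
    ceph osd crush reweight osd.12 0

    # Wait until all PGs report active+clean, e.g. by watching:
    ceph -s

    # Stop the daemon, then remove the now zero-weight item; with straw2
    # buckets this should not move any further data.
    systemctl stop ceph-osd@12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12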