I have generally gone the "crush reweight 0" route. This way the drive can still participate in the rebalance, and the rebalance only happens once. Then you can take the OSD out and purge it. If I am not mistaken, this is the safest approach.

ceph osd crush reweight <id> 0

On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov <ufm@xxxxxx> wrote:
>
> Hi!
>
> But won't I end up with undersized objects after "ceph osd crush remove"?
> That is, isn't that the same as simply turning off the OSD and waiting
> for the cluster to recover?
>
> ----- Original Message -----
> From: "Wido den Hollander" <wido@xxxxxxxx>
> To: "Fyodor Ustinov" <ufm@xxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Wednesday, 30 January, 2019 15:05:35
> Subject: Re: Right way to delete OSD from cluster?
>
> On 1/30/19 2:00 PM, Fyodor Ustinov wrote:
> > Hi!
> >
> > I thought I should first do "ceph osd out", wait for the relocation of
> > the misplaced objects to finish, and after that do "ceph osd purge".
> > But after "purge" the cluster starts relocating data again.
> >
> > Maybe I'm doing something wrong? If so, what is the correct way to
> > delete an OSD from the cluster?
> >
>
> You are not doing anything wrong; this is the expected behavior. There
> are two CRUSH changes:
>
> - Marking it out
> - Purging it
>
> You could do:
>
> $ ceph osd crush remove osd.X
>
> Wait for the cluster to become healthy again, then:
>
> $ ceph osd purge X
>
> The last step should then not initiate any data movement.
>
> Wido
>
> > WBR,
> >     Fyodor.

--
T: @Thaumion
IG: Thaumion
Scottix@xxxxxxxxx

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
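
P.S. The "reweight first, purge last" sequence recommended at the top of the thread could be sketched as the script below, for review before running it against a live cluster. This is an illustrative sketch, not from the thread verbatim: the OSD id (12), the DRYRUN guard, and the `run` helper are my additions, and the script assumes the `--yes-i-really-mean-it` confirmation flag that `ceph osd purge` asks for.

```shell
#!/bin/sh
# Illustrative sketch: drain and remove an OSD with a single rebalance.
# The OSD id, the DRYRUN guard, and run() are hypothetical additions.
OSD_ID="${1:-12}"

run() {
    # With DRYRUN=1 (the default here) the commands are only printed,
    # so the sequence can be reviewed before touching a real cluster.
    if [ "${DRYRUN:-1}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

# Drop the CRUSH weight to 0 -- this starts the one and only rebalance,
# while the OSD can still serve data during the drain.
run ceph osd crush reweight "osd.${OSD_ID}" 0

# ...wait here until 'ceph -s' shows all PGs active+clean...

# Now mark it out and purge; neither step should move data again.
run ceph osd out "${OSD_ID}"
run ceph osd purge "${OSD_ID}" --yes-i-really-mean-it
```

Running it with DRYRUN unset just prints the three ceph commands; set DRYRUN=0 only once the sequence looks right.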