The reason is that an OSD still contributes to the host's weight in the CRUSH map even while it is marked out. When you mark it out and then purge, the purge removes the OSD from the map, which changes the host's weight; that changes the CRUSH map and data moves. By reweighting the OSD to 0.0 first, the host's weight is already what it will be after the purge, so the purge itself causes no further movement. Reweighting to 0.0 is definitely the best option for removing storage, as long as you can trust the data on the OSD being removed.
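As a toy illustration of that point (plain shell arithmetic, not actual CRUSH code), treat the host's weight as the sum of its OSDs' crush weights: marking an OSD out does not change its crush weight, so only the reweight-to-0 (or the purge) shrinks the host's weight, and if you reweight first the purge changes nothing:

```shell
#!/bin/sh
# Toy model: a host's CRUSH weight is the sum of its OSDs' crush weights.
# (Weights scaled x10 so the arithmetic stays integer.)
osd0=10; osd1=10; osd2=10
before=$((osd0 + osd1 + osd2))
echo "host weight before:      $before"

# "ceph osd crush reweight osd.2 0" -> data moves off osd.2 now.
osd2=0
after_reweight=$((osd0 + osd1 + osd2))
echo "host weight after reweight 0: $after_reweight"

# Purge removes osd.2 entirely -> same sum, so no second data movement.
after_purge=$((osd0 + osd1))
echo "host weight after purge:      $after_purge"
```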
On Tue, Feb 26, 2019, 3:19 AM Fyodor Ustinov <ufm@xxxxxx> wrote:
Hi!
Thank you so much!
I do not understand why, but your variant really does cause only one rebalance, compared to "osd out".
----- Original Message -----
From: "Scottix" <scottix@xxxxxxxxx>
To: "Fyodor Ustinov" <ufm@xxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Wednesday, 30 January, 2019 20:31:32
Subject: Re: Right way to delete OSD from cluster?
I have generally gone the crush reweight 0 route.
This way the drive can participate in the rebalance, and the rebalance
only happens once. Then you can take it out and purge.
If I am not mistaken, this is the safest approach.
ceph osd crush reweight <id> 0
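Putting that whole route together, the sequence might look roughly like this (a sketch: the OSD id, the systemd unit name, and the `--yes-i-really-mean-it` flag are assumptions about a typical Luminous-or-later deployment, not taken from this thread):

```shell
ID=12   # hypothetical id of the OSD being removed

# 1. Drain it: crush reweight to 0 so data migrates off while it still serves I/O.
ceph osd crush reweight osd.$ID 0

# 2. Wait until the cluster reports all PGs active+clean again, e.g. by watching:
ceph -s

# 3. Mark it out and stop the daemon; the host's CRUSH weight no longer changes.
ceph osd out $ID
systemctl stop ceph-osd@$ID

# 4. Purge removes the OSD from the CRUSH map, its auth key, and the OSD map.
#    Because its crush weight was already 0, this triggers no second rebalance.
ceph osd purge $ID --yes-i-really-mean-it
```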
On Wed, Jan 30, 2019 at 7:45 AM Fyodor Ustinov <ufm@xxxxxx> wrote:
>
> Hi!
>
> But won't I end up with undersized objects after "ceph osd crush remove"? That is, isn't that the same as simply turning the OSD off and waiting for the cluster to recover?
>
> ----- Original Message -----
> From: "Wido den Hollander" <wido@xxxxxxxx>
> To: "Fyodor Ustinov" <ufm@xxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Wednesday, 30 January, 2019 15:05:35
> Subject: Re: Right way to delete OSD from cluster?
>
> On 1/30/19 2:00 PM, Fyodor Ustinov wrote:
> > Hi!
> >
> > I thought I should first do "ceph osd out", wait for the relocation of the misplaced objects to finish, and after that do "ceph osd purge".
> > But after "purge" the cluster starts relocating data again.
> >
> > Maybe I'm doing something wrong? Then what is the correct way to delete the OSD from the cluster?
> >
>
> You are not doing anything wrong; this is the expected behavior. There
> are two CRUSH changes:
>
> - Marking it out
> - Purging it
>
> You could do:
>
> $ ceph osd crush remove osd.X
>
> Wait until the cluster is healthy again
>
> $ ceph osd purge X
>
> The last step should then not initiate any data movement.
>
> Wido
>
> > WBR,
> > Fyodor.
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
--
T: @Thaumion
IG: Thaumion
Scottix@xxxxxxxxx