Re: removing osd, reweight 0, backfilling done, after purge, again backfilling.

On Fri, Feb 25, 2022 at 01:05:12PM +0100, Janne Johansson wrote:
> On Fri, Feb 25, 2022 at 13:00, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
> > My cluster is in a clean state, and the OSDs I am going to remove have a reweight of 0. Yet after executing 'ceph osd purge 19', remapping+backfilling starts again?
> >
> > Is this indeed the correct procedure, or is this old?
> > https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/#removing-osds-manual
>
> When you either 1) purge an OSD or 2) `ceph osd crush reweight` it to
> 0.0, you change the total weight of the OSD host. If you instead `ceph
> osd reweight` an OSD, it pushes its PGs to other OSDs on the same host
> and empties itself, but that host then carries more PGs than its
> weight says it should. When you later do one of the two steps above,
> the host weight gets corrected and the extra PGs move to other OSD
> hosts. This also changes the total weight of the whole subtree, so
> other PGs might start moving as well, on hosts not directly involved,
> though this is less common.

That's exactly right.
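
For reference, you can see both kinds of weight side by side with `ceph osd
tree`: the WEIGHT column is the CRUSH weight, which rolls up into the host
and root totals, while REWEIGHT is the per-OSD override that only shifts PGs
within the same host. Abridged output (host name and sizes illustrative):

```
$ ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         43.88879  root default
-3         43.88879      host ceph01
19    hdd  10.97220          osd.19       up   1.00000  1.00000
```

For this reason, when we need to remove an OSD, we don't use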

```
$ ceph osd reweight 19 0
```

we use

```
$ ceph osd crush reweight osd.19 0
```

instead. This way, there's no rebalancing when you run `ceph osd purge osd.19`.
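
For the record, the full removal sequence then looks roughly like this. It's
a sketch assuming a systemd-managed OSD; adapt the stop step for cephadm or
container deployments:

```
$ ceph osd crush reweight osd.19 0          # drain: host/root weights drop now
$ ceph -s                                   # wait until backfill finishes
$ ceph osd out 19
$ systemctl stop ceph-osd@19                # run on the host carrying osd.19
$ ceph osd purge 19 --yes-i-really-mean-it  # removal; no further data movement
```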

The only drawback is if you want to put the OSD back into the cluster without
purging it. With `ceph osd reweight 19 0`, you can revert with

```
$ ceph osd reweight 19 1
```

But with `ceph osd crush reweight osd.19 0`, you need to know what the
absolute value of the weight was before you changed it, for instance

```
$ ceph osd crush reweight osd.19 10.97220
```
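
So it's worth recording the current value before zeroing it, for example:

```
$ ceph osd tree | grep -w 'osd\.19'   # note the WEIGHT column first
```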

I don't think there's a way to tell Ceph to recompute the value of the weight
based on the size of the disk.
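
That said, the default CRUSH weight assigned at creation time is simply the
device capacity in TiB (unless osd_crush_initial_weight was set), so you can
reconstruct it by hand. A rough sketch, assuming /dev/sdd is the data device
(hypothetical path):

```
# Capacity in bytes divided by 2^40 yields the default CRUSH weight in TiB.
$ echo "scale=5; $(blockdev --getsize64 /dev/sdd) / 1099511627776" | bc
```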

Cheers,

--
Ben

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


