Re: removing osd, reweight 0, backfilling done, after purge, again backfilling.


 



> > I have a clean cluster state, with a reweight of 0 on the OSDs I am
> going to remove. And then after executing 'ceph osd purge 19', I have
> remapping+backfilling again?
> >
> > Is this indeed the correct procedure, or is this old?
> > https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/#removing-osds-manual
> 
> When you either 1) purge an OSD, or 2) ceph osd crush reweight it to
> 0.0, you change the total weight of the OSD host. By contrast, if you
> ceph osd reweight an OSD, it pushes its PGs to other OSDs on the same
> host and empties itself, but that host then holds more PGs than it
> really should. When you do one of the two steps above, the host weight
> is corrected and the extra PGs move to other OSD hosts. This also
> affects the total weight of the whole subtree, so other PGs on hosts
> not directly involved might start moving as well, though this is less
> common.
>

You are right, I did not read my own manual correctly: I applied the reweight and not the crush reweight.
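For anyone finding this thread later, a sketch of the drain-and-remove flow described above, using OSD id 19 from the thread as the example. This assumes a recent Ceph release where `ceph osd purge` is available; adjust the unit name to your deployment.

```shell
# Drain the OSD across hosts: lower the CRUSH weight so the host's
# total weight shrinks and PGs move to other hosts.
# (Plain 'ceph osd reweight 19 0' would only shuffle PGs to sibling
# OSDs on the same host, which is the mistake discussed above.)
ceph osd crush reweight osd.19 0

# Wait until backfilling finishes and all PGs are active+clean.
ceph -s

# Then mark it out, stop the daemon (on the OSD's host), and purge.
ceph osd out 19
systemctl stop ceph-osd@19
ceph osd purge 19 --yes-i-really-mean-it
```

Because the CRUSH weight was already 0 before the purge, removing the OSD should not trigger another round of remapping+backfilling.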

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


