Re: osd crush reweight 0 on "out" OSD causes backfilling?

An out OSD still has a CRUSH weight. Removing that OSD or weighting it to 0 changes the weight of the host it's in, and that is why data moves again. There is a thread on the ML, started by Sage, about possible ways to address the double data shift when drives fail: data moves once when the OSD goes out and again when it is removed from the cluster.
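
To make the distinction concrete (osd.12 below is just a placeholder ID), a rough sketch of the two different weights:

    # "out" only zeroes the OSD's override reweight; its CRUSH weight,
    # and therefore the host bucket weight, stays the same:
    ceph osd out 12
    ceph osd tree

    # Zeroing the CRUSH weight (or removing the OSD) shrinks the host
    # bucket, which is what triggers the second round of data movement:
    ceph osd crush reweight osd.12 0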

If the drive is still readable when it gets marked out, the best method is to weight it to 0 while it is still running so it can be used to offload its own data. With this approach, there is also no additional data movement when you later remove it from the cluster.
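
A rough sketch of that sequence, again assuming osd.12 is the failing drive and the daemon is still running:

    # Drain the OSD while it is still up and in, so it can serve its
    # own data during backfill:
    ceph osd crush reweight osd.12 0
    # ...wait for backfill to finish (all PGs active+clean)...

    # Then take it out and remove it; since its CRUSH weight is already 0,
    # this should not move data a second time:
    ceph osd out 12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12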


On Tue, Feb 13, 2018, 6:55 AM Christian Sarrasin <c.nntp@xxxxxxxxxxxxxxxxxx> wrote:
Thanks for your input John!  This doesn't really match the doc [1],
which suggests just taking them out and only using "reweight" in case of
issues (with small clusters).

Is "reweight" considered a must before removing and OSD?

Cheers

On 13/02/18 12:34, John Petrini wrote:
> The rule of thumb is to reweight to 0 prior to marking out. This should
> avoid causing data movement twice, as you're experiencing.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
