Re: Rebalance after draining - why?

Thanks a lot everyone for your answers!
That is very helpful.

For future reference, in case anyone finds this thread:

It's

       ceph osd crush reweight osd.XX 0

instead of

        ceph osd reweight osd.XX 0

for draining an OSD out of a cluster.
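The difference is that "ceph osd reweight" only sets the temporary override
weight, which is not part of the CRUSH map, so removing the OSD afterwards
still changes CRUSH placement and triggers a second rebalance. Setting the
CRUSH weight to 0 makes the data movement match the post-removal map, so
the rebalance happens only once. A rough sketch of the whole drain, assuming
a recent Ceph release with systemd-managed OSDs (unit name and exact flags
may differ on your setup):

```shell
ID=XX   # placeholder OSD id

# Take the OSD out of CRUSH placement; this triggers the (single) rebalance:
ceph osd crush reweight osd.$ID 0

# Wait until backfill/recovery is done, e.g. check that all PGs are
# active+clean:
ceph -s

# Then stop and remove the OSD without further data movement:
ceph osd out osd.$ID
systemctl stop ceph-osd@$ID
ceph osd purge osd.$ID --yes-i-really-mean-it
```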

Best regards and many greetings from the Swiss mountains,

Nico


Nico Schottelius <nico.schottelius@xxxxxxxxxxx> writes:

> Good evening dear fellow Ceph'ers,
>
> when removing OSDs from a cluster, we sometimes use
>
>     ceph osd reweight osd.XX 0
>
> and wait until the OSD's content has been redistributed. However, when
> we then finally stop and remove it, Ceph rebalances again.
>
> I assume this is because removing the OSD changes its position in the
> CRUSH map, so the logical placement of the data becomes "wrong".
> (Am I wrong about that?)
>
> I wonder, is there a way to tell ceph properly that a particular OSD is
> planned to leave the cluster and to remove the data to the "correct new
> position" instead of doing the rebalance dance twice?
>
> Best regards,
>
> Nico


--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
