Rebalance after draining - why?

Good evening, dear fellow Ceph'ers,

when removing OSDs from a cluster, we sometimes use

    ceph osd reweight osd.XX 0

and wait until the OSD's content has been redistributed. However, when
we then finally stop and remove the OSD, Ceph rebalances a second time.
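
For concreteness, the full sequence looks roughly like this (osd.XX is
a placeholder; "purge" exists since Luminous, on older releases one
would do "crush remove" / "auth del" / "osd rm" instead):

    # Step 1: drain -- set the reweight override to 0 and wait
    # until the data has moved off the OSD.
    ceph osd reweight osd.XX 0
    # ... wait until all PGs are active+clean ...

    # Step 2: remove -- and here the second rebalance kicks in.
    ceph osd out XX
    systemctl stop ceph-osd@XX        # on the OSD host
    ceph osd purge XX --yes-i-really-mean-it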

I assume this is because removing the OSD also removes its position
from the CRUSH map, so CRUSH computes new placements and the data's
current location becomes "wrong". (Am I wrong about that?)

I wonder: is there a way to properly tell Ceph that a particular OSD
is planned to leave the cluster, so that the data moves to its
"correct" new position right away instead of doing the rebalance dance
twice?

Best regards,

Nico

--
Sustainable and modern Infrastructures by ungleich.ch