On 16-01-11 04:10, Rafael Lopez wrote:
Thanks for the replies, guys.
@Steve, even when you remove an OSD because it is failing,
have you noticed that the cluster rebalances twice when you
follow the documented steps? You may not notice it if you
don't wait for the initial recovery to finish after 'ceph
osd out'. If you do 'ceph osd out' and immediately 'ceph osd
crush remove', RH support has told me that this effectively
'cancels' the original data movement triggered by 'ceph osd
out' and starts the permanent remapping instead... which
still doesn't really explain why we have to do the 'ceph osd
out' in the first place.
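
For reference, here is a rough sketch of the documented
removal sequence we are talking about (osd.12 is just a
placeholder id; on pre-systemd releases you would stop the
daemon with the sysvinit script instead):

    # mark the OSD out and wait for recovery to finish
    ceph osd out osd.12
    # ... watch 'ceph -s' until all PGs are active+clean ...

    # stop the daemon, then drop it from the CRUSH map
    # (this is what triggers the second rebalance)
    systemctl stop ceph-osd@12
    ceph osd crush remove osd.12

    # remove its auth key and the OSD entry itself
    ceph auth del osd.12
    ceph osd rm osd.12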
It needs to be tested, but I think it may not let you do
'crush remove' before doing 'osd out' (i.e. you shouldn't be
removing OSDs from the CRUSH map while they are still 'in'
the cluster). At least that was the case with up OSDs when I
did some testing.
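
A minimal way to check an OSD's state before trying it,
using the same placeholder osd.12:

    # 'ceph osd tree' shows up/down state and CRUSH weight;
    # 'ceph osd dump' shows whether the OSD is in or out
    ceph osd tree
    ceph osd dump | grep osd.12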
@Dan, good to hear it works. I will try that method next
time and see how it goes!
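
(In case it helps anyone searching the archives: the method
commonly suggested for avoiding the double rebalance, and
what I assume Dan described upthread since that part isn't
quoted here, is to drain the OSD via its CRUSH weight first
so the data only moves once:)

    # zeroing the CRUSH weight is the only data movement
    ceph osd crush reweight osd.12 0
    # ... wait until all PGs are active+clean ...

    # the remaining steps then cause no further movement
    ceph osd out osd.12
    systemctl stop ceph-osd@12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm osd.12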
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com