When I use ceph-deploy to add a batch of new OSDs from a new machine, the cluster starts rebalancing immediately. As a result, the first couple of OSDs start properly, but the last few fail to start because I keep hitting a timeout, as shown here:
[root@ia6 ia_scripts]# service ceph start osd.24
=== osd.24 ===
failed: 'timeout 10 /usr/bin/ceph --name=osd.24 --keyring=/var/lib/ceph/osd/ceph-24/keyring osd crush create-or-move -- 24 1.82 root=default host=ia6'
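As a workaround, I assume I could just re-run the same crush registration by hand with a longer timeout (the 120 below is an arbitrary number I picked, not anything from the init script), something like:

  timeout 120 /usr/bin/ceph --name=osd.24 --keyring=/var/lib/ceph/osd/ceph-24/keyring \
      osd crush create-or-move -- 24 1.82 root=default host=ia6

but that feels like fighting the symptom rather than the cause.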
Is there a way to pause recovery so that the cluster isn't bogged down, let me start all the OSDs and confirm they're up and look normal (via "ceph osd tree"), and then unpause recovery afterwards?
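From what I've read, setting the norecover and nobackfill flags (and possibly noout) might be the way to do this, roughly:

  ceph osd set norecover     # stop recovery I/O
  ceph osd set nobackfill    # stop backfill I/O
  (start the remaining OSDs, check "ceph osd tree")
  ceph osd unset nobackfill
  ceph osd unset norecover

but I'm not sure whether that's the recommended approach, or whether it would interfere with the new OSDs registering themselves in the crush map. Can anyone confirm?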
-Sid