I have an existing cluster where all the hosts were just added directly,
for example:
# ceph osd tree
# id    weight  type name       up/down reweight
-1      60.06   root default
...
-14     1.82            host OSD75
12      1.82                    osd.12  up      1
-15     1.82            host OSD80
13      1.82                    osd.13  up      1
-16     1.82            host OSD83
14      1.82                    osd.14  up      1
-17     1.82            host OSD78
15      1.82                    osd.15  up      1
-18     1.82            host OSD82
17      1.82                    osd.17  up      1
-19     1.82            host OSD84
16      1.82                    osd.16  up      1
Ultimately, I'm trying to reconfigure this so the cluster can withstand
the failure of an entire rack without losing data. I found
http://dachary.org/?p=2536 which seemed pretty helpful in setting this up.
I have the OSDs set to pull their location via an 'osd crush location
hook' script. This appears to work, as the startup output lists the
configured rack:
=== osd.20 ===
create-or-move updated item name 'osd.20' weight 1.82 at location
{host=OSD81,rack=rack1,root=default} to crush map
Starting Ceph osd.20 on OSD81...
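For reference, a location hook can be as small as a script that prints the
node's full location (host, rack, and root) on stdout. The sketch below is
roughly what mine looks like; the hostname-to-rack mapping is invented and
would really come from an inventory file or similar:

```shell
#!/bin/sh
# Hypothetical 'osd crush location hook' sketch.  Ceph invokes the hook
# and expects the complete CRUSH location, including host and root, as a
# single key=value line on stdout.

crush_location() {
    host=$(hostname -s)

    # Made-up mapping from hostname to rack; a real hook would consult
    # an inventory source rather than hard-code patterns.
    case "$host" in
        OSD7[0-9]) rack=rack1 ;;
        OSD8[0-9]) rack=rack2 ;;
        *)         rack=rack-unknown ;;
    esac

    echo "host=$host rack=$rack root=default"
}

crush_location
```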
However, OSD81 never appears under the rack in the 'ceph osd tree'
output. The only way I can make that happen is to move the host bucket
manually, with 'ceph osd crush move OSD81 rack=rack1'.
Am I wrong in expecting 'create-or-move' to handle this without manual
intervention? If so, which moves will 'create-or-move' handle
automatically? I have enough nodes that moving them all into place by
hand would be time-consuming.
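In case it helps anyone, a loop over an inventory list can at least
generate the manual moves. This sketch only prints the 'ceph osd crush
move' commands (the host/rack pairs below are invented); dropping the
leading 'echo' would actually run them:

```shell
# Dry run: emit one 'ceph osd crush move' per host.  The inventory here
# is a placeholder heredoc; substitute the real host/rack list.
move_hosts() {
    while read -r host rack; do
        # 'echo' makes this a dry run; remove it to apply the moves.
        echo ceph osd crush move "$host" rack="$rack"
    done <<EOF
OSD75 rack1
OSD80 rack1
OSD83 rack2
EOF
}

move_hosts
```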
My initial testing is just trying to get the OSD tree output to be
correct. I have not even begun adjusting any of the crush rules.
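For what it's worth, once the tree is right I believe the rule itself can
be a one-liner, since 'create-simple' takes a rule name, a root, and a
failure-domain bucket type. This sketch just prints the command rather
than running it (the rule name 'rack-aware' is made up):

```shell
# Print (not execute) the eventual rule-creation command: a replicated
# rule rooted at 'default' with 'rack' as the failure domain.
rule_cmd() {
    echo ceph osd crush rule create-simple rack-aware default rack
}

rule_cmd
```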
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com