I looked at the docs section on setting up different pools on different OSDs (e.g. an SSD pool):
http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
It seems to assume that the SSDs and the platters all live on separate hosts.
That's not the case for my setup, and I imagine not for most people: I have SSDs mixed with platters on the same hosts.
In that case, should the root buckets reference buckets that aren't based on hosts? E.g., something like this:
# devices
# Platters
device 0 osd.0
device 1 osd.1
# SSD
device 2 osd.2
device 3 osd.3
host vnb {
    id -2        # do not change unnecessarily
    # weight 2.000
    alg straw
    hash 0       # rjenkins1
    item osd.0 weight 1.000
    item osd.2 weight 1.000
}

host vng {
    id -3        # do not change unnecessarily
    # weight 2.000
    alg straw
    hash 0       # rjenkins1
    item osd.1 weight 1.000
    item osd.3 weight 1.000
}
row disk-platter {
    alg straw
    hash 0       # rjenkins1
    item osd.0 weight 1.000
    item osd.1 weight 1.000
}

row disk-ssd {
    alg straw
    hash 0       # rjenkins1
    item osd.2 weight 1.000
    item osd.3 weight 1.000
}
root default {
    id -1        # do not change unnecessarily
    # weight 2.000
    alg straw
    hash 0       # rjenkins1
    item disk-platter weight 2.000
}

root ssd {
    id -4
    alg straw
    hash 0
    item disk-ssd weight 2.000
}
# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    # the rows above contain OSDs directly, so there are no host buckets
    # beneath these roots to chooseleaf from; choose at the osd level
    # (note this no longer enforces host separation between replicas)
    step chooseleaf firstn 0 type osd
    step emit
}

rule ssd {
    ruleset 1
    type replicated
    min_size 0
    max_size 4
    step take ssd
    step chooseleaf firstn 0 type osd
    step emit
}
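For reference, this is roughly how I was planning to compile and sanity-check the map before injecting it, and then point a new pool at the ssd rule (the file names, pool name, and PG counts below are just placeholders on my end):

    # compile the edited map and test that both rules actually map OSDs
    crushtool -c crushmap.txt -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 0 --num-rep 2 --show-mappings
    crushtool -i crushmap.bin --test --rule 1 --num-rep 2 --show-mappings

    # inject the map and assign the ssd ruleset to a new pool
    ceph osd setcrushmap -i crushmap.bin
    ceph osd pool create ssd-pool 128 128
    ceph osd pool set ssd-pool crush_ruleset 1

If the --test run for rule 1 shows PGs mapping only to osd.2/osd.3, the split is doing what I want.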
--
Lindsay