Hi,
On 04/23/2015 11:18 AM, Jake Grimmett wrote:
> Dear All,
> I have multiple disk types (15k & 7k) on each Ceph node, which I
> assign to different pools. The problem is that whenever I reboot a
> node, the OSDs move in the CRUSH map.
> I.e. on host ceph4, before a reboot I have this osd tree:
> -10 7.68980 root 15k-disk
> (snip)
> -9 2.19995 host ceph4-15k
> *snipsnap*
> -1 34.96852 root 7k-disk
> (snip)
> -5 7.36891 host ceph4
> *snipsnap*
> After a reboot I have this:
> -10 5.48985 root 15k-disk
> -6 2.19995 host ceph1-15k
> 32 0.54999 osd.32 up 1.00000 1.00000
> 33 0.54999 osd.33 up 1.00000 1.00000
> 34 0.54999 osd.34 up 1.00000 1.00000
> 35 0.54999 osd.35 up 1.00000 1.00000
> -7 0 host ceph2-15k
> -8 0 host ceph3-15k
> -9 0 host ceph4-15k
> -1 37.16847 root 7k-disk
> (snip)
> -5 9.56886 host ceph4
> *snipsnap*
> My current kludge is to put a series of "ceph osd crush set" lines
> like this in rc.local:
> ceph osd crush set osd.44 0.54999 root=15k-disk host=ceph4-15k
> *snipsnap*
Upon reboot, the OSD updates its location in the CRUSH tree by default.
If no other location information is given, it uses the short hostname of
the box (the output of 'hostname -s'), which is why your OSDs drop out
of the ceph4-15k bucket and reappear under host ceph4.
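If you just want the OSDs to stay where you put them, this automatic
update can be switched off. A minimal ceph.conf sketch, assuming the
'osd crush update on start' option from the documentation referenced
below:

  [osd]
  # stop the startup script from re-registering each OSD under
  # the default root and the bare hostname when the daemon starts
  osd crush update on start = false

New OSDs then have to be placed in the tree by hand (e.g. with
'ceph osd crush set' as in your rc.local), but existing entries no
longer move on reboot.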
You can either disable the location update entirely (as sketched above)
or define a custom location, either fixed or computed by a script. See
the "CRUSH LOCATION" paragraph on
http://docs.ceph.com/docs/master/rados/operations/crush-map/
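For your layout, a fixed location could look roughly like the
following; osd.44 and the bucket names are taken from your mail, while
the hook path is only a placeholder. A hook script simply has to print
a location string for the OSD being started:

  [osd.44]
  # pin osd.44 under the 15k root instead of the hostname default
  crush location = root=15k-disk host=ceph4-15k

  [osd]
  # alternative: let a script compute the location, e.g. by mapping
  # the OSD's disk type to root=15k-disk or root=7k-disk
  osd crush location hook = /usr/local/bin/custom-crush-location

With one of these in place, the rc.local workaround should no longer be
needed.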
Best regards,
Burkhard