Dear All,
I have multiple disk types (15k & 7k) in each Ceph node, which I assign
to different pools, but I have a problem: whenever I reboot a node, the
OSDs move in the CRUSH map.
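If anyone wants to reproduce this, a rough way to see the movement is to
capture the tree before and after the reboot and diff it (the /tmp file
names below are just placeholders):

ceph osd tree > /tmp/tree-before.txt
# reboot the node, e.g. ceph4
ceph osd tree > /tmp/tree-after.txt
diff /tmp/tree-before.txt /tmp/tree-after.txt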
For example, on host ceph4, before a reboot I have this osd tree:
-10 7.68980 root 15k-disk
(snip)
-9 2.19995 host ceph4-15k
44 0.54999 osd.44 up 1.00000 1.00000
45 0.54999 osd.45 up 1.00000 1.00000
46 0.54999 osd.46 up 1.00000 1.00000
47 0.54999 osd.47 up 1.00000 1.00000
(snip)
-1 34.96852 root 7k-disk
(snip)
-5 7.36891 host ceph4
24 0.90999 osd.24 up 1.00000 1.00000
25 0.90999 osd.25 up 1.00000 1.00000
26 0.90999 osd.26 down 0 1.00000
27 0.90999 osd.27 up 1.00000 1.00000
28 0.90999 osd.28 up 1.00000 1.00000
29 0.90999 osd.29 up 1.00000 1.00000
31 0.90999 osd.31 up 1.00000 1.00000
30 0.99899 osd.30 up 1.00000 1.00000
After a reboot I have this:
-10 5.48985 root 15k-disk
-6 2.19995 host ceph1-15k
32 0.54999 osd.32 up 1.00000 1.00000
33 0.54999 osd.33 up 1.00000 1.00000
34 0.54999 osd.34 up 1.00000 1.00000
35 0.54999 osd.35 up 1.00000 1.00000
-7 0 host ceph2-15k
-8 0 host ceph3-15k
-9 0 host ceph4-15k
-1 37.16847 root 7k-disk
(snip)
-5 9.56886 host ceph4
24 0.90999 osd.24 up 1.00000 1.00000
25 0.90999 osd.25 up 1.00000 1.00000
26 0.90999 osd.26 down 0 1.00000
27 0.90999 osd.27 up 1.00000 1.00000
28 0.90999 osd.28 up 1.00000 1.00000
29 0.90999 osd.29 up 1.00000 1.00000
31 0.90999 osd.31 up 1.00000 1.00000
30 0.99899 osd.30 up 1.00000 1.00000
44 0.54999 osd.44 up 1.00000 1.00000
46 0.54999 osd.46 up 1.00000 1.00000
47 0.54999 osd.47 up 1.00000 1.00000
45 0.54999 osd.45 up 1.00000 1.00000
My current kludge is to just put a series of "ceph osd crush set" lines like
this in rc.local:
ceph osd crush set osd.44 0.54999 root=15k-disk host=ceph4-15k
but presumably this is not the right solution...
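For completeness, the whole rc.local block for ceph4 is just one such line per
15k OSD, with the weights copied from the osd tree above:

ceph osd crush set osd.44 0.54999 root=15k-disk host=ceph4-15k
ceph osd crush set osd.45 0.54999 root=15k-disk host=ceph4-15k
ceph osd crush set osd.46 0.54999 root=15k-disk host=ceph4-15k
ceph osd crush set osd.47 0.54999 root=15k-disk host=ceph4-15k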
I'm using Hammer (0.94.1) on Scientific Linux 6.6.
Full details on how I added the OSDs and edited the CRUSH map are here:
http://pastebin.com/R2yaab8m
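In short, the 15k OSDs sit under their own root and host buckets, with a rule
created against that root and the pool pointed at it, roughly like this (the
rule and pool names here are just placeholders; the exact commands I used are
in the pastebin):

ceph osd crush rule create-simple 15k-rule 15k-disk host
ceph osd pool set my-15k-pool crush_ruleset <ruleset id from "ceph osd crush rule dump">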
Many thanks!
Jake