Hi,

I have created a separate root for my SSD drives. All works well, but a reboot (or a restart of the services) wipes out all my changes. How can I persist changes to the CRUSH map?

Here are some details. This is the initial/default state, and it is also what I get back after a restart/reboot. If I only make the changes on one server, only the CRUSH entries specific to that server are reverted; the new root (ssds) does persist, though.

ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-16 0 root ssds
-17 0 host osd01-ssd
-18 0 host osd02-ssd
-19 0 host osd03-ssd
-20 0 host osd04-ssd
-1 32.63507 root default
-3 8.15877 host osd01
4 hdd 1.85789 osd.4 up 1.00000 1.00000
5 hdd 1.85789 osd.5 up 1.00000 1.00000
6 hdd 1.85789 osd.6 up 1.00000 1.00000
7 hdd 1.85789 osd.7 up 1.00000 1.00000
0 ssd 0.72719 osd.0 up 1.00000 1.00000
-5 8.15877 host osd02
8 hdd 1.85789 osd.8 up 1.00000 1.00000
9 hdd 1.85789 osd.9 up 1.00000 1.00000
10 hdd 1.85789 osd.10 up 1.00000 1.00000
11 hdd 1.85789 osd.11 up 1.00000 1.00000
1 ssd 0.72719 osd.1 up 1.00000 1.00000
-7 8.15877 host osd03
12 hdd 1.85789 osd.12 up 1.00000 1.00000
13 hdd 1.85789 osd.13 up 1.00000 1.00000
14 hdd 1.85789 osd.14 up 1.00000 1.00000
15 hdd 1.85789 osd.15 up 1.00000 1.00000
2 ssd 0.72719 osd.2 up 1.00000 1.00000
-9 8.15877 host osd04
16 hdd 1.85789 osd.16 up 1.00000 1.00000
17 hdd 1.85789 osd.17 up 1.00000 1.00000
18 hdd 1.85789 osd.18 up 1.00000 1.00000
19 hdd 1.85789 osd.19 up 1.00000 1.00000
3 ssd 0.72719 osd.3 up 1.00000 1.00000

These are the changes I made:

ceph osd crush add 0 0.72719 root=ssds
ceph osd crush set osd.0 0.72719 root=ssds host=osd01-ssd
ceph osd crush add 1 0.72719 root=ssds
ceph osd crush set osd.1 0.72719 root=ssds host=osd02-ssd
ceph osd crush add 2 0.72719 root=ssds
ceph osd crush set osd.2 0.72719 root=ssds host=osd03-ssd
ceph osd crush add 3 0.72719 root=ssds
ceph osd crush set osd.3 0.72719 root=ssds host=osd04-ssd
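I suspect what reverts these edits is that, by default, each OSD re-registers its own CRUSH location when it starts. If that is the cause, one way to keep manual CRUSH edits across restarts might be to disable that behaviour in ceph.conf (a sketch, assuming the default ceph.conf layout; the option is `osd crush update on start`, which defaults to true):

```ini
# /etc/ceph/ceph.conf (on the OSD hosts)
[osd]
# Assumption: with this set to false, OSDs no longer update their
# own CRUSH location on startup, so manual "ceph osd crush set/add"
# placements are not moved back under root=default after a reboot.
osd crush update on start = false
```

The OSD daemons would need a restart for this to take effect, and new OSDs then have to be placed in the CRUSH map by hand.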
ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-16 2.90875 root ssds
-17 0.72719 host osd01-ssd
0 ssd 0.72719 osd.0 up 1.00000 1.00000
-18 0.72719 host osd02-ssd
1 ssd 0.72719 osd.1 up 1.00000 1.00000
-19 0.72719 host osd03-ssd
2 ssd 0.72719 osd.2 up 1.00000 1.00000
-20 0.72719 host osd04-ssd
3 ssd 0.72719 osd.3 up 1.00000 1.00000
-1 29.72632 root default
-3 7.43158 host osd01
4 hdd 1.85789 osd.4 up 1.00000 1.00000
5 hdd 1.85789 osd.5 up 1.00000 1.00000
6 hdd 1.85789 osd.6 up 1.00000 1.00000
7 hdd 1.85789 osd.7 up 1.00000 1.00000
-5 7.43158 host osd02
8 hdd 1.85789 osd.8 up 1.00000 1.00000
9 hdd 1.85789 osd.9 up 1.00000 1.00000
10 hdd 1.85789 osd.10 up 1.00000 1.00000
11 hdd 1.85789 osd.11 up 1.00000 1.00000
-7 7.43158 host osd03
12 hdd 1.85789 osd.12 up 1.00000 1.00000
13 hdd 1.85789 osd.13 up 1.00000 1.00000
14 hdd 1.85789 osd.14 up 1.00000 1.00000
15 hdd 1.85789 osd.15 up 1.00000 1.00000
-9 7.43158 host osd04
16 hdd 1.85789 osd.16 up 1.00000 1.00000
17 hdd 1.85789 osd.17 up 1.00000 1.00000
18 hdd 1.85789 osd.18 up 1.00000 1.00000
19 hdd 1.85789 osd.19 up 1.00000 1.00000
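For reference, I can also dump the whole CRUSH map, edit it offline, and inject it back; that is another way to re-apply the layout in one step if it gets reverted (a sketch using the getcrushmap/crushtool/setcrushmap round trip; the file names are just examples):

```
# dump the compiled CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin
# decompile it to an editable text file
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt (roots, hosts, rules) ...
# recompile and inject the edited map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```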
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com