Help with CRUSH maps

Hello to all,

Here is my ceph osd tree output:


# id    weight  type name       up/down reweight
-1      20      root example
-12     20              drive ssd
-22     20                      datacenter ssd-dc1
-104    10                              room ssd-dc1-A
-502    10                                      host A-ceph-osd-1
0       1                                               osd.0   up      1
                                                        ...
9       1                                               osd.9   up      1
-103    10                              room ssd-dc1-B
-501    10                                      host B-ceph-osd-1
10      1                                               osd.10  up      1
                                                        ...
19      1                                               osd.19  up      1
-11     0               drive hdd
-21     0                       datacenter hdd-dc1
-102    0                               room hdd-dc1-A
-503    0                                       host A-ceph-osd-2
20      0                                               osd.20  up      1
                                                        ...
27      0                                               osd.27  up      1
-505    0                                       host A-ceph-osd-3
35      0                                               osd.35  up      1
                                                        ...
42      0                                               osd.42  up      1
-101    0                               room hdd-dc1-B
-504    0                                       host B-ceph-osd-2
28      0                                               osd.28  up      1
                                                        ...
34      0                                               osd.34  up      1
-506    0                                       host B-ceph-osd-3
43      0                                               osd.43  up      1
                                                        ...
50      0                                               osd.50  up      1

So, in short:
I have two sorts of OSD hosts: full SSD, and full standard HDD.
The hosts are split between two rooms: A and B.

I have created two pools: vm-sdd and vm-hdd.

My goal is the following:
for pool vm-sdd, replicate data between rooms A and B on SSD hosts, so
here between A-ceph-osd-1 and B-ceph-osd-1;

for pool vm-hdd, do the same, but only with HDD hosts.

Here are the rules in my CRUSH map:

rule vm-sdd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take example
        step emit
        step take ssd
        step chooseleaf firstn 0 type room
        step emit
}
rule vm-hdd {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take example
        step emit
        step take hdd
        step chooseleaf firstn 0 type room
        step emit
}
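
In case it helps, here is roughly how I would test the mapping offline with crushtool (I have not tried it yet; crushmap.txt and crushmap.bin are just placeholder names for the decompiled and recompiled map, and I am assuming 2 replicas per pool):

# compile the edited map, then simulate each rule with 2 replicas
crushtool -c crushmap.txt -o crushmap.bin
crushtool --test -i crushmap.bin --rule 3 --num-rep 2 --show-mappings
crushtool --test -i crushmap.bin --rule 4 --num-rep 2 --show-mappings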


But... when I do:
ceph osd pool set vm-hdd crush_ruleset 4

it does not work: all my PGs get stuck.
Have I missed something?
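
If it helps, I can also post the output of the following commands (this is simply what I would check first, I am not sure it is the right place to look):

# state of the stuck PGs and the rules as the cluster sees them
ceph health detail
ceph pg dump_stuck unclean
ceph osd crush rule dump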

Do you have any idea?
Thanks a lot for your help.

Best Regards - Cordialement
Alexis
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com