CRUSH map utilization issue

Hi all,


I built a CRUSH map with the goal of distinguishing between SSD and HDD storage machines using only one root. The map can be found here: http://pastebin.com/VQdB0CE9
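

For reference, the hierarchy is roughly of the shape sketched below; the custom bucket type, host names, IDs and weights here are made up for illustration only, and the exact map (including the rule steps) is the pastebin above:

# illustrative only -- one root, with SSD and HDD machines kept apart
# by a custom bucket type; the real map is in the pastebin link above
type 0 osd
type 1 host
type 2 diskgroup        # custom type separating ssd from hdd machines
type 3 root

host ceph1-ssd {
        id -2
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}

host ceph1-hdd {
        id -3
        alg straw
        hash 0  # rjenkins1
        item osd.2 weight 1.000
        item osd.3 weight 1.000
}

diskgroup ssd {
        id -4
        alg straw
        hash 0  # rjenkins1
        item ceph1-ssd weight 2.000
}

diskgroup hdd {
        id -5
        alg straw
        hash 0  # rjenkins1
        item ceph1-hdd weight 2.000
}

root default {
        id -1
        alg straw
        hash 0  # rjenkins1
        item ssd weight 2.000
        item hdd weight 2.000
}

Rule 0 (replicated_ruleset_ssd) is then meant to pick its OSDs from the SSD machines only; the exact steps are in the pastebin.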


The issue I am having is this:


root@ceph2:~/crush_files# crushtool -i crushmap --test --show-utilization --rule 0 --num-rep 3
rule 0 (replicated_ruleset_ssd), x = 0..1023, numrep = 3..3
rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 0:    84/1024
rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 1:    437/1024
rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 2:    438/1024
rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 3:    65/1024


And then the same test with --num-rep 46, the lowest value at which all 1024 mappings come back with 3 OSDs:


root@ceph2:~/crush_files# crushtool -i crushmap --test --show-utilization --rule 0 --num-rep 46
rule 0 (replicated_ruleset_ssd), x = 0..1023, numrep = 46..46
rule 0 (replicated_ruleset_ssd) num_rep 46 result size == 3:    1024/1024


Full output of the above commands can be found here http://pastebin.com/2mbBnmSM and here http://pastebin.com/ar6SAFnX


The fact that the num-rep value needed for full utilization seems to scale with the number of OSDs in the map leads me to believe I am doing something wrong.
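

In case it helps to reproduce, this is how I have been sweeping the num-rep value (just the same crushtool invocation as above in a loop, assuming the compiled map is in a file called crushmap; --show-bad-mappings can be used instead of --show-utilization to list only the mappings that come back short):

for n in 3 6 12 24 46; do
        echo "=== num-rep $n ==="
        crushtool -i crushmap --test --show-utilization --rule 0 --num-rep $n
done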


When using two roots (one dedicated to SSD and one to HDD), everything works perfectly (example: http://pastebin.com/Uthxesut).
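

For comparison, the working two-root layout is roughly the following (again only a sketch with made-up names and IDs; the full example is in the pastebin):

# illustrative only -- same hosts, but grouped under two separate roots
root ssd {
        id -20
        alg straw
        hash 0  # rjenkins1
        item ceph1-ssd weight 2.000
        item ceph2-ssd weight 2.000
}

root hdd {
        id -21
        alg straw
        hash 0  # rjenkins1
        item ceph1-hdd weight 2.000
        item ceph2-hdd weight 2.000
}

rule replicated_ruleset_ssd {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}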


Would love to know what I am missing.


Thanks!


- Rob




