Sounds good, but there is one problem. In my case I'll have as many hosts as pools used by some piece of software (via librados), and for performance reasons I want to place the primary OSD for each pool on the same host as the software. In that scenario I'd end up with as many new roots as hosts, which I don't think is a good idea. Anyway, thanks for your response.

On 29 July 2014 17:30, Gregory Farnum <greg at inktank.com> wrote:
> You could create a new root bucket which contains hosts 2 and 3; then
> use it instead of "default" in your special rule. That's probably what
> you want anyway (rather than potentially having two copies of the data
> on host 1).
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Tue, Jul 29, 2014 at 10:10 AM, Szymon Zacher <szzacher at gmail.com> wrote:
>> I have 3 OSDs on 3 different hosts: host1, host2 and host3. I'm trying to
>> force CRUSH to use the OSD on host1 as the primary for one of my pools. I
>> can't use primary-affinity because I don't want to make that OSD the primary
>> for all my pools. I tried to create a simple CRUSH rule which should select
>> the OSD on host1 first (as the primary) and then fill the rest of the acting
>> set randomly:
>>
>> rule primary_on_host_1 {
>>         ruleset 1
>>         type replicated
>>         min_size 1
>>         max_size 10
>>         step take host1
>>         step choose firstn 1 type osd
>>         step emit
>>         step take default
>>         step chooseleaf firstn -1 type host
>>         step emit
>> }
>>
>> Unfortunately, when CRUSH uses this rule it sometimes returns duplicated
>> OSDs in the acting set, e.g.:
>>
>> CRUSH rule 1 x 28 [0,0,2]
>> CRUSH rule 1 x 29 [0,2,0]
>> CRUSH rule 1 x 30 [0,0,1]
>>
>> Is there any other way I can force Ceph to use one specific OSD as the
>> primary?
>> --
>> Szymon Zacher

--
Szymon Zacher
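
A minimal sketch of the approach Greg describes: add a new root bucket containing host2 and host3, then take that bucket for the secondary copies so they can never land back on host1 (which is what produced the duplicated OSDs above). The bucket name secondary_hosts, its ID, and the weights are placeholders; adjust them to your actual map, and it assumes host2 and host3 already exist as host buckets.

root secondary_hosts {
        id -10                          # pick an unused negative bucket ID
        alg straw
        hash 0  # rjenkins1
        item host2 weight 1.000         # weights should match the hosts' OSD weights
        item host3 weight 1.000
}

rule primary_on_host_1 {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take host1
        step choose firstn 1 type osd   # primary always comes from host1
        step emit
        step take secondary_hosts       # instead of "step take default"
        step chooseleaf firstn -1 type host
        step emit
}

The mappings can be re-checked the same way as above, e.g. "crushtool -i compiled.map --test --rule 1 --num-rep 3 --show-mappings"; since the secondaries are now drawn from a subtree that excludes host1, the acting sets should no longer contain duplicates.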