Re: Using two roots for the same pool

George,

Check the instructions here, which should allow you to test your CRUSH rules without applying them to your cluster:
http://dachary.org/?p=3189
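For example, something along these lines (a minimal sketch; I'm assuming your decompiled map is in crushmap.txt and that the rule you want to test has ruleset id 2):

crushtool -c crushmap.txt -o crushmap.bin
crushtool -i crushmap.bin --test --rule 2 --num-rep 3 --show-statistics --show-bad-mappings

The second command reports how the rule maps PGs and flags any mapping that comes back with fewer than --num-rep OSDs, without touching the live cluster.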

Also, FWIW, we are not using an 'emit' after each choose (note that these rules do not implement what you're trying to do):
# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type room
        step chooseleaf firstn 2 type host
        step emit
}
rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 4
        step take ssd
        step choose firstn 2 type room
        step chooseleaf firstn 2 type host
        step emit
}

Bob

On Mon, Jul 11, 2016 at 9:19 AM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
I'm not looking at the docs, but I think you need an "emit" statement after every choose.
-Greg
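For instance, something like this might be what you want (an untested sketch, reusing the 'ssd' and 'hdd' roots from your rule; each 'take' sequence ends with its own emit):

rule rule_mix {
        ruleset 2
        type replicated
        min_size 2
        max_size 10
        step take ssd
        step chooseleaf firstn 1 type osd
        step emit
        step take hdd
        step chooseleaf firstn -1 type osd
        step emit
}

With two emits, the OSD chosen from the ssd root comes first in the result and so acts as the primary.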


On Monday, July 11, 2016, George Shuklin <george.shuklin@xxxxxxxxx> wrote:
Hello.

I want to try a CRUSH rule with the following idea:
take one OSD from the root with SSD drives (and use it as the primary);
take two OSDs from the root with HDD drives.

I've created this rule:

rule rule_mix {
        ruleset 2
        type replicated
        min_size 2
        max_size 10
        step take ssd
        step chooseleaf firstn 1 type osd
        step take hdd
        step chooseleaf firstn -1 type osd
        step emit
}

But I think I did something wrong: all PGs are undersized+degraded (I use 'size 3' and have 2 SSD OSDs and 5 HDD OSDs).
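For reference, the PG state can be seen with the standard ceph CLI, e.g.:

ceph -s
ceph pg dump_stuck undersized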

My newbie questions:

1) Can I use multiple 'take' steps in a single rule?
2) How many 'emit' steps should/may I use per rule?
3) Is this the proper way to describe such logic, or should it be done differently? (How?)

Thanks.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
