CRUSH rule for 4 copies over 3 failure domains?

Dear ceph users,

As of recently, we have 3 locations with Ceph OSD nodes. For 3-copy pools, it is trivial to create a CRUSH rule that uses all 3 datacenters for each PG, but 4-copy is harder. Our current "replicated" rule is this:

rule replicated_rule {
    id 0
    type replicated
    min_size 2
    max_size 4
    step take default
    # pick 2 datacenters, then 2 hosts in each: 4 copies span only 2 of the 3 DCs
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}

For 3-copy, the rule would be:

rule replicated_rule_3copy {
    id 5
    type replicated
    min_size 2
    max_size 3
    step take default
    # pick all 3 datacenters, then 1 host in each: one copy per DC
    step choose firstn 3 type datacenter
    step chooseleaf firstn 1 type host
    step emit
}

But 4-copy requires an additional OSD, so how do I tell the CRUSH algorithm to first take one host from each datacenter and then take one more from any datacenter?
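The closest I could come up with is a rule with two take/emit blocks: the first places one copy per datacenter, the second adds one more copy from anywhere. This is an untested sketch (the name and id 6 are placeholders), and as far as I understand CRUSH does not deduplicate OSDs across emit blocks, so the fourth copy could land on a host that already holds one of the first three:

rule replicated_rule_4copy {
    id 6
    type replicated
    min_size 3
    max_size 4
    # one copy in each of the 3 datacenters
    step take default
    step choose firstn 3 type datacenter
    step chooseleaf firstn 1 type host
    step emit
    # one additional copy from anywhere in the tree
    step take default
    step chooseleaf firstn 1 type host
    step emit
}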

I'd be interested to know whether this is possible, and if so, how (and whether the sketch above is even on the right track)...
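For what it's worth, candidate rules can be tested offline against the cluster's compiled CRUSH map with crushtool; assuming the sketch above were compiled in with rule id 6, something like this should print the resulting mappings so duplicate OSDs can be spotted:

ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 6 --num-rep 4 --show-mappings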

Having said that, I don't think there's much additional value in a 4-copy pool compared to a 3-copy pool across 3 separate locations. Or is there (apart from simply having one more copy in general)?

Cheers

/Simon
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


