On 15/09/14 17:28, Sage Weil wrote:
> rule myrule {
> 	ruleset 1
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take default
> 	step choose firstn 2 type rack
> 	step chooseleaf firstn 2 type host
> 	step emit
> }
>
> That will give you 4 osds, spread across 2 hosts in each rack. The pool
> size (replication factor) is 3, so RADOS will just use the first three (2
> hosts in first rack, 1 host in second rack).

I have a similar requirement: we currently have four nodes, two in each
fire zone, with pool size 3. With the current number of nodes we are
guaranteed at least one replica in each fire zone (which we represent
with the bucket type "room"). If we add more nodes in future, however,
the current ruleset may place all three replicas of a PG in a single
zone.

I tried the ruleset suggested above (replacing "rack" with "room"), but
when I test it with crushtool --test --show-utilization, crushtool
simply segfaults. No amount of fiddling seems to make it work - even
adding two hypothetical new nodes to the crushmap doesn't help. What
could I be doing wrong?
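For reference, this is the room-based variant I tested (a sketch; the
rule name and ruleset number are placeholders from the example above,
and my real map uses our own bucket names):

```
rule myrule {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take default
	step choose firstn 2 type room
	step chooseleaf firstn 2 type host
	step emit
}
```

I compile the edited map and then run crushtool against it along these
lines (the filename is just an example):

```
crushtool -c crushmap.txt -o crushmap.bin
crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-utilization
```

It is the second command that segfaults for me.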