Crushmap ruleset for rack-aware PG placement

On 15/09/14 17:28, Sage Weil wrote:
>
> rule myrule {
> 	ruleset 1
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take default
> 	step choose firstn 2 type rack
> 	step chooseleaf firstn 2 type host
> 	step emit
> }
>
> That will give you 4 osds, spread across 2 hosts in each rack.  The pool 
> size (replication factor) is 3, so RADOS will just use the first three (2 
> hosts in first rack, 1 host in second rack).

I have a similar requirement, where we currently have four nodes, two in
each fire zone, with pool size 3. At the moment, due to the number of
nodes, we are guaranteed at least one replica in each fire zone (which
we represent with bucket type "room"). If we add more nodes in future,
the current ruleset may cause all three replicas of a PG to land in a
single zone.
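
For reference, the relevant part of our crushmap hierarchy looks roughly
like this (bucket names, ids and weights below are illustrative, not the
exact values from our map):

room zone1 {
	id -10
	alg straw
	hash 0
	item host1 weight 1.000
	item host2 weight 1.000
}
room zone2 {
	id -11
	alg straw
	hash 0
	item host3 weight 1.000
	item host4 weight 1.000
}
root default {
	id -1
	alg straw
	hash 0
	item zone1 weight 2.000
	item zone2 weight 2.000
}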

I tried the ruleset suggested above (replacing "rack" with "room"), but
when testing it with crushtool --test --show-utilization I simply get
segfaults. No amount of fiddling seems to make it work; even adding two
hypothetical extra nodes to the crushmap doesn't help.
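
For completeness, this is roughly the rule I'm testing and how I'm
invoking crushtool (the rule and file names are just what I use locally):

rule zone_rule {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take default
	step choose firstn 2 type room
	step chooseleaf firstn 2 type host
	step emit
}

crushtool -c crushmap.txt -o crushmap.bin
crushtool -i crushmap.bin --test --show-utilization --rule 1 --num-rep 3

With two hosts per room I would have expected the first three mappings to
come out as two OSDs in one room and one in the other, as in Sage's rack
example above, rather than a crash.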

What could I perhaps be doing wrong?


