Crushmap ruleset for rack-aware PG placement

Hi Daniel, 
          Can you provide your exact crush map and exact crushtool command
that results in segfaults?

Johnu

On 9/16/14, 10:23 AM, "Daniel Swarbrick"
<daniel.swarbrick at profitbricks.com> wrote:

>Replying to myself, and for the benefit of other caffeine-starved people:
>
>Setting the last step to "chooseleaf firstn 0" does not generate the
>desired results, and sometimes ends up putting all replicas in the same
>zone.
>
>I'm slowly getting the hang of customised crushmaps ;-)
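
For context, a note that is not from the thread itself: the behaviour described above is presumably down to "firstn 0" expanding to the pool size. If the final step of Loic's rule (quoted further down) is changed to "chooseleaf firstn 0 type host", each of the two chosen racks is asked for pool-size leaves, and emit then truncates the combined result to the pool size, so the first rack alone can satisfy the whole request. A sketch of that variant, with an illustrative rule name and ruleset id:

rule myrule_firstn0 {
	ruleset 2
	type replicated
	min_size 1
	max_size 10
	step take default
	# pick two racks under the root
	step choose firstn 2 type rack
	# firstn 0 asks for pool-size hosts from each rack; with "firstn 2"
	# instead, each rack contributes at most two hosts and the replicas
	# are forced to span both racks
	step chooseleaf firstn 0 type host
	step emit
}
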
>
>On 16/09/14 18:39, Daniel Swarbrick wrote:
>> 
>> One other area I wasn't sure about - can the final "chooseleaf" step
>> specify "firstn 0" for simplicity's sake (and to automatically handle a
>> larger pool size in future)? Would there be any downside to this?
>
>
>> 
>> Cheers
>> 
>> On 16/09/14 16:20, Loic Dachary wrote:
>>> Hi Daniel,
>>>
>>> When I run
>>>
>>> crushtool --outfn crushmap --build --num_osds 100 \
>>>     host straw 2 rack straw 10 default straw 0
>>> crushtool -d crushmap -o crushmap.txt
>>> cat >> crushmap.txt <<EOF
>>> rule myrule {
>>> 	ruleset 1
>>> 	type replicated
>>> 	min_size 1
>>> 	max_size 10
>>> 	step take default
>>> 	step choose firstn 2 type rack
>>> 	step chooseleaf firstn 2 type host
>>> 	step emit
>>> }
>>> EOF
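
Not part of Loic's recipe, but one way to sanity-check the resulting map is crushtool's test mode; the rule id and replica count below are simply chosen to match the example above (ruleset 1, two racks with two hosts each):

# recompile the edited text map
crushtool -c crushmap.txt -o crushmap.new
# show the OSDs chosen for a range of sample inputs; with the sequentially
# built map above, osd.0-19 should sit in the first rack, osd.20-39 in the
# second, and so on, which makes rack separation easy to eyeball
crushtool -i crushmap.new --test --rule 1 --num-rep 4 --show-mappings
# report any inputs that mapped to fewer than 4 OSDs
crushtool -i crushmap.new --test --rule 1 --num-rep 4 --show-bad-mappings
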


