Re: Crush map example

On Wed, Mar 20, 2013 at 5:06 PM, Darryl Bond <dbond@xxxxxxxxxxxxx> wrote:
> I have a cluster of 3 hosts each with 2 SSD and 4 Spinning disks.
> I used the example in the crush map documentation to create a crush map that
> places the primary on an SSD and the replica on a spinning disk.
>
> If I use the example with 2 replicas, I end up with objects replicated
> on the same host.
>
> Question 1: is the documentation on the rules correct? Should they
> really both be ruleset 4, and why? I used ruleset 5 for ssd-primary.
>       rule ssd {
>               ruleset 4
>               type replicated
>               min_size 0
>               max_size 10
>               step take ssd
>               step chooseleaf firstn 0 type host
>               step emit
>       }
>
>       rule ssd-primary {
>               ruleset 4
>               type replicated
>               min_size 0
>               max_size 10
>               step take ssd
>               step chooseleaf firstn 1 type host
>               step emit
>               step take platter
>               step chooseleaf firstn -1 type host
>               step emit
>       }

Hmm, no, those should be two different rulesets. You use the same
ruleset when you want placement to depend on how many replicas a
particular pool uses (so that you could, for instance, use the same
ruleset for all your pools, but have higher replication counts imply 2
SSD copies instead of just 1, or something like that).
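Roughly, keeping your rules as they are and just giving ssd-primary its
own ruleset number (a sketch, assuming 5 is otherwise unused in your map,
as in what you described):

    rule ssd {
            ruleset 4
            type replicated
            min_size 0
            max_size 10
            step take ssd
            step chooseleaf firstn 0 type host
            step emit
    }

    rule ssd-primary {
            ruleset 5
            type replicated
            min_size 0
            max_size 10
            step take ssd
            step chooseleaf firstn 1 type host
            step emit
            step take platter
            step chooseleaf firstn -1 type host
            step emit
    }

You'd then point each pool at whichever ruleset you want with something
like:

    ceph osd pool set <pool> crush_ruleset 5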


> Question 2: is there any way to ensure that the replicas are on
> different hosts when we use double-rooted trees for the two technologies?
> Obviously, the simplest way is to have them on separate hosts.

Sadly, CRUSH doesn't support this kind of thing right now; if you want
to do it properly you should have different kinds of storage
segregated by host.
Extensions to CRUSH to enable this kind of behavior are on our list of
starter projects for interns and external contributors, and we push on
it from time to time, so this could be coming in the future; just don't
count on it by any particular date. :)
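If you do go the segregated-by-host route, the relevant part of the
crush map would look roughly like this (just a sketch; the host names,
IDs and weights are made up, and it assumes the standard root/host
bucket types from the docs):

    host ssd-host-1 {
            id -10
            alg straw
            hash 0
            item osd.0 weight 1.000
            item osd.1 weight 1.000
    }

    host platter-host-1 {
            id -11
            alg straw
            hash 0
            item osd.2 weight 1.000
            item osd.3 weight 1.000
            item osd.4 weight 1.000
            item osd.5 weight 1.000
    }

    root ssd {
            id -1
            alg straw
            hash 0
            item ssd-host-1 weight 2.000
            # ... more SSD-only hosts
    }

    root platter {
            id -2
            alg straw
            hash 0
            item platter-host-1 weight 4.000
            # ... more platter-only hosts
    }

Because each physical host then appears under only one root, taking one
host from ssd and the remaining hosts from platter can't land two copies
on the same machine.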
-Greg