crushmap question

Perhaps group sets of hosts into racks in the crushmap. The crushmap doesn't
have to strictly map the real world.
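
For example, with the rule left unchanged, the existing hosts could simply be
split across two purely logical rack buckets. This is only a rough sketch: the
host names, bucket ids and weights below are placeholders for whatever is
already in the map, and "root" is assumed to be the bucket name the rule takes.

    rack rack1 {
            id -3                    # bucket ids must be unique negative numbers
            alg straw
            hash 0                   # rjenkins1
            item host1 weight 1.000
            item host2 weight 1.000
    }

    rack rack2 {
            id -4
            alg straw
            hash 0                   # rjenkins1
            item host3 weight 1.000
            item host4 weight 1.000
    }

    root root {
            id -1
            alg straw
            hash 0                   # rjenkins1
            item rack1 weight 2.000
            item rack2 weight 2.000
    }

With two rack buckets in the map, "step chooseleaf firstn 0 type rack" can pick
an OSD under each of two distinct racks, so a pool with 2 replicas can reach
active+clean even though all of the hardware physically sits in one rack.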

On 05/13/2014 08:52 AM, Cao, Buddy wrote:
>
> Hi,
>
> I have a crushmap structure like root->rack->host->osds. I designed
> the rule below. Since I used "chooseleaf ... rack" in the rule
> definition, if there is only one rack in the cluster, the ceph pgs
> always stay stuck in the unclean state (because the default
> metadata/data/rbd pools are set to 2 replicas). Could you let me know
> how to configure the rule so that it also works in a cluster with only
> one rack?
>
> rule ssd {
>     ruleset 1
>     type replicated
>     min_size 0
>     max_size 10
>     step take root
>     step chooseleaf firstn 0 type rack
>     step emit
> }
>
> BTW, if I add a new rack into the crushmap, the pg status does finally
> get to active+clean. However, my customer has ONLY one rack in their
> environment, so it is hard for me to work around this by asking them to
> set up several racks.
>
> Wei Cao (Buddy)
>
>
>


