crushmap question

It won't pay any attention to the racks after you change the rule. So
some PGs may have all their OSDs in one rack, and others may be spread
across racks.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
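
For reference, a minimal sketch of the rule after the change discussed below (same root bucket and rule name as in the quoted post); with "host" as the chooseleaf type, each replica lands on a distinct host, but CRUSH no longer considers which rack those hosts are in:

rule ssd {
    ruleset 1
    type replicated
    min_size 0
    max_size 10
    step take root
    # failure domain is now the host: replicas go to different hosts,
    # which may or may not share a rack
    step chooseleaf firstn 0 type host
    step emit
}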


On Tue, May 13, 2014 at 10:54 PM, Cao, Buddy <buddy.cao at intel.com> wrote:
> BTW, I'd like to know: after I change "from rack" to "from host", if I add more racks with hosts/OSDs to the cluster, will Ceph choose the OSDs for a PG from only one zone, or will it choose randomly from several different zones?
>
>
> Wei Cao (Buddy)
>
> -----Original Message-----
> From: Cao, Buddy
> Sent: Wednesday, May 14, 2014 1:30 PM
> To: 'Gregory Farnum'
> Cc: ceph-users at lists.ceph.com
> Subject: RE: crushmap question
>
> Thanks Gregory so much, it solved the problem!
>
>
> Wei Cao (Buddy)
>
> -----Original Message-----
> From: Gregory Farnum [mailto:greg at inktank.com]
> Sent: Wednesday, May 14, 2014 2:00 AM
> To: Cao, Buddy
> Cc: ceph-users at lists.ceph.com
> Subject: Re: crushmap question
>
> You just use a type other than "rack" in your chooseleaf rule. In your case, "host". When using chooseleaf, the bucket type you specify is the failure domain which it must segregate across.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
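
A sketch of the usual decompile/edit/recompile cycle for applying such a rule change, using the standard ceph and crushtool commands (file names here are arbitrary):

ceph osd getcrushmap -o crush.bin      # export the compiled CRUSH map
crushtool -d crush.bin -o crush.txt    # decompile to editable text
# edit crush.txt: change "type rack" to "type host" in the ssd rule
crushtool -c crush.txt -o crush.new    # recompile
ceph osd setcrushmap -i crush.new      # inject the updated map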
> On Tue, May 13, 2014 at 12:52 AM, Cao, Buddy <buddy.cao at intel.com> wrote:
>> Hi,
>>
>>
>>
>> I have a crushmap structure like root->rack->host->osds. I designed
>> the rule below. Since I used "chooseleaf ... type rack" in the rule
>> definition, if there is only one rack in the cluster, the Ceph PGs
>> always stay in the stuck unclean state (because the default
>> metadata/data/rbd pools are set to 2 replicas). Could you let me know
>> how to configure the rule so that it also works in a cluster with
>> only one rack?
>>
>>
>>
>> rule ssd {
>>     ruleset 1
>>     type replicated
>>     min_size 0
>>     max_size 10
>>     step take root
>>     step chooseleaf firstn 0 type rack
>>     step emit
>> }
>>
>>
>>
>> BTW, if I add a new rack to the crushmap, the PG status does
>> eventually reach active+clean. However, my customer has ONLY one rack
>> in their environment, so asking them to set up several racks is not a
>> workable workaround for me.
>>
>>
>>
>> Wei Cao (Buddy)
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>

