Re: How to see which crush tunables are active in a ceph-cluster?

Hi,
for the information of other Ceph users...

I switched from unknown crush tunables to firefly and it took 6 hours
(30.853% degraded) to finish on our production cluster (5 nodes, 60
OSDs, 10GbE, 20% data used:  pgmap v35678572: 3904 pgs, 4 pools, 21947
GB data, 5489 kobjects).

Should a change to "chooseleaf_vary_r 1" (from 0) take roughly the same
time to finish?
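
In case it helps, a rough sketch of how I would set chooseleaf_vary_r by
editing the decompiled crushmap (the /tmp file names are only examples,
and this is untested here, so please try it on a test cluster first):

  # dump and decompile the current crushmap (needs crushtool)
  ceph osd getcrushmap -o /tmp/crush.bin
  crushtool -d /tmp/crush.bin -o /tmp/crush.txt

  # edit /tmp/crush.txt and set in the tunables section:
  #   tunable chooseleaf_vary_r 1

  # recompile and inject the new map (this starts the data movement)
  crushtool -c /tmp/crush.txt -o /tmp/crush.new
  ceph osd setcrushmap -i /tmp/crush.new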


Regards

Udo

On 04.12.2014 14:09, Udo Lembke wrote:
> Hi,
> to answer myself.
>
> With "ceph osd crush show-tunables" I see a little bit more, but I don't
> know how far away from the firefly tunables the production cluster is.
>
> New testcluster with profile optimal:
> ceph osd crush show-tunables
> { "choose_local_tries": 0,
>   "choose_local_fallback_tries": 0,
>   "choose_total_tries": 50,
>   "chooseleaf_descend_once": 1,
>   "profile": "firefly",
>   "optimal_tunables": 1,
>   "legacy_tunables": 0,
>   "require_feature_tunables": 1,
>   "require_feature_tunables2": 1}
>
> the production cluster:
>  ceph osd crush show-tunables
> { "choose_local_tries": 0,
>   "choose_local_fallback_tries": 0,
>   "choose_total_tries": 50,
>   "chooseleaf_descend_once": 0,
>   "profile": "unknown",
>   "optimal_tunables": 0,
>   "legacy_tunables": 0,
>   "require_feature_tunables": 1,
>   "require_feature_tunables2": 0}
>
> Does this look like argonaut or bobtail?
>
> And how should I proceed with the update?
> Does it make sense to first go to profile bobtail and then to firefly?
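>
> For the record, a minimal sketch of the step-by-step variant (untested
> here; I would watch the rebalance with "ceph -w" and wait for HEALTH_OK
> between the steps):
>
>   ceph osd crush tunables bobtail
>   # ... wait until the cluster is healthy again ...
>   ceph osd crush tunables firefly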
>
>
> Regards
>
> Udo
>
> On 01.12.2014 17:39, Udo Lembke wrote:
>> Hi all,
>> http://ceph.com/docs/master/rados/operations/crush-map/#crush-tunables
>> describes how to set the tunables to legacy, argonaut, bobtail, firefly
>> or optimal.
>>
>> But how can I see which profile is active in a Ceph cluster?
>>
>> With "ceph osd getcrushmap" I got not realy much info
>> (only "tunable choose_local_tries 0
>> tunable choose_local_fallback_tries 0
>> tunable choose_total_tries 50)
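>>
>> (Sketch, from memory and untested here: "ceph osd crush dump" should
>> also print the whole map as JSON, including a "tunables" section, e.g.
>>   ceph osd crush dump | less
>> )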
>>
>>
>> Udo
>>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



