Re: testing a crush rule against an out osd

On Wed, Sep 2, 2015 at 4:23 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> On Wed, 2 Sep 2015, Dan van der Ster wrote:
>> On Wed, Sep 2, 2015 at 4:11 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>> > On Wed, 2 Sep 2015, Dan van der Ster wrote:
>> >> ...
>> >> Normally I use crushtool --test --show-mappings to test rules, but
>> >> AFAICT it doesn't let you simulate an out osd, i.e. with reweight = 0.
>> >> Any ideas how to test this situation without uploading a crushmap to a
>> >> running cluster?
>> >
>> > crushtool --test --weight <osdid> 0 ...
>> >
>>
>> Oh thanks :)
>>
>> I can't reproduce my real-life issue with crushtool though. Still looking ...
>
> osdmaptool has a --test-map-pg option that may be easier...
>

Alas, I don't have the osdmap from when this happened :(
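
(For anyone trying the osdmaptool route later: a minimal sketch, assuming you can still pull the map from a live cluster — the pg id here is made up:)

# ceph osd getmap -o /tmp/osdmap
# osdmaptool /tmp/osdmap --test-map-pg 4.7f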

Anyway, I finally managed to reproduce it with crushtool: with osd.1008 marked out, rule 4 maps only two OSDs instead of the requested three:

# crushtool -i crush.map --num-rep 3 --test --show-mappings --weight 1008 0 --rule 4 --x 7357 2>&1
CRUSH rule 4 x 7357 [1048,889]
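
(Side note: to find inputs like x 7357 without knowing them in advance, crushtool can scan a range and print only the mappings that come up short of num-rep — a sketch, the range is arbitrary:)

# crushtool -i crush.map --num-rep 3 --test --show-bad-mappings --weight 1008 0 --rule 4 --min-x 0 --max-x 10000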

The crush map has these tunables:

tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

So I started tweaking and found (per-run commands sketched below):
- choose_local_tries: no effect
- choose_local_fallback_tries 1: fixes it
- choose_total_tries, even up to 1000: no effect
- chooseleaf_descend_once 0: fixes it
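
Roughly, each of those runs looked like this (a sketch using crushtool's --set-* tunable flags: write a tweaked copy of the map, then replay the failing input against it):

# crushtool -i crush.map --set-chooseleaf-descend-once 0 -o crush.tweaked
# crushtool -i crush.tweaked --num-rep 3 --test --show-mappings --weight 1008 0 --rule 4 --x 7357

With chooseleaf_descend_once back at 0, x 7357 maps to three OSDs again.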

I don't really want to disable these "optimal" tunables; any other advice on what might be going on here?

Cheers, Dan


