Re: Missing OSD in up set

Hi Frank,

I checked the first hypothesis and found something strange. This is the decompiled rule:

rule wizard_data {
        id 1
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}
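
In case it matters, the rule above comes from decompiling the map the standard way (the local filenames here are just the ones I used):

# ceph osd getcrushmap -o crush.map
# crushtool -d crush.map -o crush.txt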

As you can see, it already contains "step set_choose_tries 100". But when I test it with crushtool, I get:

# crushtool -i crush.map --test --show-bad-mappings --rule 1 --num-rep 8 --min-x 1 --max-x 1000 --show-choose-tries
bad mapping rule 1 x 319 num_rep 8 result [43,40,58,69,2147483647,21,11,31]
bad mapping rule 1 x 542 num_rep 8 result [50,75,53,55,66,43,61,2147483647]
bad mapping rule 1 x 721 num_rep 8 result [35,59,72,24,23,41,2147483647,15]
 0:         0
 1:      7999
 2:        12
 3:        26
 4:        36
 5:        44
 6:        52
. . .
48:         3
49:         4

As far as I understand the crushtool output, the maximum number of tries is 49 < 100, so I should get no bad mappings. But maybe the bad mappings are not counted in the tries histogram, so I tried increasing set_choose_tries to 1000. This seems to fix the bad mappings, but the maximum number of tries does not change:
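
To be explicit, the only change is this one line in the decompiled rule, followed by a recompile (the text and output filenames are just the ones I used):

        step set_choose_tries 1000

# crushtool -c crush.txt -o better-crush.map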

# crushtool -i better-crush.map --test --show-bad-mappings --rule 1 --num-rep 8 --min-x 1 --max-x 1000 --show-choose-tries
 0:         0
 1:      8002
 2:        12
 3:        26
 4:        36
 5:        44
 6:        52
. . .
48:         3
49:         4

I get 3 more PGs placed with 1 try, which I guess are the ones that had bad mappings with set_choose_tries 100. If that's correct, then I really don't understand why they need just one try with set_choose_tries 1000 but fail with set_choose_tries 100...

Anyway, do you think it would be worth trying set_choose_tries 1000 in production?
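
If we do try it, I assume applying it would just be the usual map swap once the edited map is compiled, e.g.:

# ceph osd setcrushmap -i better-crush.map
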
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
