You may want to change the value of "osd_pool_default_crush_replicated_ruleset".
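One thing worth checking when changing that option: the osd_pool_default_* settings are applied by the monitors at pool-creation time, so placing them only under an [osd] section generally has no effect. A minimal ceph.conf sketch, assuming the value 2 is the intended ruleset id:

```
[global]
# pool-creation defaults are read by the monitors,
# so these belong in [global] (or [mon]), not [osd]
osd_pool_default_crush_replicated_ruleset = 2
osd_pool_default_size = 2
osd_pool_default_min_size = 1
```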
Shinobu
On Fri, Jul 15, 2016 at 7:38 AM, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
Hi,
Wow, figured it out.
If you don't have a ruleset with id 0, you are in trouble.
So the solution is that you >MUST< have a ruleset with id 0.
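For anyone hitting the same issue, the ruleset ids that actually exist in the cluster can be checked with the standard ceph/crushtool CLI (run against a live cluster):

```
# list all CRUSH rules with their names and ruleset ids
ceph osd crush rule dump

# or decompile the full crushmap for inspection
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
```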
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:info@xxxxxxxxxxxxxxxxx
Address:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93402, registered at the Amtsgericht Hanau
Managing director: Oliver Dzombic
Tax no.: 35 236 3622 1
VAT ID: DE274086107
On 15.07.2016 at 00:10, Oliver Dzombic wrote:
> Hi,
>
> Thanks for the suggestion. I tried it out.
>
> No effect.
>
> My ceph.conf looks like this:
>
> [osd]
> osd_pool_default_crush_replicated_ruleset = 2
> osd_pool_default_size = 2
> osd_pool_default_min_size = 1
>
> The complete file: http://pastebin.com/sG4cPYCY
>
> But the config is completely ignored.
>
> If I run
>
> # ceph osd pool create vmware1 64 64 replicated cold-storage-rule
>
> I get:
>
> pool 12 'vmware1' replicated size 3 min_size 2 crush_ruleset 1
> object_hash rjenkins pg_num 64 pgp_num 64 last_change 2100 flags
> hashpspool stripe_width 0
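> A possible workaround at this point (assuming a pre-Luminous CLI, where
> the pool field is still called crush_ruleset) would be to repoint the
> pool at the intended ruleset after creation:
>
> # assign ruleset id 2 (cold-storage-rule) to the pool
> ceph osd pool set vmware1 crush_ruleset 2
> # verify the change took effect
> ceph osd pool get vmware1 crush_ruleset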
>
> While the interesting part of my crushmap looks like this:
>
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_local_fallback_tries 0
> tunable choose_total_tries 50
> tunable chooseleaf_descend_once 1
> tunable chooseleaf_vary_r 1
> tunable straw_calc_version 1
>
> root ssd-cache {
> id -5 # do not change unnecessarily
> # weight 1.704
> alg straw
> hash 0 # rjenkins1
> item cephosd1-ssd-cache weight 0.852
> item cephosd2-ssd-cache weight 0.852
> }
> root cold-storage {
> id -6 # do not change unnecessarily
> # weight 51.432
> alg straw
> hash 0 # rjenkins1
> item cephosd1-cold-storage weight 25.716
> item cephosd2-cold-storage weight 25.716
> }
>
> # rules
> rule ssd-cache-rule {
> ruleset 1
> type replicated
> min_size 2
> max_size 10
> step take ssd-cache
> step chooseleaf firstn 0 type host
> step emit
> }
> rule cold-storage-rule {
> ruleset 2
> type replicated
> min_size 2
> max_size 10
> step take cold-storage
> step chooseleaf firstn 0 type host
> step emit
> }
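> For completeness, the usual cycle for recompiling an edited crushmap
> like the one above and injecting it back into the cluster (standard
> crushtool/ceph commands):
>
> # compile the decompiled text map back to binary
> crushtool -c crushmap.txt -o crushmap.bin
> # inject it into the running cluster
> ceph osd setcrushmap -i crushmap.bin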
>
> -------------
>
> I have no idea what's going wrong here.
>
> I already opened a bug tracker issue:
>
> http://tracker.ceph.com/issues/16653
>
> But unfortunately without much luck.
>
> I really have no idea what to do now. I can't create pools and assign the
> correct rulesets. Basically that means I would have to set everything up
> again, but there is no guarantee that this will not happen again.
>
> So my only option would be to set up an additional Ceph cluster for the
> other pools, which is not really an option.
>
> I deeply appreciate any kind of idea...
>
> Thank you !
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com