Re: setting crushmap while creating pool fails

Hi,

Thanks for the suggestion. I tried it out, but it had no effect.

My ceph.conf looks like:

[osd]
osd_pool_default_crush_replicated_ruleset = 2
osd_pool_default_size = 2
osd_pool_default_min_size = 1

The complete: http://pastebin.com/sG4cPYCY

But the setting appears to be completely ignored when creating pools.
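For reference, the way I understand Wido's suggestion, the option has to
be visible to the monitors (they are the ones creating the pool), so it
would sit in [global] or [mon] rather than [osd]. A sketch only, with
the same values as above:

[global]
osd_pool_default_crush_replicated_ruleset = 2
osd_pool_default_size = 2
osd_pool_default_min_size = 1

followed by a restart of the mons so they pick it up. If that
understanding is wrong, please correct me.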

If I run

# ceph osd pool create vmware1 64 64 replicated cold-storage-rule

I get:

pool 12 'vmware1' replicated size 3 min_size 2 crush_ruleset 1
object_hash rjenkins pg_num 64 pgp_num 64 last_change 2100 flags
hashpspool stripe_width 0
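As a workaround (not a fix), the ruleset can, as far as I know, be
changed after creation with something like:

# ceph osd pool set vmware1 crush_ruleset 2

but that of course does not explain why the rule name passed to
'ceph osd pool create' is ignored in the first place.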

The interesting part of my crushmap looks like this:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

root ssd-cache {
        id -5           # do not change unnecessarily
        # weight 1.704
        alg straw
        hash 0  # rjenkins1
        item cephosd1-ssd-cache weight 0.852
        item cephosd2-ssd-cache weight 0.852
}
root cold-storage {
        id -6           # do not change unnecessarily
        # weight 51.432
        alg straw
        hash 0  # rjenkins1
        item cephosd1-cold-storage weight 25.716
        item cephosd2-cold-storage weight 25.716
}

# rules
rule ssd-cache-rule {
        ruleset 1
        type replicated
        min_size 2
        max_size 10
        step take ssd-cache
        step chooseleaf firstn 0 type host
        step emit
}
rule cold-storage-rule {
        ruleset 2
        type replicated
        min_size 2
        max_size 10
        step take cold-storage
        step chooseleaf firstn 0 type host
        step emit
}
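
Just to rule out a name/ID mismatch on my side, I assume the compiled
rules can be checked with:

# ceph osd crush rule ls
# ceph osd crush rule dump cold-storage-rule

and that the dump should show ruleset 2 for cold-storage-rule, matching
the decompiled map above.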

-------------

I have no idea what's going wrong here.

I have already opened a bug tracker issue:

http://tracker.ceph.com/issues/16653

But unfortunately without much luck so far.

I really have no idea what to do now. I can't create pools and assign
the correct rulesets. Basically that means I would have to set
everything up again, and there is no guarantee that this will not
happen again.

So my only alternative would be to set up an additional Ceph cluster
for the other pools, which is not really an option.

I deeply appreciate any kind of idea...

Thank you !


-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, Hanau District Court
Managing Director: Oliver Dzombic

Tax No.: 35 236 3622 1
VAT ID: DE274086107


On 13.07.2016 at 08:18, Wido den Hollander wrote:
> 
>> On 12 July 2016 at 22:30, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
>>
>>
>> Hi,
>>
>> i have a crushmap which looks like:
>>
>> http://pastebin.com/YC9FdTUd
>>
>> I issue:
>>
>> # ceph osd pool create vmware1 64 cold-storage-rule
>> pool 'vmware1' created
>>
>> I would expect the pool to have ruleset 2.
>>
>> #ceph osd pool ls detail
>>
>> pool 10 'vmware1' replicated size 3 min_size 2 crush_ruleset 1
>> object_hash rjenkins pg_num 64 pgp_num 64 last_change 483 flags
>> hashpspool stripe_width 0
>>
>> but it has crush_ruleset 1.
>>
>>
>> Why ?
> 
> What happens if you set 'osd_pool_default_crush_replicated_ruleset' to 2 and try again?
> 
> Should be set in the [global] or [mon] section.
> 
> Wido
> 
>>
>> Thank you !
>>
>>
>> -- 
>> Mit freundlichen Gruessen / Best regards
>>
>> Oliver Dzombic
>> IP-Interactive
>>
>> mailto:info@xxxxxxxxxxxxxxxxx
>>
>> Address:
>>
>> IP Interactive UG ( haftungsbeschraenkt )
>> Zum Sonnenberg 1-3
>> 63571 Gelnhausen
>>
>> HRB 93402, Hanau District Court
>> Managing Director: Oliver Dzombic
>>
>> Tax No.: 35 236 3622 1
>> VAT ID: DE274086107
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



