Re: [ceph-users] erasure pool & crush ruleset

On 19/06/2014 18:33, Pavel V. Kaygorodov wrote:
> 
> This ruleset works well for replicated pools with size 6 (I have tested it on the data and metadata pools, which I cannot delete).
> Must an erasure pool with k=3 and m=3 have size 6?

Yes, the size of an erasure-coded pool must be k+m, i.e. 6 in this case. However, the placement algorithm is slightly different and, depending on the number of devices you actually have, it may fail where a replicated ruleset works.
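
For instance, once the pool exists you can verify this (a quick sketch, reusing the profile and pool names from your message below):

    ceph osd erasure-code-profile get def33   # shows k=3 m=3
    ceph osd pool get images size             # should report size: 6

Each object is split into k data chunks plus m coding chunks, and each chunk must go to a different OSD, hence size = k+m.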

> 
> Pavel.
> 
>> On 19/06/2014 18:17, Pavel V. Kaygorodov wrote:
>>> Hi!
>>>
>>> I want to make an erasure-coded pool with k=3 and m=3. I also want to distribute the data between two hosts, taking 3 OSDs from host1 and 3 from host2.
>>> I have created a ruleset:
>>>
>>> rule ruleset_3_3 {
>>>        ruleset 0
>>>        type replicated

You need:

type erasure

>>>        min_size 6
>>>        max_size 6
>>>        step take host1
>>>        step chooseleaf firstn 3 type osd
>>>        step emit
>>>        step take host2
>>>        step chooseleaf firstn 3 type osd
>>>        step emit
>>> }
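
For reference, here is a sketch of what the corrected rule could look like (the ruleset id 1 is only an example, pick a free id; the set_chooseleaf_tries step is what Ceph-generated erasure rules include to retry harder and is optional here). Note that rules generated by Ceph for erasure-coded pools use indep rather than firstn, so that when an OSD fails its replacement takes the same position in the mapping, which matters because each position holds a different chunk:

rule ruleset_3_3 {
       ruleset 1
       type erasure
       min_size 6
       max_size 6
       step set_chooseleaf_tries 5
       step take host1
       step chooseleaf indep 3 type osd
       step emit
       step take host2
       step chooseleaf indep 3 type osd
       step emit
}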

Cheers

>> Hi,
>>
>> I suggest you test the ruleset with crushtool to check that what comes out of it is what you expect. It's quite convenient to use multiples of 10 so you can visually match the result. For instance:
>>
>>    crushtool -o /tmp/t.map --num_osds 500 --build node straw 10 datacenter straw 10 root straw 0
>>
>> then decompile the map, add your ruleset to /tmp/t.txt, recompile and test it:
>>
>>    crushtool -d /tmp/t.map -o /tmp/t.txt
>>    crushtool -c /tmp/t.txt -o /tmp/t.map
>>    crushtool -i /tmp/t.map --show-bad-mappings --show-statistics --test --rule 1 --x 1 --num-rep 12
>>
>> This is the general idea and you can find details about this in the crushtool help and the test scripts at
>>
>>    https://github.com/ceph/ceph/tree/master/src/test/cli/crushtool
>>
>> for instance
>>
>>    https://github.com/ceph/ceph/blob/master/src/test/cli/crushtool/bad-mappings.t
>>
>> which shows what happens when there is a "bad mapping", i.e. the crushmap could not provide the number of OSDs you asked for. This is most probably why your PGs get stuck in creating.
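>>
>> A bad mapping line looks like this (illustrative values, not output from your map):
>>
>>    bad mapping rule 0 x 781 num_rep 6 result [1,8,15,22,30,2147483647]
>>
>> With an indep rule, a slot CRUSH could not fill shows up as 2147483647 (CRUSH_ITEM_NONE); with a firstn rule the result list is simply shorter than num_rep.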
>>
>> Cheers
>>
>>> I have created an erasure code profile:
>>>
>>> ceph osd erasure-code-profile set def33 k=3 m=3
>>>
>>> I have created a pool:
>>>
>>> ceph osd pool create images 2048 2048 erasure def33 ruleset_3_3
>>>
>>> Now I see 2048 pgs permanently in "creating" state.
>>>
>>> What is wrong?
>>>
>>> Pavel.
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>> -- 
>> Loïc Dachary, Artisan Logiciel Libre
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


