Re: Erasure code properties in OSDMap

OK, in chatting about this I've been convinced that it's legitimately
separate, because the CRUSH ruleset is mutable during the lifetime of
a pool but the EC settings are not.  I suppose the way we could
explain the logical separation for users is to say that the CRUSH
ruleset is mainly about location selection, whereas the EC settings
tell you about encoding within those locations.

Can we call this something more descriptive like "EC profile" to avoid
confusion?  "properties" is very generic.
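
For illustration, a profile-based interface might look something like
this (command names hypothetical, just to make the suggestion
concrete):

  ceph osd ec-profile set myprofile k=10 m=4
  ceph osd ec-profile get myprofile
  ceph osd ec-profile ls
  ceph osd pool create mypool <pgnum> <pgpnum> erasure myprofile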

Cheers,
John



On Wed, Mar 12, 2014 at 1:10 PM, Loic Dachary <loic@xxxxxxxxxxx> wrote:
> On 12/03/2014 13:39, John Spray wrote:
>> I am sure all of that will work, but it doesn't explain why these
>> properties must be stored and named separately from crush rulesets.  To
>> flesh this out one also needs "get" and "list" operations for the sets
>> of properties, which feels like overkill if there is an existing place
>> we could be storing these things.  The reason I'm taking such an
>> interest in what may seem like a minor point is that once this has
>> been added we will be stuck with it for some time, as external tools
>> will start depending on the interface.
>>
>> The ruleset-based approach doesn't have to be more complicated for CLI
>> users, we would essentially replace any "myproperties" above with a
>> ruleset name instead.
>>
>> osd pool create mypool <pgnum> <pgpnum> <ruleset>
>> osd set ruleset-properties <ruleset> <key>=<val> [<key>=<val>...]
>>
>> The simple default cases of "pool create mypool <pgnum> <pgpnum>
>> erasure" could be handled by making sure there exist default rulesets
>> called "erasure" and "replicated" rather than having these be magic
>> words to the commands that cause ruleset creation.  Rulesets currently
>> just have numbers instead of names, but it would be simpler to add
>> names to rulesets than to introduce a whole new type of object to the
>> interface.
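>>
>> To flesh that out, the corresponding inspection operations could also
>> hang off the ruleset rather than a new object type, e.g. (hypothetical
>> names, sketched for illustration):
>>
>> osd get ruleset-properties <ruleset>
>> osd ls ruleset-properties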
> Here are the default parameters
>
> OPTION(osd_pool_default_erasure_code_properties,
>        OPT_STR,
>        "erasure-code-plugin=jerasure "
>        "erasure-code-technique=reed_sol_van "
>        "erasure-code-k=4 "
>        "erasure-code-m=2 "
>        ) // default properties of osd pool create
>
> The k and m parameters have a clear relationship with the pool size,
> and they also define the minimum number of items the crush ruleset
> must be able to provide.  The other parameters relate to the
> encode/decode functions and are better understood in the context of
> the OSD than of crush.  This is why I don't see these properties as
> being exclusively linked to either the crush ruleset or the OSD: by
> introducing a new set of properties associated with the erasure code
> feature, there is no need to choose.
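>
> For example, with the defaults above (k=4, m=2) each object is split
> into 4 data chunks plus 2 coding chunks: the pool size is k+m = 6,
> the crush ruleset must be able to select 6 distinct OSDs, and the
> pool survives the loss of any 2 of them.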
>
> Does that make sense?
>>
>> John
>>
>> On Tue, Mar 11, 2014 at 2:03 PM, Loic Dachary
>> <loic.dachary@xxxxxxxxxxxxx> wrote:
>>> On 11/03/2014 13:21, John Spray wrote:
>>>> From a high level view, what is the logical difference between the
>>>> crush ruleset and the properties object?  I'm thinking about how this
>>>> is exposed to users and tools, and it seems like both would be defined
>>>> as "the settings about data placement and encoding".  I certainly
>>>> understand the separation internally, I am just concerned about making
>>>> the interface we expose upwards more complicated by adding a new type
>>>> of object.
>>>>
>>>> Is there really a need for a new type of properties object, instead of
>>>> storing these properties under the existing ruleset ID?
>>> These properties are used to configure the new feature introduced in Firefly: erasure coded pools. From a user's point of view the simplest would be to
>>>
>>> ceph osd pool create mypool erasure
>>>
>>> and rely on the fact that a default ruleset will be created using the default erasure code plugin with the default parameters.
>>>
>>> If the sysadmin wants to tweak the K+M factors (s)he could:
>>>
>>> ceph osd set properties myproperties k=10 m=4
>>>
>>> and then
>>>
>>> ceph osd pool create mypool erasure myproperties
>>>
>>> which would implicitly ask the default erasure code plugin to create a ruleset named "mypool-ruleset" after configuring it with myproperties.
>>>
>>> If the sysadmin wants to share rulesets between pools instead of relying on their implicit creation, (s)he could
>>>
>>> ceph osd create-erasure myruleset myproperties
>>>
>>> and then ceph osd set crush_ruleset as per usual. And if (s)he really wants fine tuning, manually adding the ruleset is also possible.
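>>>
>>> For that last case, the manual path would presumably be the usual
>>> crush map round-trip, something like:
>>>
>>> ceph osd getcrushmap -o crushmap.bin
>>> crushtool -d crushmap.bin -o crushmap.txt
>>> # edit crushmap.txt to add the erasure rule by hand
>>> crushtool -c crushmap.txt -o crushmap.new
>>> ceph osd setcrushmap -i crushmap.new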
>>>
>>> I feel comfortable explaining this, but I'm probably much too familiar with the subject to be a good judge of what makes sense to someone new ;-)
>>>
>>> Cheers
>>>
>>>> John
>>>>
>>>>
>>>> On Sun, Mar 9, 2014 at 12:13 PM, Loic Dachary
>>>> <loic.dachary@xxxxxxxxxxxxx> wrote:
>>>>> Hi Sage & Sam,
>>>>>
>>>>> I quickly sketched the replacement of the pg_pool_t::properties map with an OSDMap::properties list of maps at https://github.com/dachary/ceph/commit/fe3819a62eb139fc3f0fa4282b4d22aecd8cd398 and explained how I see it at http://tracker.ceph.com/issues/7662#note-2
>>>>>
>>>>> It indeed makes things simpler, more consistent and easier to explain. I can provide an implementation this week if this seems reasonable to you.
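>>>>>
>>>>> As a rough illustration of the shape of the change (a hypothetical
>>>>> sketch, see the commit above for the real thing):
>>>>>
>>>>> #include <map>
>>>>> #include <string>
>>>>>
>>>>> // today: each pool embeds its own key/value properties
>>>>> struct pg_pool_t {
>>>>>   std::map<std::string, std::string> properties;
>>>>> };
>>>>>
>>>>> // proposal: the OSDMap holds named property sets, and a pool
>>>>> // references one of them by name instead of embedding the map
>>>>> struct OSDMap {
>>>>>   std::map<std::string, std::map<std::string, std::string>> properties;
>>>>> };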
>>>>>
>>>>> Cheers
>>>>>
>>>>> --
>>>>> Loïc Dachary, Senior Developer
>>>>>
>>>
>>> --
>>> Loïc Dachary, Senior Developer
>>>
>
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>