Re: Dev Meeting followup on compression

Core standup would be best.  I'll send you an invite.
-Sam

On Tue, Feb 9, 2016 at 5:24 AM, Igor Fedotov <ifedotov@xxxxxxxxxxxx> wrote:
> Ilya,
>
> please find my comments inline.
>
>
> On 08.02.2016 22:23, Ilya Dryomov wrote:
>>
>> On Mon, Feb 8, 2016 at 5:34 PM, Igor Fedotov <ifedotov@xxxxxxxxxxxx>
>> wrote:
>>>
>>> Guys,
>>>
>>> let me summarize what we decided regarding compression support in Ceph
>>> during the Dev Meeting last week.
>>>
>>> Below are possible implementation options, their pros/cons and the
>>> conclusion.
>>>
>>> 1) Add compression support to RGW.
>>> Pros/Cons:
>>> + Simple
>>> + Reduced inter-component traffic
>>> - Limited to specific clients
>>> - Will conflict with partial read/writes if any appear
>>>
>>> Alyona Kiseleva from Mirantis (akyseleva@xxxxxxxxxxxx) will start
>>> implementing this promptly. You can ask her additional questions via
>>> e-mail or during the daily RGW standups she plans to attend regularly.
>>>
>>> 2) Add basic compression support to BlueStore. Basic = "append only"
>>> functionality only; a dedicated "append only" hint/flag needs to be
>>> introduced in the object creation interface.
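>>> To make the "append only" idea concrete, here is a minimal sketch of
>>> how such a creation hint could interact with per-chunk compression.
>>> All names here (FLAG_APPEND_ONLY, ObjectStore) are illustrative, not
>>> actual Ceph/BlueStore API; the point is only that an append-only
>>> guarantee lets each appended chunk be compressed independently,
>>> because no later overwrite can invalidate part of a chunk.

```python
import zlib

FLAG_APPEND_ONLY = 0x1  # hypothetical creation flag, for illustration only

class ObjectStore:
    """Toy store: objects are lists of chunks, compressed if append-only."""

    def __init__(self):
        self.objects = {}  # name -> list of chunks
        self.flags = {}    # name -> creation flags

    def create(self, name, flags=0):
        self.objects[name] = []
        self.flags[name] = flags

    def append(self, name, data):
        # Append-only objects compress each chunk independently; chunks
        # are never partially overwritten, so they never need rewriting.
        if self.flags[name] & FLAG_APPEND_ONLY:
            self.objects[name].append(zlib.compress(data))
        else:
            self.objects[name].append(data)

    def read(self, name):
        chunks = self.objects[name]
        if self.flags[name] & FLAG_APPEND_ONLY:
            return b"".join(zlib.decompress(c) for c in chunks)
        return b"".join(chunks)
```

>>> A partial overwrite would land in the middle of a compressed chunk
>>> and force a read-decompress-modify-recompress cycle, which is exactly
>>> why the basic variant restricts itself to append-only objects.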
>>>
>>> Pros/Cons:
>>> + Moderate complexity
>>> + Suits any client/PG backend
>>> + Good isolation from other Ceph components
>>> - Limited applicability
>>> - Additional 50-200% CPU load on the cluster, since each replica/EC
>>> shard is compressed independently
>>> - No inter-component traffic savings
>>> - Recovery requires a decompress/recompress sequence
>>
>> This is for EC pools only, right?  Can you elaborate on this bullet?
>
> That's for any pool type. When compression happens at the object store
> level but recovery is performed by the OSD (PGBackend instance), you
> have to retrieve an object replica from the store (and hence decompress
> it) and then compress it again when writing it to the new store.
> In contrast, doing compress/decompress in PGBackend avoids that
> decompress/compress overhead.
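> A tiny sketch of the difference (function names are mine, not Ceph's):
> if the store owns compression, recovery pays a full codec round-trip
> per object, whereas PGBackend-level compression can ship the already
> compressed bytes untouched.

```python
import zlib

def recover_store_level(src_blob):
    """Compression lives inside the object store: reading the replica
    from the source OSD decompresses it, and writing it to the
    destination OSD compresses it again."""
    logical = zlib.decompress(src_blob)  # read from source store
    return zlib.compress(logical)        # rewrite into destination store

def recover_pgbackend_level(src_blob):
    """Compression lives in PGBackend: the compressed bytes are opaque
    to the store, so recovery can copy them as-is with no codec work."""
    return src_blob
```

> Both paths end with an equivalent replica on the destination; only the
> store-level path burns CPU on decompress/recompress during recovery.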
>
>>> Mirantis (me specifically) will start on a blueprint/POC for this
>>> promptly. I'm planning to attend the daily RBD sync-up regularly to
>>> report on progress.
>>
>> Core standup and/or the new EC-overwrite meeting Sam is planning on
>> holding is probably a better place for this.
>
> Where can I find a schedule for these meetings?
>
>> Thanks,
>>
>>                  Ilya
>
> Regards,
> Igor
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>