Re: Fwd: S3 API Compatibility support


 



>> Yes, we can configure Ceph to use 2x replicas, which will look like
>> reduced redundancy, but AWS uses a separate low-cost RRS storage class
>> (instead of standard) for this purpose. I am checking whether we could
>> do the same in Ceph too.

>You can use multiple placement targets and can specify on bucket
>creation which placement target to use. At this time we don't support
>the exact S3 reduced redundancy fields, although it should be pretty
>easy to add.

OK. Could you please guide us on implementing the above?
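
To make sure we are on the same page, below is a rough sketch of what we
have in mind on the client side, using boto. Everything named here (the
endpoint, keys, bucket name and the placement-id "rrs-placement") is just a
placeholder; it assumes a placement target with that name is already defined
in the region/zone configuration and backed by a 2x-replica or erasure-coded
data pool, and that radosgw picks the placement target from the
LocationConstraint at bucket creation, as you describe:

    import boto
    import boto.s3.connection

    # Connect to the radosgw S3 endpoint (host and keys are placeholders).
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        port=7480,
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    # Create a bucket on a specific placement target.  ':rrs-placement' assumes
    # a placement target with that name exists in the region/zone config and is
    # backed by a reduced-redundancy (2x or erasure-coded) data pool.
    bucket = conn.create_bucket('logs-rrs', location=':rrs-placement')

If that is roughly the intended usage, what we would like guidance on is
where the S3 reduced-redundancy fields would hook into this.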

>>>What isn't currently supported is the ability to reduce the redundancy of
>>>individual objects in a bucket.  I don't think there is anything
>>>architecturally preventing that, but it is not implemented or supported.
>>
>> OK. Do we have the issue id for the above? Else, we can file one. Please advise.

>I think #8929 would cover it.

The above issue is for bucket lifecycle, but what we are asking about is
storing objects/buckets on a separate, RRS-like storage target. I think we
also need support for choosing, per object, which storage the object is
placed on via a storage-class option (defaulting to standard storage).
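
For reference, on AWS the per-object choice is expressed with the
x-amz-storage-class header on the PUT; with boto it is just a flag on the
upload call. The sketch below (the object name is only an example, and it
reuses conn/bucket from the sketch earlier in this mail) shows the field
that, per your note above, radosgw does not honour yet:

    # Upload an object with the S3 REDUCED_REDUNDANCY storage class.
    # boto sends the x-amz-storage-class: REDUCED_REDUNDANCY header for us;
    # today radosgw does not act on this field (per the note above).
    key = bucket.new_key('app/access.log.gz')
    key.set_contents_from_filename('access.log.gz', reduced_redundancy=True)

And the lifecycle behaviour described earlier in this thread (archive after
10 days, remove after 90 days) maps to the standard S3 lifecycle
configuration, roughly as sketched below; per #8929 radosgw does not
implement this yet, and the rule id/prefix are just examples:

    from boto.s3.lifecycle import Lifecycle, Transition, Expiration

    # "Archive after 10 days, remove after 90 days" for everything under logs/.
    lifecycle = Lifecycle()
    lifecycle.add_rule(
        id='archive-then-expire',
        prefix='logs/',
        status='Enabled',
        transition=Transition(days=10, storage_class='GLACIER'),
        expiration=Expiration(days=90),
    )
    bucket.configure_lifecycle(lifecycle)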


Thanks
Swami

On Fri, Sep 19, 2014 at 10:15 PM, Yehuda Sadeh <yehuda@xxxxxxxxxx> wrote:
> On Fri, Sep 19, 2014 at 8:32 AM, M Ranga Swami Reddy
> <swamireddy@xxxxxxxxx> wrote:
>> Hi Sage,
>> Thanks for quick reply.
>>
>>>Ceph doesn't interact at all with AWS services like Glacier, if that's
>>
>> No. I meant Ceph supporting Glacier- and RRS-like storage types
>> alongside the currently used OSD pools (standard storage).
>>
>>>what you mean.
>>>For RRS, though, I assume you mean the ability to create buckets with
>>>reduced redundancy with radosgw?  That is supported, although not quite
>>>the way AWS does it.  You can create different pools that back RGW
>>>buckets, and each bucket is stored in one of those pools.  So you could
>>>make one of them 2x instead of 3x, or use an erasure code of your choice.
>>
>> Yes, we can configure Ceph to use 2x replicas, which will look like
>> reduced redundancy, but AWS uses a separate low-cost RRS storage class
>> (instead of standard) for this purpose. I am checking whether we could
>> do the same in Ceph too.
>
> You can use multiple placement targets and can specify on bucket
> creation which placement target to use. At this time we don't support
> the exact S3 reduced redundancy fields, although it should be pretty
> easy to add.
>
>>
>>>What isn't currently supported is the ability to reduce the redundancy of
>>>individual objects in a bucket.  I don't think there is anything
>>>architecturally preventing that, but it is not implemented or supported.
>>
>> OK. Do we have the issue id for the above? Else, we can file one. Please advise.
>
> I think #8929 would cover it.
>
> Yehuda
>
>>
>>>When we look at the S3 archival features in more detail (soon!) I'm sure
>>>this will come up!  The current plan is to address object versioning
>>>first.  That is, unless a developer surfaces who wants to start hacking on
>>>this right away...
>>
>> Great to know this. We are also keen on S3 support in Ceph, and we are
>> happy to help here.
>>
>> Thanks
>> Swami
>>
>> On Fri, Sep 19, 2014 at 11:08 AM, Sage Weil <sweil@xxxxxxxxxx> wrote:
>>> On Fri, 19 Sep 2014, M Ranga Swami Reddy wrote:
>>>> Hi Sage,
>>>> Could you please advise whether Ceph supports low-cost object
>>>> storage (like Amazon Glacier or RRS) for archiving objects such as
>>>> log files?
>>>
>>> Ceph doesn't interact at all with AWS services like Glacier, if that's
>>> what you mean.
>>>
>>> For RRS, though, I assume you mean the ability to create buckets with
>>> reduced redundancy with radosgw?  That is supported, although not quite
>>> the way AWS does it.  You can create different pools that back RGW
>>> buckets, and each bucket is stored in one of those pools.  So you could
>>> make one of them 2x instead of 3x, or use an erasure code of your choice.
>>>
>>> What isn't currently supported is the ability to reduce the redundancy of
>>> individual objects in a bucket.  I don't think there is anything
>>> architecturally preventing that, but it is not implemented or supported.
>>>
>>> When we look at the S3 archival features in more detail (soon!) I'm sure
>>> this will come up!  The current plan is to address object versioning
>>> first.  That is, unless a developer surfaces who wants to start hacking on
>>> this right away...
>>>
>>> sage
>>>
>>>
>>>
>>>>
>>>> Thanks
>>>> Swami
>>>>
>>>> On Thu, Sep 18, 2014 at 6:20 PM, M Ranga Swami Reddy
>>>> <swamireddy@xxxxxxxxx> wrote:
>>>> > Hi,
>>>> >
>>>> > Could you please check and clarify the below questions on object
>>>> > lifecycle and notification S3 API support:
>>>> >
>>>> > 1. To support the bucket lifecycle - we need to support
>>>> > moving/deleting objects/buckets based on lifecycle settings.
>>>> > For example, if an object's lifecycle is set as below:
>>>> >           1. Archive it after 10 days - i.e. move this object to
>>>> > low-cost object storage 10 days after its creation date.
>>>> >           2. Remove this object after 90 days - i.e. remove this
>>>> > object from the low-cost storage 90 days after its creation date.
>>>> >
>>>> > Q1 - Does Ceph support the above concept, i.e. moving objects to
>>>> > low-cost storage and later deleting them from that storage?
>>>> >
>>>> > 2. To support object notifications:
>>>> >       - First there should be a low-cost, high-availability storage
>>>> > type with a single replica only. If an object is created on this
>>>> > type of storage, there is a chance it could be lost, so if an object
>>>> > on this type of storage is lost, a notification should be raised.
>>>> >
>>>> > Q2 - Does Ceph support such a low-cost, high-availability storage type?
>>>> >
>>>> > Thanks
>>>> >
>>>> > On Fri, Sep 12, 2014 at 8:00 PM, M Ranga Swami Reddy
>>>> > <swamireddy@xxxxxxxxx> wrote:
>>>> >> Hi Yehuda,
>>>> >>
>>>> >> Could you please check and clarify the below questions on object
>>>> >> lifecycle and notification S3 API support:
>>>> >>
>>>> >> 1. To support the bucket lifecycle - we need to support
>>>> >> moving/deleting objects/buckets based on lifecycle settings.
>>>> >> For example, if an object's lifecycle is set as below:
>>>> >>           1. Archive it after 10 days - i.e. move this object to
>>>> >> low-cost object storage 10 days after its creation date.
>>>> >>           2. Remove this object after 90 days - i.e. remove this
>>>> >> object from the low-cost storage 90 days after its creation date.
>>>> >>
>>>> >> Q1 - Does Ceph support the above concept, i.e. moving objects to
>>>> >> low-cost storage and later deleting them from that storage?
>>>> >>
>>>> >> 2. To support object notifications:
>>>> >>       - First there should be a low-cost, high-availability storage
>>>> >> type with a single replica only. If an object is created on this
>>>> >> type of storage, there is a chance it could be lost, so if an object
>>>> >> on this type of storage is lost, a notification should be raised.
>>>> >>
>>>> >> Q2 - Does Ceph support such a low-cost, high-availability storage type?
>>>> >>
>>>> >> Thanks
>>>> >> Swami
>>>> >>
>>>> >>
>>>> >>
>>>> >>
>>>> >>
>>>> >>
>>>> >>
>>>> >> On Tue, Jul 29, 2014 at 1:35 AM, Yehuda Sadeh <yehuda@xxxxxxxxxx> wrote:
>>>> >>> Bucket lifecycle:
>>>> >>> http://tracker.ceph.com/issues/8929
>>>> >>>
>>>> >>> Bucket notification:
>>>> >>> http://tracker.ceph.com/issues/8956
>>>> >>>
>>>> >>> On Sun, Jul 27, 2014 at 12:54 AM, M Ranga Swami Reddy
>>>> >>> <swamireddy@xxxxxxxxx> wrote:
>>>> >>>> Good to know the details. Can you please share the issue ID for bucket
>>>> >>>> lifecycle? My team could also start helping here.
>>>> >>>> Regarding the notification - do we have an issue ID for it?
>>>> >>>> Yes, object versioning will be a backlog item - I strongly feel we
>>>> >>>> should start working on this ASAP.
>>>> >>>>
>>>> >>>> Thanks
>>>> >>>> Swami
>>>> >>>>
>>>> >>>> On Fri, Jul 25, 2014 at 11:31 PM, Yehuda Sadeh <yehuda@xxxxxxxxxx> wrote:
>>>> >>>>> On Fri, Jul 25, 2014 at 10:14 AM, M Ranga Swami Reddy
>>>> >>>>> <swamireddy@xxxxxxxxx> wrote:
>>>> >>>>>> Thanks for the quick reply.
>>>> >>>>>> Yes, versioned objects are missing in Ceph ATM.
>>>> >>>>>> I am looking for bucket lifecycle (get/put/delete), bucket location,
>>>> >>>>>> put object notification and object restore (i.e. versioned objects)
>>>> >>>>>> S3 API support.
>>>> >>>>>> Please let me know if any of the above work is in progress or
>>>> >>>>>> someone has planned to work on it.
>>>> >>>>>
>>>> >>>>>
>>>> >>>>> I opened an issue for bucket lifecycle (we already had an issue open
>>>> >>>>> for object expiration, though). We do have bucket location already
>>>> >>>>> (part of the multi-region feature). Object versioning is definitely on
>>>> >>>>> our backlog and one that we'll hopefully implement sooner rather than
>>>> >>>>> later.
>>>> >>>>> With regard to object notification, it'll require having a
>>>> >>>>> notification service, which is a bit out of scope. Integrating the
>>>> >>>>> gateway with such a service wouldn't be hard, but we'll need to have
>>>> >>>>> that first.
>>>> >>>>>
>>>> >>>>> Yehuda
>>>> >>>>>
>>>> >>>>>>
>>>> >>>>>> Thanks
>>>> >>>>>> Swami
>>>> >>>>>>
>>>> >>>>>>
>>>> >>>>>> On Fri, Jul 25, 2014 at 9:19 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
>>>> >>>>>>> On Fri, 25 Jul 2014, M Ranga Swami Reddy wrote:
>>>> >>>>>>>> Hi Team: As per the Ceph documentation, a few S3 APIs are not supported.
>>>> >>>>>>>>
>>>> >>>>>>>> Link: http://ceph.com/docs/master/radosgw/s3/
>>>> >>>>>>>>
>>>> >>>>>>>> Is there a plan to support the unsupported items in the above table,
>>>> >>>>>>>> or is anyone working on this?
>>>> >>>>>>>
>>>> >>>>>>> Yes.  Unfortunately this table isn't particularly detailed or accurate or
>>>> >>>>>>> up to date.   The main gap, I think, is versioned objects.
>>>> >>>>>>>
>>>> >>>>>>> Are there specific parts of the S3 API that are missing that you need?
>>>> >>>>>>> That sort of info is very helpful for prioritizing effort...
>>>> >>>>>>>
>>>> >>>>>>> sage