Re: Bucket lifecycle (object expiration)

On Mon, Feb 9, 2015 at 7:33 AM, Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx> wrote:
>
>
> ----- Original Message -----
>> From: "Sage Weil" <sage@xxxxxxxxxxxx>
>> To: "Yehuda Sadeh-Weinraub" <yehuda@xxxxxxxxxx>
>> Cc: "Ceph Development" <ceph-devel@xxxxxxxxxxxxxxx>
>> Sent: Monday, February 9, 2015 3:42:40 AM
>> Subject: Re: Bucket lifecycle (object expiration)
>>
>> On Fri, 6 Feb 2015, Yehuda Sadeh-Weinraub wrote:
>> > I have been recently looking at implementing object expiration in rgw.
>> > First, a brief description of the feature:
>> >
>> > S3 provides mechanisms to expire objects, and/or to transition them into
>> > different storage class. The feature works at the bucket level. Rules can
>> > be set as to which objects will expire and/or be transitioned, and when.
>> > Objects are selected by prefix; the configuration is not per-object.
>> > Time is set in days (since object creation), and events are always
>> > rounded to the start of the next day.
>> > The rules can also work in conjunction with object versioning. When a
>> > versioned object (a current object) expires, a delete marker is created.
>> > Non-current versioned objects can be set to be removed after a specific
>> > amount of time since the point where they became non-current.
>> > As mentioned before, objects can be configured to transition to a different
>> > storage class (e.g., Amazon Glacier). It is possible to configure an
>> > object to be transitioned after a specific period, and after another
>> > period to be completely removed.
>> > When reading object information, the response will specify when the
>> > object is scheduled for removal. It is not yet clear to me whether an
>> > object can still be accessed after that time, or whether it appears as
>> > gone immediately (either when trying to access it, or when listing the
>> > bucket).
>> > Rules cannot intersect. Each object cannot be affected by more than one
>> > rule.
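>> >
>> > For concreteness, a minimal sketch of how one of these rules could be
>> > modeled (hypothetical names, not existing rgw types):
>> >
>> >     #include <string>
>> >
>> >     struct lc_rule {
>> >       std::string id;
>> >       std::string prefix;    // rule applies to objects with this prefix
>> >       bool enabled = false;
>> >       int expire_days = -1;  // days since creation, rounded to the
>> >                              // start of the next day; -1 = unset
>> >       int noncurrent_days = -1;  // versioned buckets: days since the
>> >                                  // version became non-current
>> >       std::string transition_storage_class;  // empty = no transition
>> >       int transition_days = -1;  // days since creation; -1 = unset
>> >     };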
>> >
>> > Swift provides a completely different object expiration system. In
>> > Swift the expiration is set per object, with an explicit time at which
>> > it will be removed.
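>> >
>> > (For reference, Swift does this with per-object headers supplied on
>> > PUT/POST: X-Delete-At with an absolute unix timestamp, or
>> > X-Delete-After with a number of seconds from now. There is no
>> > bucket-level rule machinery involved.)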
>> >
>> > In accordance with previous work, I'll currently focus on an S3
>> > implementation. We do not yet support object transition to a different
>> > storage class, so either we implement that first, or our first
>> > lifecycle implementation will not include that.
>>
>> While the tier transition seems like the most interesting part to me, it
>> shares some overlap with the current cache tiering ("move older objects to
>> an EC backend").  it also quickly snowballs (migrate object to a different
>> rados pool, from the same set you can pick when creating the bucket?
>> migrate to different zone/region?  migrate to an external service like
>> glacier?)
>>
>> Expiration sounds like a good first step...
>
> I agree. Thinking about migration, I don't think we're that far off.
> We'll add the ability to define new storage classes that will map to
> specified rados pools (or override existing placement targets --
> storage policies?). The manifest will need to reflect where the
> object's data is located. We'll be able to copy the object into
> different storage classes, and it'll require a new service thread to
> do the migration. We might need to update the bucket index about the
> new object location.
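>
> As a very rough sketch, the new pieces could look like this (all names
> hypothetical; nothing below exists today):
>
>     #include <cstdint>
>     #include <map>
>     #include <string>
>
>     // storage class name -> rados pool backing it
>     using storage_class_map = std::map<std::string, std::string>;
>
>     // the manifest records, per part, which pool the data lives in,
>     // so reads keep working after a transition moves the data
>     struct manifest_part {
>       std::string pool;
>       std::string oid;
>       uint64_t ofs = 0;
>       uint64_t len = 0;
>     };
>
>     // migration service thread, per bucket with transition rules:
>     //   1. list objects due for transition (see the index op below)
>     //   2. copy each object's data into the target class's pool
>     //   3. rewrite the manifest to point at the new location
>     //   4. update the bucket index entry if placement is recorded there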
>
>>
>> > 1. Lifecycle rules will be configured on the bucket instance info
>> >
>> > We hold the bucket instance info whenever we read an object, and it is
>> > cached. Since rules are configured to affect specific object prefixes, it
>> > will be quick and easy to determine whether an object is affected by any
>> > lifecycle rule.
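>> >
>> > A minimal sketch of that check, reusing the hypothetical lc_rule above
>> > (the real structures will differ):
>> >
>> >     #include <string>
>> >     #include <vector>
>> >
>> >     // runs against the cached bucket instance info on every request
>> >     bool lc_applies(const std::vector<lc_rule>& rules,
>> >                     const std::string& oname) {
>> >       for (const auto& rule : rules) {
>> >         if (rule.enabled &&
>> >             oname.compare(0, rule.prefix.size(), rule.prefix) == 0)
>> >           return true;  // rules cannot intersect, so first hit wins
>> >       }
>> >       return false;
>> >     }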
>> >
>> > 2. New bucket index objclass operation to list objects that need to be
>> > expired / transitioned
>> >
>> > The operation will get the existing rules as input, and will return the
>> > list of objects that need to be handled. The request will be paged.
>> > Note that the number of rules is constrained, so we only need to limit
>> > the number of returned entries.
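>> >
>> > Roughly, the op would page through index keys per prefix, along these
>> > lines (plain C++ standing in for the objclass environment; names are
>> > made up):
>> >
>> >     #include <cstddef>
>> >     #include <cstdint>
>> >     #include <map>
>> >     #include <string>
>> >     #include <vector>
>> >
>> >     // return one page of entries under 'prefix' whose mtime is older
>> >     // than 'expire_before'; returns the key to resume from, or ""
>> >     std::string lc_list_page(
>> >         const std::map<std::string, uint64_t>& index,  // name -> mtime
>> >         const std::string& prefix, const std::string& marker,
>> >         uint64_t expire_before, size_t max_entries,
>> >         std::vector<std::string>* out) {
>> >       auto it = index.lower_bound(marker.empty() ? prefix : marker);
>> >       for (; it != index.end(); ++it) {
>> >         if (it->first.compare(0, prefix.size(), prefix) != 0)
>> >           break;                      // walked past the prefix range
>> >         if (out->size() >= max_entries)
>> >           return it->first;           // page full; resume here
>> >         if (it->second < expire_before)
>> >           out->push_back(it->first);  // due for expiration
>> >       }
>> >       return "";
>> >     }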
>>
>> Will this be an O(n) scan over the index keys, or would we add some
>> time-based keys or something to make it faster?
>
> It will be a scan over index keys that match the prefixes, for each
> rule. Not sure if time-based keys can help in any way, as we'd need to
> intersect time and prefixes in order to make it worthwhile, but the
> prefixes are created willy-nilly.

It is at least restricted to buckets whose indexes use expiration, but
I suspect a lot of those rules are going to be based on the empty
prefix and will need to scan over the whole index. I'm not at all
clear on how expensive that is, but I'm thinking "very".
Unfortunately, the best way I can think of to avoid that is to insert
time-sorted expiration keys whenever we create an object, along with a
"rule version", and bump that version whenever we create a rule. Then
when we do an expiration scan we could avoid the full scan as long as
the bucket's rule version isn't out of date; but if it is, we'd need
to do a linear scan again. :/
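
Concretely, the keys could look something like this (entirely made up,
just to illustrate the shape):

    #include <cstdint>
    #include <cstdio>
    #include <string>

    // inserted into the bucket index omap at object-create time, once
    // per matching rule; the zero-padded epoch keeps keys time-sorted,
    // so the expiration scan becomes a cheap bounded range scan
    std::string lc_key(uint32_t rule_version, uint64_t expiry_epoch,
                       const std::string& oname) {
      char buf[32];
      snprintf(buf, sizeof(buf), "lc.%08x.%016llx.",
               rule_version, (unsigned long long)expiry_epoch);
      return buf + oname;
    }

    // scan: if the bucket's stored rule version still matches, range-
    // scan "lc.<ver>." keys up to now(); otherwise fall back to the
    // full linear scan (rewriting keys under the new version as we go)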
-Greg