Re: Policy based object tiering in RGW

Sure. I was thinking about whether this could be simplified using the
existing functionality in rados. But I agree: writing a better policy
engine and using the rados constructs to achieve the tiering would be
the ideal approach.
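
To make it concrete, the kind of user-facing policy I have in mind is an
S3 lifecycle transition rule. A rough boto3 sketch follows (the endpoint,
credentials, bucket name, and the "COLD" storage class / pool mapping are
only placeholders, not anything that exists today):

import boto3

# Client pointed at an RGW endpoint (endpoint and keys are placeholders).
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

# The sort of policy the RGW-side engine would have to transcode into
# tiering ops: after 30 days, move objects under "logs/" to a "COLD"
# storage class, i.e. a different (slower/cheaper) pool.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'tier-logs-to-cold',
                'Filter': {'Prefix': 'logs/'},
                'Status': 'Enabled',
                'Transitions': [
                    {'Days': 30, 'StorageClass': 'COLD'},
                ],
            },
        ],
    },
)

Whether such a rule then gets executed by rados tiering underneath or by
RGW moving the object itself is exactly the open question here.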

Varada

On Tue, Apr 3, 2018 at 9:38 AM, Matt Benjamin <mbenjami@xxxxxxxxxx> wrote:
> I find it strange to be arguing for worse is better, but
>
> On Mon, Apr 2, 2018 at 11:34 PM, Varada Kari (System Engineer)
> <varadaraja.kari@xxxxxxxxxxxx> wrote:
>> Yes, for internal data movement across pools. I am not too particular
>> about using the current implementation; if tiering V2 solves this
>> better, I will be interested in using it.
>> The current problem is transferring object/bucket lifecycle policies
>> to rados for moving the data around.
>
> The problem is simplified when RGW moves the data around within as
> well as across clusters.  As you note below...
>
>> I am not sure whether this needs a separate policy engine at the RGW
>> layer to transcode these policies into tiering ops that move the data
>> to a different pool.
>> We would also have to record that an object has been moved to a
>> different pool, and either bring it back on access or do a proxy read.
>> I am thinking mostly of object lifecycle management from the RGW side.
>>
>
> You want to support this anyway.
>
>>>
>>> Especially since you're discussing moving data across clusters, and
>>> RGW is already maintaining a number of indexes and things (e.g., head
>>> objects), I think it's probably best to have RGW maintain metadata
>>> about the "real" location of uploaded objects.
>>> -Greg
>>>
>> As one more policy on the object, we can have archival of the object
>> to a different cluster. Here I don't want to overload rados, but rather
>> use RGW cloud sync or multisite to sync this data to a different cluster.
>> Once we start integrating bucket/object policies with lifecycle
>> management and tiering, it will be interesting to explore how long an
>> object should stay in the same pool versus a different pool or a
>> different cluster.
>> Varada
>
>
>
> --
>
> Matt Benjamin
> Red Hat, Inc.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
>
> http://www.redhat.com/en/technologies/storage
>
> tel.  734-821-5101
> fax.  734-769-8938
> cel.  734-216-5309


