Re: RGW, future directions

On Tue, May 22, 2012 at 11:25 AM, Sławomir Skowron <szibis@xxxxxxxxx> wrote:
> On Tue, May 22, 2012 at 8:07 PM, Yehuda Sadeh <yehuda@xxxxxxxxxxx> wrote:
>> RGW is maturing. Besides looking at performance, which is closely tied
>> to RADOS performance, we'd like to hear whether there are certain pain
>> points or future directions that you (you as in the ceph community)
>> would like to see us taking.
>>
>> There are a few directions that we were thinking about:
>>
>> 1. Extend Object Storage API
>>
>> Swift and S3 have some features that we don't currently support. We can
>> certainly extend our functionality; however, is there any demand for
>> more features? E.g., self-destructing objects, static website hosting,
>> user logs, etc.
>
> More compatibility with S3 and Swift would be good.

Any specific functional interest?
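For reference, "self-destructing objects" is roughly Swift's
expiring-objects feature, which clients drive with the X-Delete-After /
X-Delete-At headers. A minimal sketch of the client side, assuming a
hypothetical rgw Swift endpoint and a made-up auth token (rgw does not
honor these headers today):

# Sketch: uploading a self-destructing object via the Swift API's
# expiring-objects headers.  The storage URL, token, container and object
# name are placeholders; rgw does not support these headers at this point.
import requests

STORAGE_URL = "http://gateway.example.com/swift/v1"  # hypothetical endpoint
AUTH_TOKEN = "AUTH_tk_example"                       # hypothetical token

resp = requests.put(
    f"{STORAGE_URL}/tmp-logs/session-123",
    data=b"temporary payload",
    headers={
        "X-Auth-Token": AUTH_TOKEN,
        # Delete the object 3600 seconds after upload; X-Delete-At takes an
        # absolute unix timestamp instead.
        "X-Delete-After": "3600",
    },
)
resp.raise_for_status()

On the S3 side the closest analogue would be bucket lifecycle/expiration
rules.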

>
>>
>> 2. Better OpenStack interoperability
>>
>> Keystone support? Other?
>>
>> 3. New features
>>
>> Some examples:
>>
>>  - multitenancy: api for domains and user management
>>  - snapshots
>>  - computation front end: upload object, then do some data
>> transformation/calculation.
>>  - simple key-value api
>>
>> 4. CDMI
>>
>> Sage brought up the CDMI support question to ceph-devel, and I don't
>> remember him getting any response. Is there any interest in CDMI?
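
For anyone not familiar with it, CDMI is SNIA's REST protocol for cloud
storage: objects and containers are addressed by path and exchanged as
JSON with CDMI-specific content types. A rough sketch of what a CDMI
object create looks like on the wire, against a purely hypothetical
endpoint (rgw has no CDMI front end):

# Sketch: creating a data object the CDMI way (SNIA CDMI 1.0-style request).
# The endpoint is hypothetical; this only illustrates the wire format a CDMI
# front end would have to speak.
import requests

resp = requests.put(
    "http://gateway.example.com/cdmi/mycontainer/hello.txt",
    json={"mimetype": "text/plain", "value": "hello cdmi"},
    headers={
        "X-CDMI-Specification-Version": "1.0.1",
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
    },
)
resp.raise_for_status()
print(resp.json().get("objectID"))  # CDMI replies with object metadata as JSON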
>>
>>
>> 5. Native apache/nginx module or embedded web server
>>
>> We still need to prove that the web server is a bottleneck, or poses
>> scaling issues. Writing a correct native nginx module would require
>> turning the rgw process model into an event-driven one, which is not
>> going to be easy.
>>
>
> An nginx module would be a nice thing.

It would be nice to have some concrete numbers showing where apache or
nginx with fastcgi is holding us back, and how a dedicated module would
improve that. As a rule of thumb it sounds like a no-brainer, but we
still want a better understanding of the situation before we dive into
such a project.
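
Even a crude client-side measurement would be a useful starting point for
those numbers; a minimal sketch (the gateway URL and object below are
placeholders) could compare the same web server serving a static file
against proxying to rgw over fastcgi:

# Sketch: crude GET throughput measurement against a running gateway.  The
# URL is a placeholder.  Running the same loop against a static file served
# by the same apache/nginx instance isolates the fastcgi/rgw overhead from
# the web server itself.
import time
import requests

URL = "http://gateway.example.com/some-bucket/some-object"  # placeholder
N = 1000

session = requests.Session()
start = time.monotonic()
for _ in range(N):
    r = session.get(URL)
    r.raise_for_status()
elapsed = time.monotonic() - start
print(f"{N} GETs in {elapsed:.2f}s -> {N / elapsed:.1f} req/s")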

>
>> 6. Improve garbage collection
>>
>> Currently rgw generates intent logs for garbage removal that require
>> running an external tool later, which is an administrative pain. We
>> can implement other solutions (OSD side garbage collection,
>> integrating cleanup process into the gateway, etc.) but we need to
>> understand the priority.
>
> crontab can handle this task for now, but under a big workload it would
> be better if it were integrated, like scrub, and tuned via the conf

Yeah. One of the original ideas was to leverage scrubbing for object
expiration (issue #1994). The discussion never converged, as the devil
is, as always, in the details. We can revive that discussion.
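
In the meantime, the cron approach can at least be wrapped so scheduling
and logging live in one place; a minimal sketch, assuming the intent-log
cleanup is driven by radosgw-admin (the exact subcommand and flags below
are an assumption and should be checked against your version):

# Sketch: periodic intent-log cleanup wrapper, meant to be run from cron.
# Assumes the cleanup is done with radosgw-admin; the subcommand and flags
# below ("temp remove --date=...") are an assumption and should be checked
# against the locally installed version.
import subprocess
import sys
from datetime import datetime, timedelta

# Only remove intents older than a day, to avoid racing in-flight operations.
cutoff = (datetime.utcnow() - timedelta(days=1)).strftime("%Y-%m-%d")
cmd = ["radosgw-admin", "temp", "remove", f"--date={cutoff}"]

result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    sys.stderr.write(f"intent-log cleanup failed: {result.stderr}\n")
    sys.exit(result.returncode)
print(f"intent-log cleanup through {cutoff} done")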

>
>>
>> 7. libradosgw
>>
>> We have had this in mind for some time now: creating a programming API
>> for rgw, not too different from librados and librbd. It will hopefully
>> make the code much cleaner. It will allow users to write different front
>> ends for the rgw backend, and it will make it easier for users to
>> write applications that interact with the backend, e.g., do processing
>> on objects that users uploaded, FUSE for rgw without S3 as an
>> intermediate, etc.
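
To make the idea concrete, here is a purely hypothetical sketch of what a
libradosgw-style binding could look like from an application's point of
view; none of these classes or methods exist, they only illustrate the
intended shape of the API:

# Purely hypothetical sketch of a libradosgw-style programming API, in the
# spirit of the librados/librbd bindings.  None of these classes or methods
# exist; they only illustrate what a front end or application built on such
# a library might call.

class RGWStore:  # hypothetical
    """Connection to the rgw backend, configured like librados from ceph.conf."""
    def __init__(self, conf_path):
        ...

    def open_bucket(self, owner, name):
        ...


class RGWBucket:  # hypothetical
    def put(self, key, data, meta=None):
        ...

    def get(self, key):
        ...

    def list(self, prefix=""):
        ...


# A FUSE front end or a post-upload processing hook could then talk to the
# backend directly, without going through S3 as an intermediate, e.g.:
#
#   store = RGWStore("/etc/ceph/ceph.conf")
#   bucket = store.open_bucket("alice", "photos")
#   for key in bucket.list(prefix="raw/"):
#       data = bucket.get(key)
#       ...transform and store a derived object...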
>>
>> 8. Administration tools improvement
>>
>> We can always do better there.
>>
>> 9. Other ideas?
>
> - I would like to see a feature that can do replication between
> clusters, for a start between 2 clusters. It's a very good feature when
> you have two datacenters: replication goes over a high-speed link, but
> the applications on top of the clusters do not need to handle this task
> themselves, and the data stays consistent.

This is a more generic ceph issue. We do need, however, to be able to
support multiple clusters in rgw. Opened issue #2460.

>
> - graceful upgrade

Can you elaborate on that? This has always been our intention; we've put
some mechanisms in place to help with it, although it is not always
possible.

>
> - reload the cluster config without restarting daemons - or maybe this
> already exists?
>

I opened issue #2459.

Thanks,
Yehuda

