Thanks Josh. Sharding the keys over many buckets does make sense, but then the question is: how many buckets? Every Amazon user has a default limit (1000) on the number of buckets they can create. Is there any good reason for restricting the number of buckets a user can create? Unfortunately, I couldn't find any documentation detailing the architecture of how the RADOS gateway provides the Amazon S3 abstraction.

Moreover, is there any good practice for naming keys? We are inclined to use a UUID as the key and distribute the keys across 256 buckets; a rough sketch of what we have in mind follows below. I am just concerned about any long-term repercussions of this design from the RADOS gateway's architectural perspective.
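To make that concrete, here is a minimal sketch of the sharding scheme (the "objs-" bucket-naming prefix, the bucket count, and the bucket_for() helper are purely illustrative, nothing RGW-specific):

# Sketch of the sharding scheme described above: map each UUID key to one
# of 256 pre-created buckets by hashing the key. The "objs-" prefix and
# the helper name are hypothetical, not part of any S3 or RGW API.
import hashlib
import uuid

NUM_BUCKETS = 256

def bucket_for(key: str) -> str:
    """Pick a bucket deterministically from the key's MD5 digest."""
    shard = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16) % NUM_BUCKETS
    return "objs-%03d" % shard

key = str(uuid.uuid4())
print(key, "->", bucket_for(key))

The mapping is deterministic, so any client can compute the bucket from the key alone without a lookup table.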
Aniket
On Mon, Sep 30, 2013 at 7:54 PM, Josh Durgin <josh.durgin@xxxxxxxxxxx> wrote:
You probably want to shard them over many buckets. See http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-March/000595.html

On 09/29/2013 07:34 PM, Aniket Nanhe wrote:
Hi,
We have a Ceph cluster set up and are trying to evaluate Ceph for its S3-compatible object storage. I came across this best practices document for Amazon S3, which goes over how naming keys in a particular way can improve the performance of object GET and PUT operations (http://aws.amazon.com/articles/1904/).
I wonder if this also applies to the object store in Ceph. I am also curious about the best strategy for organizing objects in buckets, i.e. whether it's a good idea to distribute objects across a predefined number of buckets (say, 256 or 1024) or whether it simply doesn't matter how many objects you put in a single bucket (i.e. just put all objects in one bucket). We have objects ranging in size from 50 KB to 10 MB.
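As I read it, the AWS article's advice amounts to prepending a short hash to each key so that keys don't all sort under one sequential prefix. A minimal sketch of that idea (illustrative only; whether it matters for Ceph's object store is exactly my question):

# Sketch of the key-naming idea from the AWS article linked above: prepend
# a few hex characters of a hash so keys spread across the keyspace instead
# of clustering under a sequential prefix. Whether this helps radosgw
# depends on how it indexes bucket contents, which I haven't found documented.
import hashlib

def randomized_key(name: str) -> str:
    prefix = hashlib.md5(name.encode("utf-8")).hexdigest()[:4]
    return "%s-%s" % (prefix, name)

print(randomized_key("2013-09-29/photo-000001.jpg"))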
Aniket M Nanhe