Re: Even more objects in a single bucket?

Hi Harry,

When dynamic resharding was introduced in Luminous, the limit on the number of bucket index shards was raised from 7877 to 65521. However, you're likely to run into bucket listing performance problems well before you reach 7877 shards, because every listing request has to read from every shard of the bucket in order to produce sorted results.
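As a rough illustration of where these shard counts come from: dynamic resharding aims to keep each shard near the rgw_max_objs_per_shard limit, which defaults to 100000 objects. The Python sketch below applies that rule of thumb; it is an approximation for sizing discussions, not the exact calculation RGW performs internally.

    # Rough sizing sketch, assuming the default rgw_max_objs_per_shard
    # of 100000 objects per shard. This approximates, but is not, the
    # exact formula RGW uses during dynamic resharding.
    RGW_MAX_OBJS_PER_SHARD = 100000  # default since Luminous
    MAX_SHARDS = 65521               # hard cap mentioned above

    def estimated_shards(num_objects):
        shards = -(-num_objects // RGW_MAX_OBJS_PER_SHARD)  # ceiling division
        return min(shards, MAX_SHARDS)

    print(estimated_shards(500 * 1000 * 1000))  # 500M objects -> 5000 shards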

If you can avoid listings entirely, indexless buckets are recommended. Otherwise, you can use our 'allow-unordered' extension to the S3 GET Bucket API [1], which is able to list one shard at a time for better scaling with shard count; a rough client-side sketch follows the links below. Note that there was a bug [2] affecting this extension that was fixed in v12.2.13, v13.2.6, and v14.2.2.

[1] http://docs.ceph.com/docs/luminous/radosgw/s3/bucketops/#get-bucket

[2] http://tracker.ceph.com/issues/39393
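For illustration, here is a minimal client-side sketch of passing that query parameter from boto3. The endpoint URL and bucket name are placeholders, and hooking the 'before-call' event to append a parameter boto3 doesn't know about is an assumed technique, not an official Ceph or boto3 recipe:

    # Minimal sketch: unordered bucket listing against RGW with boto3.
    # 'allow-unordered' is the RGW extension described in [1]. The
    # endpoint and bucket names are placeholders (assumptions), and the
    # event hook is one assumed way to add a non-AWS query parameter.
    import boto3

    s3 = boto3.client('s3', endpoint_url='http://rgw.example.com')

    def allow_unordered(params, **kwargs):
        # Append the RGW-specific parameter before the request is signed.
        sep = '&' if '?' in params['url'] else '?'
        params['url'] += sep + 'allow-unordered=true'

    s3.meta.events.register('before-call.s3.ListObjects', allow_unordered)

    resp = s3.list_objects(Bucket='mybucket')
    for obj in resp.get('Contents', []):
        print(obj['Key'])

Results come back in whatever order the shards return them, so any client logic that depends on lexically sorted keys has to be given up along with the ordering.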

On 6/17/19 11:00 AM, Harald Staub wrote:
There are customers asking for 500 million objects in a single object storage bucket (i.e. 5000 shards), and sometimes even more. But we found some places that say there is a limit on the number of shards per bucket, e.g.

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/object_gateway_guide_for_ubuntu/administration_cli

It says that the maximum number of shards is 7877. But I could not find this magic number (or any other limit) on http://docs.ceph.com.

Maybe this hard limit no longer applies to Nautilus? Maybe there is a recommended soft limit?

Background about the application: Veeam (veeam.com) is a backup solution for VMware that can embed a cloud storage tier with object storage (only with a single bucket). Just thinking out loud: maybe this could work with an indexless bucket. Not sure how manageable this would be, e.g. to monitor how much space is used. Maybe separate pools would be needed.

 Harry
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com