bucket_index_max_shards vs. no resharding in multisite? How to brace RADOS for huge buckets

Hello Ceph-Users,

I just switched from a single-site to a multi-site setup, with all sorts of bucket sizes and large differences in the number of stored objects.

Usually resharding is handled by RADOSGW automagically whenever a certain object count per shard is reached (100k by default).
The functionality is nicely documented at:

  https://docs.ceph.com/en/octopus/radosgw/dynamicresharding/
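
For completeness, these seem to be the knobs that control the dynamic behaviour (defaults as far as I can tell, please correct me if I got them wrong):

  # RGW section of ceph.conf (e.g. [client.rgw.<instance>]), defaults shown
  rgw_dynamic_resharding = true     # toggle for automatic resharding
  rgw_max_objs_per_shard = 100000   # the ~100k per-shard threshold mentioned above
  rgw_max_dynamic_shards = 1999     # upper bound for dynamically created shards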

The same page also mentions that dynamic resharding is NOT supported in multisite environments. Apparently there are efforts to implement multisite resharding for Ceph 17 (Quincy):

https://tracker.ceph.com/projects/rgw/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=cf_3&op%5Bcf_3%5D=%3D&v%5Bcf_3%5D%5B%5D=multisite-reshard&f%5B%5D=&c%5B%5D=project&c%5B%5D=tracker&c%5B%5D=status&c%5B%5D=priority&c%5B%5D=subject&c%5B%5D=assigned_to&c%5B%5D=updated_on&c%5B%5D=category&c%5B%5D=fixed_version&c%5B%5D=cf_3&group_by=&t%5B%5D=


But how should I, or how do you, handle ever-growing buckets in the meantime?

While a larger number of index shards for all (new) buckets might come to mind and would postpone the described issues with too many objects per shard for a while, this approach also has long-known issues and downsides:

 * http://cephnotes.ksperis.com/blog/2015/05/12/radosgw-big-index
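
In the meantime I at least keep an eye on how full the index shards get; as far as I can tell this is the easiest way (the bucket name is just an example):

  # per-bucket shard usage: num_objects, num_shards, objects_per_shard, fill_status
  radosgw-admin bucket limit check
  # raw numbers for a single bucket
  radosgw-admin bucket stats --bucket=my-big-bucket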

Looking at my zones I can see that the master zone (converted from the previous single-site setup) has

  bucket_index_max_shards=0

while the other, secondary zone has

  bucket_index_max_shards=11

Should I align this and use "11" as the default static number of shards for all new buckets then?
Maybe an even higher (prime) number just to be safe?
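
In case it matters for the answer: my understanding is that in a multisite setup this value is changed per zone in the zonegroup configuration and then committed with the period, roughly like this (and it would only affect newly created buckets):

  # dump the zonegroup, adjust bucket_index_max_shards for each zone, push it back
  radosgw-admin zonegroup get > zonegroup.json
  # ... edit "bucket_index_max_shards" in zonegroup.json ...
  radosgw-admin zonegroup set < zonegroup.json
  radosgw-admin period update --commit

(rgw_override_bucket_index_max_shards in ceph.conf seems to be the equivalent for non-multisite setups.)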



Regards

Christian

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



