Re: Living with huge bucket sizes

On Fri, Jun 9, 2017 at 2:21 AM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> Hi Bryan,
>
> On Fri, Jun 9, 2017 at 1:55 AM, Bryan Stillwell <bstillwell@xxxxxxxxxxx> wrote:
>> This has come up quite a few times before, but since I was only working with
>> RBD before I didn't pay too close attention to the conversation.  I'm
>> looking
>> for the best way to handle existing clusters that have buckets with a large
>> number of objects (>20 million) in them.  The cluster I'm testing on is
>> currently running hammer (0.94.10), so if things improved in jewel I would
>> love to hear about it!
>> ...
>> Has anyone found a good solution for this for existing large buckets?  I
>> know sharding is the solution going forward, but afaik it can't be done
>> on existing buckets yet (although the dynamic resharding work mentioned
>> on today's performance call sounds promising).
>
> I haven't tried it myself, but 0.94.10 should have the (offline)
> resharding feature. From the release notes:
>

Right. We did add automatic dynamic resharding to Luminous, but
offline resharding should be enough.
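
(For anyone who does move to Luminous: the dynamic behaviour is controlled by a couple of rgw options. The names below are the Luminous knobs as I recall them, and the values shown are illustrative, so double-check against your release before relying on them.)

```ini
# ceph.conf -- Luminous dynamic resharding knobs (values illustrative)
[global]
rgw_dynamic_resharding = true      ; enable automatic index resharding
rgw_max_objs_per_shard = 100000    ; target objects per index shard before a reshard is triggered
```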


>> * In RADOS Gateway, it is now possible to reshard an existing bucket's index
>> using an off-line tool.
>>
>> Usage:
>>
>> $ radosgw-admin bucket reshard --bucket=<bucket_name> --num_shards=<num_shards>
>>
>> This will create a new linked bucket instance that points to the newly created
>> index objects. The old bucket instance still exists and currently it's up to
>> the user to manually remove the old bucket index objects. (Note that bucket
>> resharding currently requires that all IO (especially writes) to the specific
>> bucket is quiesced.)

Once resharding is done, use the radosgw-admin bi purge command to
remove the old bucket index objects.
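
To make the offline flow concrete, a rough sketch (the bucket name and shard count are made up; the old bucket instance id comes from the reshard output or from radosgw-admin metadata get; and as the release note says, all client IO to the bucket must be quiesced first):

```shell
BUCKET=mybucket   # hypothetical bucket name

# Inspect the bucket first: current shard count, object count, instance id.
radosgw-admin bucket stats --bucket=$BUCKET

# Reshard the index. Rule of thumb: aim for roughly 100k objects per shard.
radosgw-admin bucket reshard --bucket=$BUCKET --num-shards=256

# Remove the now-orphaned old index objects, identified by the OLD
# bucket instance id (note: not the new one created by the reshard).
radosgw-admin bi purge --bucket=$BUCKET --bucket-id=<old_bucket_instance_id>
```

Verify with another bucket stats afterwards that the new shard count took effect before resuming writes.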

Yehuda

>
> -- Dan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


