Re: rgw bucket index manual copy

Can I make an existing bucket blind?

2016-09-22 4:23 GMT+03:00 Stas Starikevich <stas.starikevich@xxxxxxxxx>:
> Ben,
>
> Works fine as far as I see:
>
> [root@273aa9f2ee9f /]# s3cmd mb s3://test
> Bucket 's3://test/' created
>
> [root@273aa9f2ee9f /]# s3cmd put /etc/hosts s3://test
> upload: '/etc/hosts' -> 's3://test/hosts'  [1 of 1]
>  196 of 196   100% in    0s   404.87 B/s  done
>
> [root@273aa9f2ee9f /]# s3cmd ls s3://test
>
> [root@273aa9f2ee9f /]# ls -al /tmp/hosts
> ls: cannot access /tmp/hosts: No such file or directory
>
> [root@273aa9f2ee9f /]# s3cmd get s3://test/hosts /tmp/hosts
> download: 's3://test/hosts' -> '/tmp/hosts'  [1 of 1]
>  196 of 196   100% in    0s  2007.56 B/s  done
>
> [root@273aa9f2ee9f /]# cat /tmp/hosts
> 172.17.0.4 273aa9f2ee9f
>
> [root@ceph-mon01 ~]# radosgw-admin bucket rm --bucket=test --purge-objects
> [root@ceph-mon01 ~]#
>
> [root@273aa9f2ee9f /]# s3cmd ls
> [root@273aa9f2ee9f /]#
>
> >> If not, I imagine rados could be used to delete them manually by prefix.
> That would be a pain with more than a few million objects :)
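>
> For reference, something along these lines would do it (an untested
> sketch; the pool name is the Jewel default-zone data pool, and
> <bucket_marker> is a placeholder - the real marker is shown by
> 'radosgw-admin bucket stats'):
>
> rados -p default.rgw.buckets.data ls | grep "^<bucket_marker>_" | \
>     xargs -n 50 rados -p default.rgw.buckets.data rm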
>
> Stas
>
> On Sep 21, 2016, at 9:10 PM, Ben Hines <bhines@xxxxxxxxx> wrote:
>
> Thanks. Will try it out once we get on Jewel.
>
> Just curious, does bucket deletion with --purge-objects work via
> radosgw-admin with the no-index option?
> If not, I imagine rados could be used to delete them manually by prefix.
>
>
> On Sep 21, 2016 6:02 PM, "Stas Starikevich" <stas.starikevich@xxxxxxxxx>
> wrote:
>>
>> Hi Ben,
>>
>> Since 'Jewel', RadosGW supports blind buckets.
>> To enable the blind-bucket configuration I used:
>>
>> radosgw-admin zone get --rgw-zone=default > default-zone.json
>> # change index_type from 0 to 1
>> vi default-zone.json
>> radosgw-admin zone set --rgw-zone=default --infile default-zone.json
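>>
>> The vi step can be scripted, too; something like this sed should work,
>> assuming the zone JSON renders index_type as a plain integer:
>>
>> sed -i 's/"index_type": 0/"index_type": 1/' default-zone.json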
>>
>> To apply the changes you have to restart all the RGW daemons. All newly
>> created buckets will then have no index (bucket listing will return empty
>> output), but GET/PUT works perfectly.
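>>
>> For the restart, something like the following on each RGW host should do
>> (the exact systemd unit name varies between deployments):
>>
>> systemctl restart ceph-radosgw.target
>>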
>> In my tests there is no performance difference between SSD-backed indexes
>> and the 'blind bucket' configuration.
>>
>> Stas
>>
>> > On Sep 21, 2016, at 2:26 PM, Ben Hines <bhines@xxxxxxxxx> wrote:
>> >
>> > Nice, thanks! Must have missed that one. It might work well for our use
>> > case since we don't really need the index.
>> >
>> > -Ben
>> >
>> > On Wed, Sep 21, 2016 at 11:23 AM, Gregory Farnum <gfarnum@xxxxxxxxxx>
>> > wrote:
>> > On Wednesday, September 21, 2016, Ben Hines <bhines@xxxxxxxxx> wrote:
>> > Yes, 200 million is way too big for a single Ceph RGW bucket. We
>> > encountered this problem early on and sharded our data into 20 buckets,
>> > each of which has a sharded bucket index with 20 shards.
>> >
>> > Unfortunately, enabling the sharded RGW index requires recreating the
>> > bucket and all objects.
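>> >
>> > (For context: index sharding for new buckets is controlled from ceph.conf
>> > on the RGW nodes and only takes effect for buckets created afterwards;
>> > the section name below is a placeholder for your own RGW instance:)
>> >
>> > [client.rgw.gateway1]
>> > rgw_override_bucket_index_max_shards = 20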
>> >
>> > The fact that Ceph uses Ceph itself for the bucket indexes makes RGW
>> > less reliable in our experience. Instead of depending on one object,
>> > you're depending on two: the index and the object itself. If the cluster
>> > has any issues with the index, the fact that it blocks access to the
>> > object itself is very frustrating. If we could retrieve / put objects
>> > into RGW without hitting the index at all we would - we don't need to
>> > list our buckets.
>> >
>> > I don't know the details or which release it went into, but indexless
>> > buckets are now a thing -- check the release notes or search the lists! :)
>> > -Greg
>> >
>> >
>> >
>> > -Ben
>> >
>> > On Tue, Sep 20, 2016 at 1:57 AM, Wido den Hollander <wido@xxxxxxxx>
>> > wrote:
>> >
>> > > On 20 September 2016 at 10:55, Василий Ангапов
>> > > <angapov@xxxxxxxxx> wrote:
>> > >
>> > >
>> > > Hello,
>> > >
>> > > Is there any way to copy the rgw bucket index to another Ceph node to
>> > > lower the downtime of RGW? For now I have a huge bucket with 200
>> > > million files, and its backfilling blocks RGW completely for an hour
>> > > and a half, even with a 10G network.
>> > >
>> >
>> > No, not really. What you really want is the bucket sharding feature.
>> >
>> > So what you can do is enable sharding, create a NEW bucket, and copy
>> > over the objects.
>> >
>> > Afterwards you can remove the old bucket.
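>> >
>> > Roughly (an untested sketch; the bucket names are examples, and any S3
>> > copy tool will do in place of s3cmd):
>> >
>> > # with rgw_override_bucket_index_max_shards set and the RGWs restarted:
>> > s3cmd mb s3://mybucket-new
>> > s3cmd sync s3://mybucket-old s3://mybucket-new
>> > radosgw-admin bucket rm --bucket=mybucket-old --purge-objects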
>> >
>> > Wido
>> >
>> > > Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



