Re: rgw query bucket usage quickly

Thanks for the tip. It works now!

./s3curl.pl --id <theid> -- \
  'http://<thergw>/admin/bucket?bucket=<thebucket>&stats' | jq .usage
{
  "rgw.main": {
    "size_kb": 14754527122,
    "size_kb_actual": 14787626688,
    "num_objects": 16556819
  }
}
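
For anyone who wants to script this instead of shelling out to
s3curl.pl, here's a rough, untested Python sketch of the same
V2-signed GET against the admin ops API (host, keys and bucket name
below are placeholders; the admin user needs at least buckets=read caps):

#!/usr/bin/env python3
# Sketch of the SigV2-signed GET that s3curl performs against the
# RGW admin ops API. Host, keys and bucket are placeholders.
import base64, hashlib, hmac, json
from email.utils import formatdate
from urllib.request import Request, urlopen

access_key = 'THEID'        # admin user's access key (placeholder)
secret_key = 'THESECRET'    # admin user's secret key (placeholder)
host = 'thergw'             # RGW endpoint (placeholder)
bucket = 'thebucket'

date = formatdate(usegmt=True)
# With V2 auth, 'bucket' and 'stats' are not signed subresources,
# so only the bare path /admin/bucket goes into the string to sign.
string_to_sign = 'GET\n\n\n%s\n/admin/bucket' % date
sig = base64.b64encode(hmac.new(secret_key.encode(),
                                string_to_sign.encode(),
                                hashlib.sha1).digest()).decode()

req = Request('http://%s/admin/bucket?bucket=%s&stats' % (host, bucket),
              headers={'Date': date,
                       'Authorization': 'AWS %s:%s' % (access_key, sig)})
print(json.loads(urlopen(req).read())['usage'])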


-- Dan


On Fri, Jul 29, 2016 at 6:05 PM, Brian Felton <bjfelton@xxxxxxxxx> wrote:
> If s3curl has one major limitation, it's that you have to edit the script
> itself to get signatures to match.  Specifically, the @endpoints list on or
> around line 31 needs to be edited to include the endpoint you use to reach
> RGW (e.g. 192.168.100.100, my.ceph.cluster, etc.).  Once you add that, you
> should stop seeing the 403 responses from RGW.
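>
> For example, after the edit that part of s3curl.pl would look something
> like this (with your own RGW hostname or IP as the added entry):
>
>   my @endpoints = ( 's3.amazonaws.com',
>                     's3-us-west-1.amazonaws.com',
>                     'my.ceph.cluster' );    # <-- your RGW endpoint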
>
> Brian
>
> On Fri, Jul 29, 2016 at 5:14 AM, Dan van der Ster <dan@xxxxxxxxxxxxxx>
> wrote:
>>
>> On Fri, Jul 29, 2016 at 12:06 PM, Wido den Hollander <wido@xxxxxxxx>
>> wrote:
>> >
>> >> Op 29 juli 2016 om 11:59 schreef Dan van der Ster <dan@xxxxxxxxxxxxxx>:
>> >>
>> >>
>> >> Oh yes, that should help. BTW, which client are people using for the
>> >> Admin Ops API? Is there something better than s3curl.pl ...
>> >>
>> >
>> > I wrote my own client a while ago, but that's kind of buggy :)
>> >
>> > You might want to take a look at: https://github.com/dyarnell/rgwadmin
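>> >
>> > With that library, something like this should do it (untested; the
>> > method names are from the project README, so double-check them against
>> > the code, and the server and keys are placeholders):
>> >
>> >   from rgwadmin import RGWAdmin
>> >
>> >   # admin user with at least buckets=read caps
>> >   rgw = RGWAdmin(access_key='THEID', secret_key='THESECRET',
>> >                  server='thergw', secure=False)
>> >   print(rgw.get_bucket(bucket='thebucket', stats=True)['usage'])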
>> >
>>
>> Thanks, I'll have a look.
>> For some reason I'm too dense to get s3curl to work... always
>> SignatureDoesNotMatch.
>>
>> -- Dan
>>
>>
>> > Wido
>> >
>> >> -- Dan
>> >>
>> >>
>> >> On Thu, Jul 28, 2016 at 6:37 PM, Brian Andrus <bandrus@xxxxxxxxxx>
>> >> wrote:
>> >> > I'm not sure what mechanism is used, but perhaps the Admin Ops API
>> >> > could
>> >> > provide what you're looking for.
>> >> >
>> >> > http://docs.ceph.com/docs/master/radosgw/adminops/#get-usage
>> >> >
>> >> > I believe the usage log also needs to be enabled on the gateway.
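>> >> >
>> >> > If it isn't already, that's a ceph.conf setting on the gateway host,
>> >> > e.g. (the section name depends on how your radosgw instance is
>> >> > named), followed by a radosgw restart:
>> >> >
>> >> >   [client.radosgw.gateway]
>> >> >   rgw enable usage log = true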
>> >> >
>> >> > On Thu, Jul 28, 2016 at 12:19 PM, Sean Redmond
>> >> > <sean.redmond1@xxxxxxxxx>
>> >> > wrote:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> This seems pretty quick on a jewel cluster here, but I guess the key
>> >> >> question is: how large is large? Is it perhaps a large number of
>> >> >> smaller files that is slowing this down? Is the bucket index sharded /
>> >> >> on SSD?
>> >> >>
>> >> >> ====
>> >> >>
>> >> >> [root@korn ~]# time s3cmd du s3://seanbackup
>> >> >> 1656225129419 29 objects s3://seanbackup/
>> >> >>
>> >> >> real    0m0.314s
>> >> >> user    0m0.088s
>> >> >> sys     0m0.019s
>> >> >> [root@korn ~]#
>> >> >>
>> >> >>
>> >> >> On Thu, Jul 28, 2016 at 4:49 PM, Dan van der Ster
>> >> >> <dan@xxxxxxxxxxxxxx>
>> >> >> wrote:
>> >> >>>
>> >> >>> On Thu, Jul 28, 2016 at 5:33 PM, Abhishek Lekshmanan
>> >> >>> <abhishek@xxxxxxxx>
>> >> >>> wrote:
>> >> >>> >
>> >> >>> > Dan van der Ster writes:
>> >> >>> >
>> >> >>> >> Hi,
>> >> >>> >>
>> >> >>> >> Does anyone know a fast way for S3 users to query their total
>> >> >>> >> bucket
>> >> >>> >> usage? 's3cmd du' takes a long time on large buckets (is it
>> >> >>> >> iterating
>> >> >>> >> over all the objects?). 'radosgw-admin bucket stats' seems to
>> >> >>> >> know the
>> >> >>> >> bucket usage immediately, but I didn't find a way to expose that
>> >> >>> >> to
>> >> >>> >> end users.
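>> >> >>> >> (For reference, that's "radosgw-admin bucket stats
>> >> >>> >> --bucket=<thebucket>", run from a node with admin access to the
>> >> >>> >> cluster.)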
>> >> >>> >>
>> >> >>> >> Hoping this is an easy one for someone...
>> >> >>> >
>> >> >>> > If the swift api is enabled, 'swift stat' on the user account is
>> >> >>> > probably a quicker way.
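>> >> >>> >
>> >> >>> > For example, assuming a swift subuser and key have been created
>> >> >>> > for the account, something like:
>> >> >>> >
>> >> >>> >   swift -A http://<thergw>/auth/1.0 -U <uid>:swift -K '<key>' stat
>> >> >>> >
>> >> >>> > returns account-wide container/object/byte totals in one call.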
>> >> >>>
>> >> >>> This user wants to stay S3-only, since their app is built for
>> >> >>> compatibility with the big commercial cloud provider.
>> >> >>>
>> >> >>> Maybe s3cmd du is slow because the cluster is running hammer -- can
>> >> >>> any jewel users confirm it's still slow for large buckets on jewel?
>> >> >>>
>> >> >>> Cheers, Dan
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > Brian Andrus
>> >> > Red Hat, Inc.
>> >> > Storage Consultant, Global Storage Practice
>> >> > Mobile +1 (530) 903-8487
>> >> >
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


