Hi,
This seems pretty quick here on a Jewel cluster, but I guess the key question is: how large is large? Is it perhaps a large number of smaller files that is slowing this down? Is the bucket index sharded / on SSD?
====
[root@korn ~]# time s3cmd du s3://seanbackup
1656225129419 29 objects s3://seanbackup/
real 0m0.314s
user 0m0.088s
sys 0m0.019s
[root@korn ~]#
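As noted further down in the thread, 'radosgw-admin bucket stats' reads the bucket-index accounting and answers immediately, without iterating objects. A minimal sketch of pulling the totals out of its JSON output; the "usage"/"rgw.main" field names match typical radosgw-admin output, but the JSON below is a hypothetical, trimmed example rather than real output from this cluster:

```shell
# Hypothetical, trimmed sample of 'radosgw-admin bucket stats --bucket=NAME'
# output -- only the usage section is shown here.
cat > /tmp/bucket_stats.json <<'EOF'
{"usage": {"rgw.main": {"size_kb": 1617407353, "num_objects": 29}}}
EOF

# Extract the totals with python3 (jq would work just as well):
python3 - <<'EOF'
import json
stats = json.load(open("/tmp/bucket_stats.json"))
u = stats["usage"]["rgw.main"]
print(f'{u["size_kb"]} KB in {u["num_objects"]} objects')
EOF
```

The catch, as the original question says, is that this is an admin command, so it still needs something in front of it before end users can call it.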
On Thu, Jul 28, 2016 at 4:49 PM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
On Thu, Jul 28, 2016 at 5:33 PM, Abhishek Lekshmanan <abhishek@xxxxxxxx> wrote:
>
> Dan van der Ster writes:
>
>> Hi,
>>
>> Does anyone know a fast way for S3 users to query their total bucket
>> usage? 's3cmd du' takes a long time on large buckets (is it iterating
>> over all the objects?). 'radosgw-admin bucket stats' seems to know the
>> bucket usage immediately, but I didn't find a way to expose that to
>> end users.
>>
>> Hoping this is an easy one for someone...
>
> If the Swift API is enabled, 'swift stat' on the user account might
> be a quicker way.
This user wants to be S3-only, due to their app being compatible with
the big commercial cloud provider.
Maybe 's3cmd du' is slow because the cluster is running Hammer -- can
any Jewel users confirm it's still slow for large buckets on Jewel?
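For what it's worth, the slowness is consistent with 's3cmd du' doing the accounting client-side: paging through the full object listing and summing sizes, which is O(n) in the number of objects. A minimal sketch of that logic -- with a real S3 endpoint this would be a boto3 'list_objects_v2' paginator, but here hypothetical pre-built pages stand in so the example is self-contained:

```python
def bucket_du(pages):
    """Sum object sizes across ListObjects-style pages.

    Each page is a dict with a "Contents" list of {"Key", "Size"}
    entries, mirroring the S3 ListObjectsV2 response shape.
    """
    total_bytes = 0
    total_objects = 0
    for page in pages:
        for obj in page.get("Contents", []):
            total_bytes += obj["Size"]
            total_objects += 1
    return total_bytes, total_objects

# Hypothetical pages (real S3 returns at most 1000 keys per page,
# so a large bucket means many round trips before the sum is done):
pages = [
    {"Contents": [{"Key": "a", "Size": 100}, {"Key": "b", "Size": 250}]},
    {"Contents": [{"Key": "c", "Size": 650}]},
]
print(bucket_du(pages))  # -> (1000, 3)
```

That round-trip-per-1000-keys cost would explain why a bucket with many small objects is slow regardless of total size.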
Cheers, Dan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com