Re: Ceph Bucket strange issues rgw.none + id and marker diferent.

Eric,

Yes we do:

`time s3cmd ls s3://[BUCKET]/ --no-ssl` takes roughly 2 min 30 s to list the bucket.

If we immediately run the same query again, it usually times out.


Could you explain a bit more what you mean by:

"With respect to your earlier message in which you included the output of `ceph df`, I believe the reason that default.rgw.buckets.index shows as 0 bytes used is that the index uses the metadata branch of the object to store its data."

I read on IRC today that in the Nautilus release this is now accounted for correctly and no longer shows as 0 B. Is that correct?
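Once we upgrade, we would presumably check it with something like the following (assuming Nautilus keeps the same CLI; the per-OSD OMAP column in `ceph osd df` is what we would expect to see populated):

    ceph df detail
    ceph osd df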

Thanks for your response.


-----Original Message-----
From: J. Eric Ivancich <ivancich@xxxxxxxxxx> 
Sent: Wednesday, May 8, 2019 21:00
To: EDH - Manuel Rios Fernandez <mriosfer@xxxxxxxxxxxxxxxx>; 'Casey Bodley' <cbodley@xxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Ceph Bucket strange issues rgw.none + id and marker diferent.

Hi Manuel,

My response is interleaved.

On 5/7/19 7:32 PM, EDH - Manuel Rios Fernandez wrote:
> Hi Eric,
> 
> This looks like something the software developer must do, not something the storage provider must allow, no?

True -- so you're using `radosgw-admin bucket list --bucket=XYZ` to list the bucket? Currently that command does not accept an "--allow-unordered" flag, but there's no reason it couldn't. I'm working on the PR now, although it may take some time before it reaches v13.
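For reference, once that change lands, the invocation would presumably look something like this (the flag name is hypothetical until the PR merges):

    radosgw-admin bucket list --bucket=XYZ --allow-unordered

As with the S3-side unordered listing, the trade-off is that entries come back in no particular order, in exchange for skipping the ordered merge across index shards.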

> Strange behavior: sometimes the bucket lists quickly, in under 30 seconds, and other times it times out after 600 seconds. The bucket contains 875 folders with a total of about 6 million objects.
> 
> I don't know how a simple listing of 875 folders can time out after 
> 600 seconds.

Burkhard Linke's comment is on target. The "folders" are a trick using delimiters. A bucket is really entirely flat without a hierarchy.
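A quick way to see this with s3cmd, as in your example: a plain `ls` applies the '/' delimiter and synthesizes DIR entries from common key prefixes, while a recursive listing walks the flat keyspace directly.

    # delimited listing: "DIR" lines are synthesized from key prefixes
    s3cmd ls s3://[BUCKET]/
    # recursive listing: the actual flat object names, no hierarchy involved
    s3cmd ls --recursive s3://[BUCKET]/

Both forms consult the same bucket index, which is why the pseudo-folders don't make the listing cheap.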

> We bought several NVMe Optane cards so we can make 4 partitions on each PCIe card and get up to 1,000,000 IOPS for the index. Quite expensive, given that we estimate our index is only about 4 GB (100-200M objects); we are waiting for those cards. Any more ideas?
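If the goal is to dedicate those NVMe devices to the index pool, the usual route is a device-class CRUSH rule; a minimal sketch, assuming your OSDs report the `nvme` device class and you use the default pool name (the rule name is just an example):

    ceph osd crush rule create-replicated rgw-index-nvme default host nvme
    ceph osd pool set default.rgw.buckets.index crush_rule rgw-index-nvme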

With respect to your earlier message in which you included the output of `ceph df`, I believe the reason that default.rgw.buckets.index shows as
0 bytes used is that the index uses the metadata branch of the object to store its data.
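If you want to confirm that directly, the index data lives in omap key/value pairs attached to the index shard objects, and omap is not counted in that bytes-used figure. A rough check, assuming the default index pool name (the shard object name below is illustrative; real ones look like `.dir.<bucket-marker>.<shard>`):

    # list the index shard objects
    rados -p default.rgw.buckets.index ls | head
    # count omap entries (roughly one per object) in a single shard
    rados -p default.rgw.buckets.index listomapkeys .dir.<bucket-marker>.0 | wc -l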

> Regards

Eric

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



