Radosgw Timeout

Hello List,

Using the radosgw works fine, as long as the amount of data doesn't get 
too big.

I have created one bucket that holds many small files, separated into 
different "directories". But whenever I try to access the bucket, I 
just run into a timeout. The timeout occurs after around 30 - 100 
seconds, which is smaller than the Apache timeout of 300 seconds.

I've tried to access the bucket with different clients. One of them is 
s3cmd, which is still able to upload things, but takes a rather long 
time when listing the contents.
Then I tried s3fs-fuse, which throws:
ls: reading directory .: Input/output error

Cyberduck and S3Browser also show similar behavior.

Is there an option to only send back, say, 1000 list entries at a time, 
like Amazon does? That way the client could decide whether it wants to 
list all the contents.
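For reference, the S3 REST API already supports exactly this through the 
max-keys and marker query parameters: each ListBucket response is capped 
at max-keys entries (Amazon's default is 1000), and IsTruncated plus the 
last returned key tell the client how to request the next page. A minimal 
sketch of how a client would build such paged requests (the endpoint and 
bucket names are placeholders, not from my setup):

```python
# Sketch of building S3-style paged listing requests against a radosgw
# endpoint. "max-keys" limits the number of entries per response;
# "marker" says where the previous page left off.
from urllib.parse import urlencode

def list_request_url(endpoint, bucket, marker=None, max_keys=1000):
    """Build the GET URL for one page of a bucket listing."""
    params = {"max-keys": max_keys}
    if marker:
        # Continue after the last key of the previous page.
        params["marker"] = marker
    return "http://%s/%s?%s" % (endpoint, bucket, urlencode(params))
```

A client would loop, issuing one such request per page and passing the 
last key of each response as the marker for the next, until the response 
reports IsTruncated as false.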

Are there any timeout values in radosgw?

Any further thoughts on how I could improve the performance of these listings?


Kind Regards,
Georg

