Radosgw Timeout


 



On Thu, May 22, 2014 at 6:16 AM, Georg Höllrigl
<georg.hoellrigl at xidras.com> wrote:
> Hello List,
>
> Using the radosgw works fine, as long as the amount of data doesn't get too
> big.
>
> I have created one bucket that holds many small files, separated into
> different "directories". But whenever I try to access the bucket, I only run
> into some timeout. The timeout is at around 30 - 100 seconds. This is
> smaller than the Apache timeout of 300 seconds.
>
> I've tried to access the bucket with different clients. One is s3cmd,
> which is still able to upload, but takes a rather long time when
> listing the contents.
> Then I've tried s3fs-fuse, which throws:
> ls: reading directory .: Input/output error
>
> Also Cyberduck and S3Browser show similar behavior.
>
> Is there an option to only send back maybe 1000 list entries, like Amazon
> does? So that the client can decide whether it wants to list all the contents?


That's how it already works; it doesn't return more than 1000 entries at once.
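For illustration, here is a rough sketch of the S3-style listing protocol radosgw implements: the server returns at most max-keys entries per request plus a truncation flag, and the client pages through with a marker. The `fake_list_objects` function below is a stand-in that simulates one server response; it is not the actual radosgw API.

```python
# Simulated bucket contents: 2500 small keys under "directory" prefixes.
ALL_KEYS = [f"dir{i // 1000}/file{i:06d}" for i in range(2500)]

def fake_list_objects(marker="", max_keys=1000):
    """Simulate one ListObjects response: up to max_keys entries
    lexically after `marker`, plus an is_truncated flag."""
    start = 0
    if marker:
        start = ALL_KEYS.index(marker) + 1
    page = ALL_KEYS[start:start + max_keys]
    is_truncated = start + max_keys < len(ALL_KEYS)
    return page, is_truncated

def list_all_keys():
    """Client-side paging loop: keep requesting pages until the
    server says the listing is no longer truncated."""
    keys, marker, truncated = [], "", True
    while truncated:
        page, truncated = fake_list_objects(marker=marker, max_keys=1000)
        keys.extend(page)
        if page:
            marker = page[-1]  # next request resumes after the last key
    return keys

print(len(list_all_keys()))  # 2500 keys, fetched in three requests
```

The point is that a listing client never receives the whole bucket in one response; slow listings on large buckets come from many sequential round trips, not from a single oversized reply.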

>
> Are there any timeout values in radosgw?

Are you sure the timeout is in the gateway itself? It could be Apache
that is timing out. We'd need to see the Apache access logs for these
operations, plus the radosgw debug and messenger logs (debug rgw = 20,
debug ms = 1), to give a better answer.
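For reference, those debug settings would go in the gateway's section of ceph.conf before restarting it; the section name below is just a common example, and yours depends on how your radosgw instance is named.

```ini
; Hypothetical ceph.conf excerpt -- adjust the section name to match
; your own radosgw instance.
[client.radosgw.gateway]
debug rgw = 20
debug ms = 1
```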

Yehuda



