Re: Content-length error uploading "big" files to radosgw

On Thu, Dec 18, 2014 at 4:04 AM, Daniele Venzano <linux@xxxxxxxxxxxx> wrote:
> Hello,
>
> I have been trying to upload multi-gigabyte files to Ceph via the object
> gateway, using both the Swift and S3 APIs.
>
> With files up to about 2 GB, everything works as expected.
>
> With files bigger than that I get back a "400 Bad Request" error with
> both the S3 (boto) and Swift clients.
>
> With debugging enabled, I can see this:
> 2014-12-18 12:38:28.947499 7f5419ffb700 20 CONTENT_LENGTH=3072000000
> ...
> 2014-12-18 12:38:28.947539 7f5419ffb700  1 ====== starting new request
> req=0x7f541000fee0 =====
> 2014-12-18 12:38:28.947556 7f5419ffb700  2 req 2:0.000017::PUT
> /test/test::initializing
> 2014-12-18 12:38:28.947581 7f5419ffb700 10 bad content length, aborting
> 2014-12-18 12:38:28.947641 7f5419ffb700  2 req 2:0.000102::PUT
> /test/test::http status=400
> 2014-12-18 12:38:28.947644 7f5419ffb700  1 ====== req done
> req=0x7f541000fee0 http_status=400 ======
>
>
> The content length is correct (I created the test file with dd).
> With a file 2072000000 bytes long, I get no error.
>
> The gateway is running on Debian, using the packages from the Ceph
> repo, version 0.87-1~bpo70+1. I am using standard Apache (no
> 100-continue).
>
> Is there a limit on the object size? Or is there an error in my
> configuration somewhere?

You just stated it: you need 100-continue to upload parts larger than 2GB.
-Greg
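
(For reference, a minimal sketch of that workaround using boto's S3 multipart
upload: split the object into parts well under 2 GB so no single PUT exceeds
the limit. The endpoint, credentials, bucket name, and file path below are
placeholders, not values taken from this thread.)

import math
import os

import boto
import boto.s3.connection

# Connect to the radosgw S3 endpoint (host and keys are placeholders).
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='radosgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.get_bucket('test')
source_path = 'bigfile.bin'        # placeholder local file
part_size = 1024 * 1024 * 1024     # 1 GiB parts, safely below 2 GB

# Upload the file in parts and let radosgw stitch them together.
mp = bucket.initiate_multipart_upload('test')
try:
    total = os.path.getsize(source_path)
    parts = int(math.ceil(total / float(part_size)))
    with open(source_path, 'rb') as fp:
        for i in range(parts):
            fp.seek(i * part_size)
            mp.upload_part_from_file(fp, part_num=i + 1,
                                     size=min(part_size, total - i * part_size))
    mp.complete_upload()
except Exception:
    mp.cancel_upload()
    raise

(The Swift client has an analogous segmented-upload mode, e.g. the
"swift upload --segment-size" option, which likewise keeps each individual
request below the 2 GB mark.)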
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



