On 05/30/2013 07:47 PM, Gregory Farnum wrote:
On Thu, May 30, 2013 at 10:42 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
Hi,
I was checking the source code today and found this macro:
#define RGW_MAX_PUT_SIZE (5ULL*1024*1024*1024)
Why is that limit in place? Was that to mimic Amazon S3? (Which is at 5 TB
now.)
I know that an object size limit is something that should be there, but I'm
just trying to find the reasoning behind this limit.
Couldn't we make this configurable at least?
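For context, the macro's value works out to 5 GiB. A quick Python sketch of the arithmetic (the variable name just mirrors the C macro; nothing here is RGW code):

```python
# The value behind RGW_MAX_PUT_SIZE (5ULL*1024*1024*1024):
RGW_MAX_PUT_SIZE = 5 * 1024 * 1024 * 1024  # 5 GiB per single PUT

print(RGW_MAX_PUT_SIZE)          # 5368709120 bytes
print(RGW_MAX_PUT_SIZE / 2**30)  # 5.0 (GiB)
```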
That's the limit on an individual HTTP PUT operation. You can make
larger objects, but they need to be placed with multi-part uploads.
(I'm not sure what the actual limit is, if there is one.)
-Greg
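To illustrate the point: a client that wants to store an object larger than the per-PUT cap splits it into parts and uploads each part separately. A minimal sketch of that splitting logic, assuming a 5 GiB cap (the `plan_multipart` helper is hypothetical, not part of RGW or any client library):

```python
RGW_MAX_PUT_SIZE = 5 * 1024 * 1024 * 1024  # assumed 5 GiB cap per single PUT

def plan_multipart(object_size, part_size=RGW_MAX_PUT_SIZE):
    """Return (offset, length) ranges a client would upload as separate parts."""
    if object_size <= part_size:
        return [(0, object_size)]  # fits in one ordinary PUT
    parts = []
    for start in range(0, object_size, part_size):
        parts.append((start, min(part_size, object_size - start)))
    return parts

# A 12 GiB object needs three parts under a 5 GiB cap: 5 + 5 + 2.
print(len(plan_multipart(12 * 1024**3)))  # 3
```

Each part is then an individual PUT under the limit; the gateway assembles them into one logical object when the multipart upload completes.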
Heh? My client gave me an error today and I thought it was really
doing multipart uploads.
Checking again, I see that s3cmd doesn't do multipart. That's what confused me.
So there doesn't seem to be an object size limitation after all; you
can store objects as large as you like.
Software Engineer #42 @ http://inktank.com | http://ceph.com
--
Wido den Hollander
42on B.V.
Phone: +31 (0)20 700 9902
Skype: contact42on
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html