Hello Casey (and the ceph-users list),

I am returning to my older problem, to which you replied:

Casey Bodley wrote:
: There is a rgw_max_put_size which defaults to 5G, which limits the
: size of a single PUT request. But in that case, the http response
: would be 400 EntityTooLarge. For multipart uploads, there's also a
: rgw_multipart_part_upload_limit that defaults to 10000 parts, which
: would cause a 416 InvalidRange error. By default though, s3cmd does
: multipart uploads with 15MB parts, so your 11G object should only
: have ~750 parts.
:
: Are you able to upload smaller objects successfully? These
: InvalidRange errors can also result from failures to create any
: rados pools that didn't exist already. If that's what you're
: hitting, you'd get the same InvalidRange errors for smaller object
: uploads, and you'd also see messages like this in your radosgw log:
:
: > rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34)
: Numerical result out of range (this can be due to a pool or
: placement group misconfiguration, e.g. pg_num < pgp_num or
: mon_max_pg_per_osd exceeded)

You are right. Now, how do I find out which pool it is, and what the reason is?

Anyway, if I try to upload a CentOS 7 ISO image using the Perl module
Net::Amazon::S3, it works. I do something like this there:

my $bucket = $s3->add_bucket({
        bucket    => 'testbucket',
        acl_short => 'private',
});

$bucket->add_key_filename("testdir/$dst", $file,
        { content_type => 'application/octet-stream' })
        or die $s3->err . ': ' . $s3->errstr;

and I see the following in /var/log/ceph/ceph-client.rgw....log:

2019-05-10 15:55:28.394 7f4b859b8700 1 civetweb: 0x558108506000: 127.0.0.1 - - [10/May/2019:15:53:50 +0200] "PUT /testbucket/testdir/CentOS-7-x86_64-Everything-1810.iso HTTP/1.1" 200 234 - libwww-perl/6.38

I can see the uploaded object using "s3cmd ls", and I can download it back
using "s3cmd get", with a matching sha1sum.

When I do the same upload using "s3cmd put" instead of the Perl module,
I indeed get the pool create failure:

2019-05-10 15:53:14.914 7f4b859b8700 1 ====== starting new request req=0x7f4b859af850 =====
2019-05-10 15:53:15.492 7f4b859b8700 0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
2019-05-10 15:53:15.492 7f4b859b8700 1 ====== req done req=0x7f4b859af850 op status=-34 http_status=416 ======
2019-05-10 15:53:15.492 7f4b859b8700 1 civetweb: 0x558108506000: 127.0.0.1 - - [10/May/2019:15:53:14 +0200] "POST /testbucket/testdir/c7.iso?uploads HTTP/1.0" 416 469 - -

So maybe the Perl module is configured differently? But which pool or other
parameter is the problem? I have the following pools:

# ceph osd pool ls
one
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
default.rgw.buckets.index
default.rgw.buckets.data

(the "one" pool is unrelated to RadosGW; it contains OpenNebula RBD images).
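In case it helps, this is how I plan to narrow it down (just a sketch, and
I am only guessing that the failing pool is one that the zone placement
expects but which does not exist yet, possibly the data_extra_pool
(default.rgw.buckets.non-ec by default, I think) that multipart uploads
use, since I do not see it in the list above):

# ceph osd pool ls detail
(shows pg_num/pgp_num of the pools that already exist)

# radosgw-admin zone get
(shows the data_pool, index_pool and data_extra_pool names the default zone
will try to use, and to create on first use)

# ceph daemon mon.<mon-id> config show | grep -E 'mon_max_pg_per_osd|osd_pool_default_pg_num'
(run on a monitor host, with <mon-id> being that monitor's name; shows the
limits a newly created pool would have to fit under)

Is that a sensible way to find out which pool_create is failing, or is
there a more direct way?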
Thanks,

-Yenya

: On 3/7/19 12:21 PM, Jan Kasprzak wrote:
: > Hello, Ceph users,
: >
: > does radosgw have an upper limit of object size? I tried to upload
: > a 11GB file using s3cmd, but it failed with InvalidRange error:
: >
: > $ s3cmd put --verbose centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso s3://mybucket/
: > INFO: No cache file found, creating it.
: > INFO: Compiling list of local files...
: > INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time...
: > INFO: Summary: 1 local files to upload
: > WARNING: CentOS-7-x86_64-Everything-1810.iso: Owner username not known. Storing UID=108 instead.
: > WARNING: CentOS-7-x86_64-Everything-1810.iso: Owner groupname not known. Storing GID=108 instead.
: > ERROR: S3 error: 416 (InvalidRange)
: >
: > $ ls -lh centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso
: > -rw-r--r--. 1 108 108 11G Nov 26 15:28 centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso
: >
: > Thanks for any hint how to increase the limit.
: >
: > -Yenya

--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| http://www.fi.muni.cz/~kas/                        GPG: 4096R/A45477D5 |
    sir_clive> I hope you don't mind if I steal some of your ideas?
  laryross> As far as stealing... we call it sharing here.  --from rcgroups

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com