On Fri, Jan 30, 2015 at 8:22 AM, Axel Dunkel <ad@xxxxxxxxx> wrote:
> Hi,
>
> there are issues with radosgw and large file transfers without using
> multiparts (like with "s3cmd --disable-multipart put") which seem to be
> somehow known, but unsolved.
>
> Things run fine if the request takes no longer than 180 sec. If it takes
> longer, rgw_rest.cc (line 1236) gives the error "bad content length,
> aborting" AFTER the request has fully completed (so it is not a timeout
> issue). If the s3cmd run takes less than 180 sec, things go through fine;
> if it takes longer, the same command fails.
>
> This error is given when the Content-Length variable cannot be parsed -
> is it possible that some timer causes this variable to get corrupted? I
> did not find any 180 sec timer, though...
>
> This is with ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)
> on Ubuntu 14.04.

That would happen if the connection is shut down for some reason (either
the client-to-Apache TCP connection, or the Apache-module-to-radosgw
connection). This may very well be due to inactivity. At that point, rgw
will get some kind of EOF status and will verify that the total content
received is equal to the Content-Length field. If it is not equal, you
get that response.

Are you, by any chance, using mod_fcgid as the fastcgi module (as opposed
to mod_fastcgi or mod_proxy_fcgi)?

Yehuda
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
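[For illustration, the failure mode Yehuda describes amounts to roughly the following. This is a hypothetical Python sketch, not the actual rgw_rest.cc code: the gateway reads the body until EOF, then compares the bytes actually received against the declared Content-Length; a connection torn down before the full body arrives makes the counts differ and triggers the "bad content length, aborting" error.]

```python
import io

def read_body(stream, declared_content_length):
    """Read the request body until EOF, then verify the byte count
    matches the declared Content-Length header value."""
    received = 0
    while True:
        chunk = stream.read(4096)
        if not chunk:  # EOF: client or fastcgi connection was closed
            break
        received += len(chunk)
    if received != declared_content_length:
        # This mirrors the situation where rgw_rest.cc reports
        # "bad content length, aborting" after an early shutdown.
        raise ValueError("bad content length, aborting")
    return received

# A connection dropped after 5 bytes of a declared 10-byte body:
try:
    read_body(io.BytesIO(b"hello"), 10)
except ValueError as e:
    print(e)  # bad content length, aborting
```

[Note that under this model the error is reported only after EOF, which matches Axel's observation that the message appears after the transfer has apparently completed: the byte counter itself is fine, but the upstream connection was closed before or at the boundary, so the comparison fails.]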