Mark,

Thanks for the update. Just an FYI: I ran into an issue using the script when it turned out that the last part of the file was exactly 0 bytes in length. For example:

begin upload of root.img size 10737418240, 11 parts
upload part 1 size 1073741824
upload part 2 size 1073741824
upload part 3 size 1073741824
upload part 4 size 1073741824
upload part 5 size 1073741824
upload part 6 size 1073741824
upload part 7 size 1073741824
upload part 8 size 1073741824
upload part 9 size 1073741824
upload part 10 size 1073741824
upload part 11 size 0
Traceback (most recent call last):
  File "/home/smiley/Downloads/s3-big.py", line 44, in <module>
    part.upload_part_from_file(fp = fp, part_num = n, size = size)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/multipart.py", line 240, in upload_part_from_file
    query_args=query_args, size=size)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 946, in set_contents_from_file
    'fp is at EOF. Use rewind option or seek() to data start.')
AttributeError: fp is at EOF. Use rewind option or seek() to data start.

I was testing the following file:

-rw-r--r-- 1 root root 10737418240 Oct 29 10:36 root.img

Thanks again,

Shain

Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649
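A minimal sketch of one way to avoid the zero-byte final part (this is not the original s3-big.py, and the connection details, bucket name, and chunk size are illustrative placeholders): round the part count up instead of always adding one, and cap the last part at the bytes remaining. The try/except cancel follows Mark's suggestion below.

import math
import os

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='KEY',
    aws_secret_access_key='SECRET',
    host='radosgw.example.com',
    calling_format=boto.s3.connection.OrdinaryCallingFormat())
bucket = conn.get_bucket('mybucket')

fname = 'root.img'
chunk = 1024 * 1024 * 1024                   # 1 GiB per part
size = os.path.getsize(fname)                # assumes a non-empty file
parts = int(math.ceil(size / float(chunk)))  # 10 GiB / 1 GiB -> 10, not 11

print('begin upload of %s size %d, %d parts' % (fname, size, parts))
mp = bucket.initiate_multipart_upload(fname)
try:
    with open(fname, 'rb') as fp:
        for n in range(1, parts + 1):
            # Each part reads from fp's current position; the last part
            # gets only the bytes that remain, never zero.
            part_size = min(chunk, size - chunk * (n - 1))
            print('upload part %d size %d' % (n, part_size))
            mp.upload_part_from_file(fp=fp, part_num=n, size=part_size)
    mp.complete_upload()
except Exception:
    mp.cancel_upload()   # don't leave a partial upload holding space
    raise

For the 10737418240-byte file above this computes exactly 10 parts of 1073741824 bytes each, so upload_part_from_file is never called with a zero size at EOF.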
________________________________________
From: Mark Kirkwood [mark.kirkwood@xxxxxxxxxxxxxxx]
Sent: Wednesday, October 30, 2013 8:29 PM
To: derek@xxxxxxxxxxxxxx; Shain Miley; ceph-users@xxxxxxxx
Subject: Re: Radosgw and large files

Along those lines, you might want to use something similar to the attached to check for any failed/partial uploads that are taking up space (note that these cannot be gc'd away automatically). I just got caught by this.

In fact, the previous code I posted should probably use a try: ... except: block to cancel the upload if the program needs to abort - but it is still possible to get failed uploads for other reasons, so it is probably still useful to have something to find any!

Cheers

Mark

On 28/10/13 18:04, Mark Kirkwood wrote:
> I was looking at the same thing myself, and Boto seems to work ok
> (tested a 6G file - some sample code attached).
>
> Regards
>
> Mark
>
> On 27/10/13 11:46, Derek Yarnell wrote:
>> Hi Shain,
>>
>> Yes, we have tested and have working S3 multipart support for files
>> >5GB (RHEL64/0.67.4).
>>
>> However, crossftp does not appear to support multipart unless you
>> have the pro version. Dragondisk gives the error I have seen when
>> using a plain PUT rather than multipart, EntityTooLarge, so my guess
>> is that it is not doing multipart either. s3cmd's source shows that
>> it does support multipart; have you tried giving it --debug 2, which
>> I believe will output boto's debug output? That should tell you more
>> about whether it is correctly doing a multipart upload.
>>
>> Thanks,
>> derek
>>
>> On 10/26/13, 5:10 PM, Shain Miley wrote:
>>> Hi,
>>>
>>> I am wondering if anyone has successfully been able to upload files
>>> larger than 5GB using radosgw.
>>>
>>> I have tried using various clients, including dragondisk, crossftp,
>>> s3cmd, etc. All of them have failed with a 'permission denied'
>>> response.
>>>
>>> Each of the clients says it supports multipart (and it appears that
>>> they do, as files larger than 15MB get split into multiple pieces);
>>> however, I can only upload files smaller than 5GB up to this point.
>>>
>>> I am using 0.67.4 and Ubuntu 12.04.
>>>
>>> Thanks in advance,
>>>
>>> Shain
>>>
>>> Shain Miley | Manager of Systems and Infrastructure, Digital Media |
>>> smiley@xxxxxxx | 202.513.3649

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
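For reference, a minimal sketch of the kind of incomplete-upload check Mark describes above (his actual attachment is not shown here; the endpoint, credentials, and bucket name below are placeholders):

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='KEY',
    aws_secret_access_key='SECRET',
    host='radosgw.example.com',
    calling_format=boto.s3.connection.OrdinaryCallingFormat())

bucket = conn.get_bucket('mybucket')
# List multipart uploads that were started but never completed; their
# parts keep using space, and radosgw will not gc them automatically.
for mp in bucket.get_all_multipart_uploads():
    print('incomplete upload: key=%s id=%s initiated=%s'
          % (mp.key_name, mp.id, mp.initiated))
    # Uncomment to abort the upload and reclaim the space:
    # mp.cancel_upload()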