Re: radosgw (0.87) and multipart upload (result object size = 0)

On Tue, Jan 20, 2015 at 5:15 PM, Gleb Borisov <borisov.gleb@xxxxxxxxx> wrote:
> Hi,
>
> We're experiencing some issues with our radosgw setup. Today we tried to
> copy a bunch of objects between two separate clusters (using our own tool
> built on top of the Java S3 API).
>
> Everything went smoothly until we started copying large objects (200 GB+).
> We can see that our code handles this case correctly: it started a multipart
> upload (s3.initiateMultipartUpload), then uploaded all the parts serially
> (s3.uploadPart), and finally completed the upload (s3.completeMultipartUpload).
>
> When we checked the consistency of the two clusters, we found a lot of
> zero-sized objects (which turn out to be our large objects).
>
> I've captured a more verbose log from radosgw:
>
> Two requests (put_obj, complete_multipart):
> https://gist.github.com/anonymous/840e0aee5a7ce0326368 (both finished with
> 200)
>
> radosgw-admin object stat output:
> https://gist.github.com/anonymous/2b6771bbbad3021364e2
>
> We've tried to upload these objects several times without any luck.
>
> # radosgw --version
> ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)
>
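For reference, here is a minimal sketch of the multipart-upload sequence described above, using the AWS SDK for Java v1. The bucket, key, file name, and part size are placeholders, not taken from the original tool:

    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.*;

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class MultipartCopySketch {
        public static void main(String[] args) {
            AmazonS3Client s3 = new AmazonS3Client();          // endpoint/credentials omitted
            String bucket = "dst-bucket", key = "big-object";  // placeholders
            File file = new File("big-object.bin");            // placeholder source file
            long partSize = 64L * 1024 * 1024;                 // 64 MB parts (assumption)

            // 1. initiateMultipartUpload
            InitiateMultipartUploadResult init =
                s3.initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key));

            // 2. uploadPart, serially, collecting the returned part ETags
            List<PartETag> etags = new ArrayList<PartETag>();
            long offset = 0;
            for (int part = 1; offset < file.length(); part++) {
                long size = Math.min(partSize, file.length() - offset);
                UploadPartResult res = s3.uploadPart(new UploadPartRequest()
                    .withBucketName(bucket).withKey(key)
                    .withUploadId(init.getUploadId())
                    .withPartNumber(part)
                    .withFile(file).withFileOffset(offset)
                    .withPartSize(size));
                etags.add(res.getPartETag());
                offset += size;
            }

            // 3. completeMultipartUpload with the collected ETags
            s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
                bucket, key, init.getUploadId(), etags));
        }
    }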

It's hard to say much from these specific logs. It would help if you could
provide additional logging that includes the HTTP headers of the requests,
and also add 'debug ms = 1'.
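
For example, a ceph.conf sketch of that kind of debug setup (the section
name and log path are assumptions for a typical gateway instance; adjust
them to match yours):

    [client.radosgw.gateway]
        debug rgw = 20
        debug ms = 1
        log file = /var/log/ceph/client.radosgw.gateway.log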

Thanks,
Yehuda
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
